On Mathstodon, Robin Houston pointed out a video where Oded Margalit claimed that it's an open problem why this integral:

$$\int_0^\infty \cos(2x) \prod_{n=1}^\infty \cos\left(\frac{x}{n}\right)\, dx$$

is so absurdly close to $\frac{\pi}{8}$, but not quite equal.

They agree to 41 decimal places, but they're not the same! In fact

$$\int_0^\infty \cos(2x) \prod_{n=1}^\infty \cos\left(\frac{x}{n}\right)\, dx < \frac{\pi}{8}$$

with the difference showing up only after the 41st decimal place.
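(Here is a quick numerical sanity check, truncating the infinite product at 2000 factors and the integral at $x = 12$. This can only confirm the agreement to a few digits, nowhere near 41, but it shows the number is real.)

```python
import numpy as np

# Sanity check: integrate cos(2x) * prod_{n=1}^{2000} cos(x/n) by the
# trapezoid rule on [0, 12].  The integrand decays roughly like a Gaussian,
# so the cutoff and the truncation of the product cost only ~1e-4 accuracy.
x = np.linspace(0.0, 12.0, 20001)
integrand = np.cos(2 * x)
for n in range(1, 2001):
    integrand *= np.cos(x / n)

dx = x[1] - x[0]
integral = dx * (integrand[0] / 2 + integrand[1:-1].sum() + integrand[-1] / 2)
print(integral, np.pi / 8)   # both about 0.3927
```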
So, a bunch of us tried to figure out what was going on.

Jaded nonmathematicians told us it’s just a coincidence, so what is there to explain? But of course an agreement this close is unlikely to be “just a coincidence”. It might be, but you’ll never get anywhere in math with that attitude.

We were reminded of the famous cosine Borwein integral

$$\int_0^\infty 2\cos(x) \prod_{n=0}^N \mathrm{sinc}\left(\frac{x}{2n+1}\right)\, dx$$

which equals $\frac{\pi}{2}$ for $N$ up to and including 55, but not for any larger $N$. (Here $\mathrm{sinc}\, x = \frac{\sin x}{x}$.)

But it was Sean O who really cracked the case, by showing that the integral we were struggling with could actually be reduced to an $N = \infty$ version of the cosine Borwein integral, namely

$$\int_0^\infty 2\cos(x) \prod_{n=0}^\infty \mathrm{sinc}\left(\frac{x}{2n+1}\right)\, dx$$

The point is this. A little calculation using the Weierstrass factorizations

$$\cos x = \prod_{k=1}^\infty \left(1 - \frac{4x^2}{\pi^2 (2k-1)^2}\right), \qquad \mathrm{sinc}\, x = \prod_{k=1}^\infty \left(1 - \frac{x^2}{\pi^2 k^2}\right)$$

lets you show

$$\prod_{n=1}^\infty \cos\left(\frac{x}{n}\right) = \prod_{n=0}^\infty \mathrm{sinc}\left(\frac{2x}{2n+1}\right)$$

and thus

$$\int_0^\infty \cos(2x) \prod_{n=1}^\infty \cos\left(\frac{x}{n}\right)\, dx = \int_0^\infty \cos(2x) \prod_{n=0}^\infty \mathrm{sinc}\left(\frac{2x}{2n+1}\right)\, dx$$

Then, a change of variables on the right-hand side gives

$$\int_0^\infty \cos(2x) \prod_{n=1}^\infty \cos\left(\frac{x}{n}\right)\, dx = \frac{1}{4} \int_0^\infty 2\cos(x) \prod_{n=0}^\infty \mathrm{sinc}\left(\frac{x}{2n+1}\right)\, dx$$

So, showing that

$$\int_0^\infty \cos(2x) \prod_{n=1}^\infty \cos\left(\frac{x}{n}\right)\, dx$$

is microscopically less than $\frac{\pi}{8}$ is equivalent to showing that

$$\int_0^\infty 2\cos(x) \prod_{n=0}^\infty \mathrm{sinc}\left(\frac{x}{2n+1}\right)\, dx$$

is microscopically less than $\frac{\pi}{2}$.
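The product identity behind this reduction is easy to sanity-check numerically. Here is a quick sketch, truncating both infinite products at a million factors, which matches the two sides to roughly $10^{-6}$ for moderate $x$:

```python
import numpy as np

# Check  prod_{n>=1} cos(x/n)  =  prod_{n>=0} sinc(2x/(2n+1))  numerically.
# Note np.sinc(t) = sin(pi t)/(pi t), so the mathematician's
# sinc(y) = sin(y)/y is np.sinc(y/pi).
N = 10**6

def cos_prod(x):
    n = np.arange(1, N + 1)
    return np.prod(np.cos(x / n))

def sinc_prod(x):
    n = np.arange(N)
    return np.prod(np.sinc(2 * x / ((2 * n + 1) * np.pi)))

for x in (0.5, 1.0, 2.0):
    print(x, cos_prod(x), sinc_prod(x))
```

The two columns agree to about six digits; the remaining discrepancy is just the truncation of the two products.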

This sets up a clear strategy for solving the mystery! People understand why the cosine Borwein integral

$$\int_0^\infty 2\cos(x) \prod_{n=0}^N \mathrm{sinc}\left(\frac{x}{2n+1}\right)\, dx$$

equals $\frac{\pi}{2}$ for $N$ up to 55, and then drops ever so slightly below $\frac{\pi}{2}$. The mechanism is clear once you watch the right sort of movie. It's very visual. Greg Egan explains it here with an animation, based on ideas by Hanspeter Schmid:

• John Baez, Patterns that eventually fail, *Azimuth*, September 20, 2018.

Or you can watch this video, which covers a simpler but related example:

• 3Blue1Brown, Researchers thought this was a bug (Borwein integrals).
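You can also poke at the mechanism numerically. A sketch, assuming the standard Fourier-duality dictionary: the cosine Borwein integral equals $2\pi f_N(1)$, where $f_N$ is the convolution of the uniform probability densities on $\left[-\frac{1}{2n+1}, \frac{1}{2n+1}\right]$ for $n = 0, \dots, N$. Building $f_4$ on a grid:

```python
import numpy as np

# The cosine Borwein integral equals 2*pi*f_N(1), where f_N is the
# convolution of uniform densities on [-1/(2n+1), 1/(2n+1)], n = 0..N.
# Build f_4 on a grid and check 2*pi*f_4(1) lands right on pi/2.
M = 945                       # grid steps per unit; 945 = 3*5*7*9 aligns all edges
h = 1.0 / M

def rect_density(a):
    """Trapezoid samples of the uniform density on [-a, a] (edges at half height)."""
    m = int(round(a * M))
    f = np.full(2 * m + 1, 1.0 / (2 * a))
    f[0] *= 0.5
    f[-1] *= 0.5
    return f

f = rect_density(1.0)
for n in range(1, 5):         # convolve in the densities of half-width 1/3, ..., 1/9
    f = np.convolve(f, rect_density(1.0 / (2 * n + 1))) * h

center = (len(f) - 1) // 2    # index of x = 0
value = 2 * np.pi * f[center + M]   # 2*pi*f_4(1): very close to pi/2
print(value, np.pi / 2)
```

For $N \le 55$ the point $x = 1$ sits in a region where the convolution is still "flat", so this value is exactly $\pi/2$; the movie shows how that flatness eventually erodes.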

So, we just need to show that as $N \to \infty$ the value of the cosine Borwein integral doesn't drop much more! It drops by just a tiny amount: exactly four times the difference between $\frac{\pi}{8}$ and our original integral, so less than $4 \times 10^{-41}$.

Alas, this doesn't seem easy to show. At least I don't know how to do it yet. But what had seemed an utter mystery has now become a chore in analysis: estimating how much

$$\int_0^\infty 2\cos(x) \prod_{n=0}^N \mathrm{sinc}\left(\frac{x}{2n+1}\right)\, dx$$

drops each time you increase $N$ a bit.

At this point if you’re sufficiently erudite you are probably screaming: *“BUT THIS IS ALL WELL-KNOWN!”*

And you’re right! We had a lot of fun discovering this stuff, but it was not new. When I was posting about it on MathOverflow, I ran into an article that mentions a discussion of this stuff:

• Eric W. Weisstein, Infinite cosine product integral, from MathWorld—A Wolfram Web Resource.

and it turns out Borwein and his friends had already studied it. There’s a little bit here:

• J. M. Borwein, D. H. Bailey, V. Kapoor and E. W. Weisstein, Ten problems in experimental mathematics, *Amer. Math. Monthly* **113** (2006), 481–509.

and a lot more in this book:

• J. M. Borwein, D. H. Bailey and R. Girgensohn, *Experimentation in Mathematics: Computational Paths to Discovery*, Wellesley, Massachusetts, A K Peters, 2004.

In fact the integral

$$\int_0^\infty \cos(2x) \prod_{n=1}^\infty \cos\left(\frac{x}{n}\right)\, dx$$

was discovered by Bernard Mares at the age of 17. Apparently he posed the challenge of proving that it was less than $\frac{\pi}{8}$. Borwein and others dived into this and figured out how.

But there is still work left to do!

As far as I can tell, the known proofs that

$$\int_0^\infty \cos(2x) \prod_{n=1}^\infty \cos\left(\frac{x}{n}\right)\, dx < \frac{\pi}{8}$$

all involve a lot of brute-force calculation. Is there a more conceptual way to understand this difference, at least approximately? There *is* a clear conceptual proof that

$$\int_0^\infty 2\cos(x) \prod_{n=0}^N \mathrm{sinc}\left(\frac{x}{2n+1}\right)\, dx < \frac{\pi}{2} \qquad \text{for } N \ge 56.$$

That's what Greg Egan explained in my blog article. But can we get a clear proof that

$$\int_0^\infty 2\cos(x) \prod_{n=0}^\infty \mathrm{sinc}\left(\frac{x}{2n+1}\right)\, dx > \frac{\pi}{2} - c$$

for some small constant $c$, say $10^{-40}$ or so?

One can argue that until we do, Oded Margalit is right: there’s an open problem here. Not a problem in proving that something is true. A problem in understanding *why* it is true.

Computers have really dissolved mathematical rigour somewhat. There are a lot of questions that were considered interesting just a century ago which have now been "solved" but are still just as perplexing as they were back then. What was once considered a defining attribute of mathematics, the ability to definitively prove that something is guaranteed to be correct or guaranteed to be wrong, turned out to have no value in and of itself.

It has some value, especially if you're using the math to build bridges or something like that. But in pure math it really just sets the stage for what I consider the *interesting* part, namely understanding things.

I suppose there's a concentration-of-measure thing going on with the upper bound $\frac{\pi}{2}$. Note that, if we pass to the rect-convolutions, writing $\chi_a$ for the uniform probability density on $[-a, a]$ and $f_N = \chi_1 * \chi_{1/3} * \cdots * \chi_{1/(2N+1)}$, then we are interested in how far down $f_N(0)$ falls as $N \to \infty$. But notice that $f_N$ is the PDF of a sum $X_0 + X_1 + \cdots + X_N$ of random variables which are independent, and $X_n$ is uniformly distributed in $\left[-\frac{1}{2n+1}, \frac{1}{2n+1}\right]$. Note that $f_N(0)$ is the maximum value of $f_N$, whereas by law-of-large-numbers intuition the bulk of the mass of $f_N$ should be contained in the interval $[-2, 2]$, i.e. the mass of $f_N$ in $[-2, 2]$ should be $\approx 1$. Since $f_N$ maximizes at $0$, we must have that $f_N(0)$ is not too small.
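The bounded-spread part of this heuristic is easy to verify, since the variance of the sum converges. A quick check:

```python
import math

# The sum X_0 + X_1 + ... with X_n uniform on [-1/(2n+1), 1/(2n+1)] has
# bounded spread: Var(X_n) = (1/3) / (2n+1)^2, and sum_{n>=0} 1/(2n+1)^2
# equals pi^2/8, so the standard deviation converges as N grows.
var = sum((1.0 / 3.0) / (2 * n + 1) ** 2 for n in range(200_000))
sigma = math.sqrt(var)
print(sigma)   # approaches pi / (2 * sqrt(6)), about 0.641
```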

Convolutions are definitely the key to understanding this problem! I like your line of thinking. But there's a nuance here. For the original Borwein integrals

$$\int_0^\infty \prod_{n=0}^N \mathrm{sinc}\left(\frac{x}{2n+1}\right)\, dx$$

we are interested in how far $f_N(0)$ falls. But for the *cosine* Borwein integrals we are interested in how far $f_N(1)$ falls. We need $\frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{2N+1}$ to reach $1$ for $f_N(0)$ to start falling, but we need it to reach $2$ for $f_N(1)$ to start falling. All this is explained by Greg Egan here:

• Patterns that eventually fail, *Azimuth*, September 20, 2018.

Ah, I see! Thanks for that note, that makes sense.
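Those two thresholds are easy to confirm directly, by checking where the partial sums $\frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{2N+1}$ first cross 1 and 2:

```python
# The partial sums 1/3 + 1/5 + ... + 1/(2N+1) first exceed 1 at N = 7
# (where the original Borwein pattern breaks) and first exceed 2 at
# N = 56 (where the cosine Borwein pattern breaks).
def first_N_exceeding(bound):
    total, n = 0.0, 0
    while total <= bound:
        n += 1
        total += 1.0 / (2 * n + 1)
    return n

print(first_N_exceeding(1), first_N_exceeding(2))   # 7 56
```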

I have a sketchy idea. I'll use Ben's definition of $\chi_a$ and $f_N$. Also let $g_n$ be the convolution of all the other $\chi$'s. We want $f_N(1)$, where $f_N = \chi_{1/(2n+1)} * g_n$ for any $n \le N$. My idea is to choose a good $n$, expand the Taylor series of $g_n$ about 1, and find moments for $\chi_{1/(2n+1)}$, and hope that something nice happens when it all gets put together.

The moments of $g_n$ come from those of the $\chi_a$'s, and involve sums like $\sum_k \frac{1}{(2k+1)^m}$ for even $m$, and the moments of a uniform distribution between $-1/2$ and $1/2$. Perhaps someone knows formulas for these.
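For what it's worth, both ingredients have standard closed forms: the even $m$th moment of the uniform distribution on $[-a, a]$ is $\frac{a^m}{m+1}$, and $\sum_{n \ge 0} \frac{1}{(2n+1)^m} = (1 - 2^{-m})\,\zeta(m)$ for even $m \ge 2$. A quick numerical check of the $m = 4$ case of the sum:

```python
import math

# Check  sum_{n>=0} 1/(2n+1)^4 = (1 - 2^-4) * zeta(4) = pi^4/96,
# using zeta(4) = pi^4/90 and truncating the sum at a million terms.
s = sum(1.0 / (2 * n + 1) ** 4 for n in range(10**6))
closed_form = (1 - 2.0 ** -4) * (math.pi ** 4 / 90)   # = pi^4 / 96
print(s, closed_form)
```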

The $m$'th derivative of $g_n$ (for $m \ge 1$) can be found (I think!) by replacing $m$ of the $\chi_a$'s by their derivatives $\chi_a' = \frac{1}{2a}\left(\delta_{-a} - \delta_a\right)$, and those are made of Dirac delta functions at something like $\pm\frac{1}{2k+1}$. The remaining part has to be evaluated at these points.

I wonder if there is a differential version of this story. Does that infinite cosine product satisfy a differential equation (maybe an ODE with an infinite number of derivatives could be cooked up), and can that weird “anomaly” in the integral be related to an anomaly in the ODE?