## Lebesgue Universal Covering Problem (Part 3)

Back in 2015, I reported some progress on this difficult problem in plane geometry. I’m happy to report some more.

First, remember the story. A subset of the plane has diameter 1 if the distance between any two points in this set is ≤ 1. A universal covering is a convex subset of the plane that can cover a translated, reflected and/or rotated version of every subset of the plane with diameter 1. In 1914, the famous mathematician Henri Lebesgue sent a letter to a fellow named Pál, challenging him to find the universal covering with the least area.
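To make the diameter condition concrete: for a finite set of points, the diameter is just the largest pairwise distance. Here is a minimal Python sketch (the `diameter` helper is my own, purely for illustration, not anything from the papers):

```python
import itertools
import math

def diameter(points):
    """Largest distance between any two points in a finite point set."""
    return max(math.dist(p, q) for p, q in itertools.combinations(points, 2))

# The three corners of an equilateral triangle with side 1 form a set
# of diameter 1, so any universal covering must be able to cover them:
triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
print(diameter(triangle))  # 1.0
```

For infinite sets the same definition applies with a supremum in place of the maximum.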

Pál worked on this problem, and 6 years later he published a paper on it. He found a very nice universal covering: a regular hexagon in which one can inscribe a circle of diameter 1. This has area

0.86602540…


But he also found a universal covering with less area, by removing two triangles from this hexagon—for example, the triangles $C_1C_2C_3$ and $E_1E_2E_3$ here:

The resulting universal covering has area

0.84529946…
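Both of these areas have simple closed forms: the hexagon's is $\sqrt{3}/2$, and Pál's reduced cover's is $2 - 2/\sqrt{3}$. A couple of lines of Python confirm the decimals quoted above:

```python
import math

hexagon = math.sqrt(3) / 2    # regular hexagon circumscribing a circle of diameter 1
pal = 2 - 2 / math.sqrt(3)    # the hexagon with the two corner triangles removed

print(f"{hexagon:.8f}")  # 0.86602540
print(f"{pal:.8f}")      # 0.84529946
```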

In 1936, Sprague went on to prove that more area could be removed from another corner of Pál’s original hexagon, giving a universal covering of area

0.844137708436…

In 1992, Hansen took these reductions even further by removing two more pieces from Pál’s hexagon. Each piece is a thin sliver bounded by two straight lines and an arc. The first piece is tiny. The second is downright microscopic!

Hansen claimed the areas of these regions were $4 \cdot 10^{-11}$ and $6 \cdot 10^{-18}$. This turned out to be wrong. The actual areas are $3.7507 \cdot 10^{-11}$ and $8.4460 \cdot 10^{-21}$. The resulting universal covering had an area of

0.844137708398…

This tiny improvement over Sprague’s work led Klee and Wagon to write:

it does seem safe to guess that progress on [this problem], which has been painfully slow in the past, may be even more painfully slow in the future.

However, in 2015 Philip Gibbs found a way to remove about a million times more area than Hansen’s larger region: a whopping $2.233 \cdot 10^{-5}$. This gave a universal covering with area

0.844115376859…

Karine Bagdasaryan and I helped Gibbs write up a rigorous proof of this result, and we published it here:

• John Baez, Karine Bagdasaryan and Philip Gibbs, The Lebesgue universal covering problem, Journal of Computational Geometry 6 (2015), 288–299.

Greg Egan played an instrumental role as well, catching various computational errors.

At the time Philip was sure he could remove even more area, at the expense of a more complicated proof. Since the proof was already quite complicated, we decided to stick with what we had.

But this week I met Philip at The philosophy and physics of Noether’s theorems, a wonderful workshop in London which deserves a full blog article of its own. It turns out that he has gone further: he claims to have found a vastly better universal covering, with area

0.8440935944…

This is an improvement of $2.178245 \times 10^{-5}$ over our earlier work—roughly equal to our improvement over Hansen.
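Subtracting the two truncated decimals quoted above reproduces this figure to within rounding (the fuller expansions give the quoted $2.178245 \times 10^{-5}$):

```python
previous = 0.844115376859  # Baez, Bagdasaryan and Gibbs (2015)
latest = 0.8440935944      # Gibbs (2018)

# Difference between the two upper bounds, to three significant figures:
print(f"{previous - latest:.3e}")  # 2.178e-05
```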

You can read his argument here:

• Philip Gibbs, An upper bound for Lebesgue’s universal covering problem, 22 January 2018.

I say ‘claims’ not because I doubt his result—he’s clearly a master at this kind of mathematics!—but because I haven’t checked it and it’s easy to make mistakes, for example mistakes in computing the areas of the shapes removed.

It seems we are closing in on the final result; however, Philip Gibbs believes there is still room for improvement, so I expect it will take at least a decade or two to solve this problem… unless, of course, some mathematicians start working on it full-time, which could speed things up considerably.

### 16 Responses to Lebesgue Universal Covering Problem (Part 3)

1. Bruce Smith says:

Suppose someone found an optimal universal covering — does anyone have a plausible scheme for proving it optimal?

• John Baez says:

People know how to get upper bounds using arguments like those constructed by the people listed in this post. People know how to get lower bounds by showing that any shape that contains all shapes of certain kinds must have at least some area. In theory these arguments could meet! In practice, they seem to be creeping closer and closer, but with a gap much bigger than the latest improvements.

The current upper bound, if Philip’s new paper is correct, is

0.8440935944…

This is roughly 0.00002 less than the previous upper bound.

On the other hand, the best lower bound I know of is due to Brass and Sharifi in 2005. They got a lower bound of

0.832

This is 0.0049 higher than the previous lower bound.

I think people haven’t put as much work into lower bounds, so I think the main reason for the gap is that the lower bounds aren’t big enough… not that the upper bounds are too big, though Philip is sure someone could bring them down with further work.
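A quick calculation makes the size of that gap concrete (using Philip’s claimed upper bound and the Brass–Sharifi lower bound):

```python
upper = 0.8440935944  # Gibbs (2018), claimed upper bound
lower = 0.832         # Brass and Sharifi (2005), lower bound

gap = upper - lower
print(f"{gap:.4f}")             # 0.0121
print(f"{gap / 2.178e-5:.0f}")  # 555 -- the gap is hundreds of times the latest improvement
```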

I seem to recall proving by a rather abstract argument that the actual constant we’re closing in on is effectively computable. (I never wrote up this proof in full detail.) But the argument did not give a practical algorithm. So, the challenge is just getting a practical way to compute it.

The most interesting thing about the Lebesgue universal covering problem, in my opinion, is that it’s an example of a type of geometry problem that’s easy to state but hard to solve, because we currently have no sophisticated tools for solving it.

• Philip Gibbs says:

Setting a good lower bound is certainly much harder for this problem. One way to proceed is to make some assumption. For example, assume that the best cover is contained in a regular hexagon of unit width. Then any curve of constant width (or orbiform) can only fit in finitely many ways, so the problem becomes more tractable. If you can solve that, then you only have to prove the assumption to finish the job. Try proving that the best cover can be contained within a strip of unit width, then in a parallelogram, … Both parts of this approach are still too hard. Something more is needed.

2. For the laymen in the audience, what’s the motivation, i.e., what’s the canonical example of a set of diameter 1 that is not covered by a circle of diameter 1?

3. Warming to this theme, there is the classical Kakeya needle problem. A plane set K is called a Kakeya set if a unit segment, usually referred to as the needle, can be continuously turned around in K so as to return to its original position with its ends reversed. The problem posed by Kakeya was to find a Kakeya set of minimum area. He apparently thought that the answer was the deltoid (three-cusped hypocycloid) with area $\frac{\pi}{8}$.

Besicovitch surprised everybody by showing that there actually exist Kakeya sets of arbitrarily small area. In fact it can be proved that given $\epsilon > 0$, there exists a simply connected Kakeya set of area less than $\epsilon$ contained in a circle of radius 1.

F. Cunningham, The American Mathematical Monthly, Vol. 78, No. 2 (Feb. 1971), pp. 114–129 contains the details. There is a good exposition in Steven Krantz’s “A Panorama of Harmonic Analysis” (pages 148–170), where he discusses the method of “sprouting”, which looks a lot like what is going on with this problem in terms of technique. Just adding this in for those who may not be aware of it and who want to find out more about it.

• Philip Gibbs says:

Yes, you are right to put this in the same class of problem. See this nice Greg Egan animation from G+ https://plus.google.com/113086553300459368002/posts/Ab1LKAxpdDC

• John Baez says:

Since G+ is scheduled to disappear in 10 months, I’ll copy Greg’s post here:

Kakeya needle set

In a previous post, I described the idea of a Besicovitch set: a subset of the plane that contains copies of a unit line segment oriented in every possible way. And I sketched a nice technique that Besicovitch came up with to make a set like that with a small area: you start with an equilateral triangle, slice it up into smaller triangles using cuts going from one vertex to equally spaced points on the opposite side, slide these smaller triangles together (without rotating them), so they overlap and occupy a reduced area, but still include line segments that cover one third of all the possibilities, and then repeat the recipe starting from each of the three vertices of the original triangle, and combine the results.

Here’s a link to that previous post:

Now, suppose you want to impose a stronger condition: not only does your set need to contain a unit line segment in every possible orientation, you need to be able to take any one of these line segments and continuously rotate it through a 360-degree turn (possibly translating it as well), without it ever leaving the set, eventually bringing it back to its original position and orientation.

A set that meets that stronger condition is known as a Kakeya needle set, since the problem of finding such sets was posed by the Japanese mathematician Sōichi Kakeya in 1917.

https://en.wikipedia.org/wiki/Kakeya_set

The Besicovitch sets we’ve discussed are not Kakeya needle sets, because when we slide the small triangles together to make their areas overlap, edges that they shared before we moved them become separated, and once you arrive at one of those edges, there’s no longer any way to get to the matching, parallel edge of the next triangle that you need to reach to continue rotating.

So, to turn those kinds of Besicovitch sets into Kakeya needle sets, you need to add some new pieces to the set to provide a route for the “needle” as it moves between the now-separated edges. If you rotate the needle around its midpoint by a small angle, it will sweep out two narrow segments of a disk centred on that midpoint. You can then translate it along its new orientation, adding essentially no area (because the needle has infinitesimal thickness), until its midpoint lies on the edge you want to reach (or on a line that continues that edge beyond its endpoints). You can then rotate it back into its original orientation, again sweeping out a couple of narrow disk segments, and translate it into the desired position on the destination edge.
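To see why the added area can be made small, note that a circular sector of radius $r$ and angle $\theta$ has area $r^2\theta/2$, so each small rotation of the needle about its midpoint sweeps two sectors of radius 1/2 with total area $\theta/4$. A short sketch (my own arithmetic, not from Egan’s post):

```python
def rotation_sweep_area(theta):
    """Total area of the two sectors of radius 1/2 swept when a unit
    needle turns through angle theta about its midpoint."""
    return 2 * 0.5 * 0.5**2 * theta  # two sectors, each of area (1/2) r^2 theta

# The extra area shrinks linearly with the turning angle, while the
# translation step in between contributes no area at all:
for theta in (0.1, 0.01, 0.001):
    print(rotation_sweep_area(theta))  # 0.025, 0.0025, 0.00025
```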

This trick is known as a Pál join, after the Hungarian-Danish geometer Gyula Pál (also known as Julius Pál), who seems to have worked on a number of problems involving minimising the areas of sets meeting various conditions.

Using the techniques we’ve sketched, it’s possible to construct a Kakeya needle set whose area is as small as you like. However, the diameter of these sets gets larger as you try to shrink the area, because when you rotate the needle by a smaller angle to reduce the area of the segments in the Pál join, you then need to translate the needle over a greater distance to move it to its destination.

The Wikipedia article linked above mentions a 1941 proof by H. J. Van Alphen that says you can construct a Kakeya needle set of arbitrarily small area inside a circle of radius $2+\epsilon$ for arbitrarily small $\epsilon$, but I don’t know the details.

I read about the history of this problem in:

“The Serenity of Kakeya” by Evelyn Lamb

https://blogs.scientificamerican.com/roots-of-unity/the-serenity-of-kakeya/

and learned the technical details here:

“The Kakeya Problem”, Diploma Thesis by Markus Furtner

4. jonas says:

It seems that in the first article, you’re searching for a convex subset of the plane that covers any shape of diameter 1 if you allow reflections, but here you seem to allow only translations and rotations. Why did you change the goal this way? Is there a reason to think that the two problems are the same or are not the same?

• John Baez says:

I think you got it backwards: I didn’t mention reflections in Part 1, but this article here says:

First, remember the story. A subset of the plane has diameter 1 if the distance between any two points in this set is ≤ 1. A universal covering is a convex subset of the plane that can cover a translated, reflected and/or rotated version of every subset of the plane with diameter 1. In 1914, the famous mathematician Henri Lebesgue sent a letter to a fellow named Pál, challenging him to find the universal covering with the least area.

But you raise a very important point! I believe the two problems are not the same: if you allow yourself to reflect your sets of diameter 1, you can fit them in a convex set with smaller area. In the paper Gibbs wrote with Bagdasaryan and me, we really needed to allow reflections to obtain the shape we did. I believe Gibbs’ new paper also uses reflections.

I have tried to edit all 3 articles so they only mention the version that allows reflections, since that’s what we really consider. But both problems are interesting.

It’s also interesting to consider the version that doesn’t demand convexity. I discussed this alternative in Part 1.

• Philip Gibbs says:

It’s also interesting to look at the case where only translations are allowed.
