Visual Insight

1 March, 2015

I have another blog, called Visual Insight. Over here, our focus is on applying science to help save the planet. Over there, I try to make the beauty of pure mathematics visible to the naked eye.

I’m always looking for great images, so if you know about one, please tell me about it! If not, you may still enjoy taking a look.

Here are three of my favorite images from that blog, and a bit about the people who created them.

I suspect that these images, and many more on Visual Insight, are all just different glimpses of the same big structure. I have a rough idea what that structure is. Sometimes I dream of a computer program that would let you tour the whole thing. Unfortunately, a lot of it lives in more than 3 dimensions.

Less ambitiously, I sometimes dream of teaming up with lots of mathematicians and creating a gorgeous coffee-table book about this stuff.

Schmidt arrangement of the Eisenstein integers

This picture drawn by Katherine Stange shows what happens when we apply fractional linear transformations

$z \mapsto \frac{a z + b}{c z + d}$

to the real line sitting in the complex plane, where $a,b,c,d$ are Eisenstein integers: that is, complex numbers of the form

$m + n \omega$

where $m,n$ are integers and $\omega = \frac{-1 + \sqrt{-3}}{2}$ is a primitive cube root of 1. The result is a complicated set of circles and lines called the ‘Schmidt arrangement’ of the Eisenstein integers. For more details go here.
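A Möbius transformation sends the real line (plus the point at infinity) to a circle or a line, and that image is determined by where any three points go. So one can sketch a fragment of a Schmidt arrangement numerically. The following Python sketch is a toy illustration, not Stange’s actual construction: it enumerates a few maps with small Eisenstein-integer entries and, as a simplifying assumption, restricts to determinant 1.

```python
import itertools
import numpy as np

OMEGA = complex(-0.5, np.sqrt(3) / 2)  # primitive cube root of 1

def eisenstein(m, n):
    """The Eisenstein integer m + n*omega."""
    return m + n * OMEGA

def circle_through(z1, z2, z3):
    """Center and radius of the circle through three complex points,
    or None if they are collinear (the 'circle' is then a line)."""
    A = np.array([[z2.real - z1.real, z2.imag - z1.imag],
                  [z3.real - z1.real, z3.imag - z1.imag]])
    rhs = 0.5 * np.array([abs(z2)**2 - abs(z1)**2,
                          abs(z3)**2 - abs(z1)**2])
    if abs(np.linalg.det(A)) < 1e-12:
        return None
    x, y = np.linalg.solve(A, rhs)
    center = complex(x, y)
    return center, abs(center - z1)

def image_of_real_line(a, b, c, d):
    """Image of the real line under z -> (az+b)/(cz+d): a circle,
    or None when the image is again a line."""
    samples = []
    for t in (0.0, 1.0, 2.0, 3.0):
        if abs(c * t + d) > 1e-9:          # avoid the pole of the map
            samples.append((a * t + b) / (c * t + d))
        if len(samples) == 3:
            break
    return circle_through(*samples)

# Enumerate maps with small Eisenstein-integer entries and determinant 1
# (the determinant-1 restriction is my simplifying assumption).
circles = []
for coeffs in itertools.product(range(-1, 2), repeat=8):
    a, b = eisenstein(*coeffs[0:2]), eisenstein(*coeffs[2:4])
    c, d = eisenstein(*coeffs[4:6]), eisenstein(*coeffs[6:8])
    if abs(a * d - b * c - 1) < 1e-9:
        result = image_of_real_line(a, b, c, d)
        if result is not None:
            circles.append(result)

print(len(circles), "circles in this small fragment of the arrangement")
```

Feeding each (center, radius) pair to a plotting library already shows circles nesting and touching in the style of Stange’s picture.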

Katherine Stange did her Ph.D. with Joseph H. Silverman, an expert on elliptic curves at Brown University. Now she is an assistant professor at the University of Colorado, Boulder. She works on arithmetic geometry, elliptic curves, algebraic and integer sequences, cryptography, arithmetic dynamics, Apollonian circle packings, and game theory.

{7,3,3} honeycomb

This is the {7,3,3} honeycomb as drawn by Danny Calegari. The {7,3,3} honeycomb is built from regular heptagons in 3-dimensional hyperbolic space. It is made of infinite sheets of regular heptagons in which 3 heptagons meet at each vertex. 3 such sheets meet along each edge, explaining the second ‘3’ in the symbol {7,3,3}.

The 3-dimensional regions bounded by these sheets are unbounded: they go off to infinity. They show up as holes here. In this image, hyperbolic space has been compressed down to an open ball using the so-called Poincaré ball model. For more details, go here.

Danny Calegari did his Ph.D. work with Andrew Casson and William Thurston on foliations of three-dimensional manifolds. Now he’s a professor at the University of Chicago, and he works on these and related topics, especially geometric group theory.

{7,3,3} honeycomb meets the plane at infinity

This picture, by Roice Nelson, is another view of the {7,3,3} honeycomb. It shows the ‘boundary’ of this honeycomb—that is, the set of points on the surface of the Poincaré ball that are limits of points in the {7,3,3} honeycomb.

Roice Nelson used stereographic projection to draw part of the surface of the Poincaré ball as a plane. The light-colored circles here are holes, not contained in the boundary of the {7,3,3} honeycomb. There are infinitely many holes, and the actual boundary, the region left over, is a fractal with area zero. The white region on the outside of the picture is yet another hole. For more details, and a different version of this picture, go here.

Roice Nelson is a software developer for a flight data analysis company. There’s a good chance the data recorded on the airplane from your last flight moved through one of his systems! He enjoys motorcycling and recreational mathematics, he has a blog with lots of articles about geometry, and he makes plastic models of interesting geometrical objects using a 3d printer.

Scholz’s Star

19 February, 2015

100,000 years ago, some of my ancestors came out of Africa and arrived in the Middle East. 50,000 years ago, some of them reached Asia. But between those dates, about 70,000 years ago, two stars passed through the outer reaches of the Solar System, where icy comets float in dark space!

One was a tiny red dwarf called Scholz’s star. It’s only 90 times as heavy as Jupiter. Right now it’s 20 light years from us, so faint that it was discovered only in 2013, by Ralf-Dieter Scholz—an expert on nearby stars, high-velocity stars, and dwarf stars.

The other was a brown dwarf: a star so small that it doesn’t sustain fusion of ordinary hydrogen. This one is only 65 times the mass of Jupiter, and it orbits its companion at a distance of 80 AU.

(An AU, or astronomical unit, is the distance between the Earth and the Sun.)

A team of scientists has just computed that while some of my ancestors were making their way to Asia, these stars passed about 0.8 light years from our Sun. That’s not very close. But it’s close enough to penetrate the large cloud of comets surrounding the Sun: the Oort cloud.

They say this event didn’t affect the comets very much. But if it shook some comets loose from the Oort cloud, they would take about 2 million years to get here! So, they won’t arrive for a long time.
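The 2-million-year figure is easy to reproduce with Kepler’s third law. A comet nudged loose at 0.8 light years falls along a degenerate ellipse with semi-major axis half that distance, and the fall to the inner Solar System takes half an orbital period. (This back-of-envelope estimate is mine, not the paper’s.)

```python
# Back-of-envelope fall time for a comet dropped from rest at 0.8 light years.
AU_PER_LIGHT_YEAR = 63241.0

r = 0.8 * AU_PER_LIGHT_YEAR   # starting distance in AU
a = r / 2                     # semi-major axis of the degenerate fall orbit
period_years = a ** 1.5       # Kepler's third law: P^2 = a^3 in solar units
fall_time = period_years / 2  # the fall is half an orbit

print(f"{fall_time / 1e6:.1f} million years")
```

The answer comes out close to 2 million years, matching the statement above.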

At its closest approach, Scholz’s star would have had an apparent magnitude of about 11.4. This is a bit too faint to see, even with binoculars. So, don’t look for it in myths and legends!
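That magnitude follows from the inverse-square law in logarithmic form: moving a star from distance $d_1$ to $d_2$ changes its apparent magnitude by $5\log_{10}(d_2/d_1)$. Taking Scholz’s star to shine today at apparent magnitude about 18.3 at 20 light years (a value I’m assuming, not stated above):

```python
import math

def magnitude_at(m_now, d_now, d_then):
    """Apparent magnitude after moving a star from distance d_now
    to d_then (inverse-square law, expressed in magnitudes)."""
    return m_now + 5 * math.log10(d_then / d_now)

# Assumed present-day magnitude 18.3 at 20 light years; closest
# approach was at 0.8 light years.
m_close = magnitude_at(18.3, 20.0, 0.8)
print(round(m_close, 1))
```

This lands within a tenth of a magnitude of the 11.4 quoted above.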

As usual, the paper that made this discovery is expensive in journals but free on the arXiv:

• Eric E. Mamajek, Scott A. Barenfeld, Valentin D. Ivanov, Alexei Y. Kniazev, Petri Vaisanen, Yuri Beletsky, Henri M. J. Boffin, The closest known flyby of a star to the Solar System.

It must be tough being a scientist named ‘Boffin’, especially in England! Here’s a nice account of how the discovery was made:

• University of Rochester, A close call of 0.8 light years, 16 February 2015.

The brown dwarf companion to Scholz’s star is a ‘class T’ star. What does that mean? It’s pretty interesting. Let’s look at an example just 7 light years from Earth!

Brown dwarfs

Thanks to some great new telescopes, astronomers have been learning about weather on brown dwarfs! It may look like this artist’s picture. (It may not.)

Luhman 16 is a pair of brown dwarfs orbiting each other just 7 light years from us. The smaller one, Luhman 16B, is half covered by huge clouds. These clouds are hot—1200 °C—so they’re probably made of sand, iron or salts. Some of them have been seen to disappear! Why? Maybe ‘rain’ is carrying this stuff further down into the star, where it melts.

So, we’re learning more about something cool: the ‘L/T transition’.

Brown dwarfs can’t fuse ordinary hydrogen, but a lot of them fuse the isotope of hydrogen called deuterium that people use in H-bombs—at least until this runs out. The atmosphere of a hot brown dwarf is similar to that of a sunspot: it contains molecular hydrogen, carbon monoxide and water vapor. This is called a class M brown dwarf.

But as they run out of fuel, they cool down. The cooler class L brown dwarfs have clouds! But the even cooler class T brown dwarfs do not. Why not?

This is the mystery we may be starting to understand: the clouds may rain down, with material moving deeper into the star! Luhman 16B is right near the L/T transition, and we seem to be watching how the clouds can disappear as a brown dwarf cools. (Its larger companion, Luhman 16A, is firmly in class L.)

Finally, as brown dwarfs cool below 300 °C, astronomers expect that ice clouds start to form: first water ice, and eventually ammonia ice. These are the class Y brown dwarfs. Wouldn’t that be neat to see? A star with icy clouds!

Could there be life on some of these stars?

Caroline Morley regularly blogs about astronomy. If you want to know more about weather on Luhman 16B, try this:

• Caroline Morley, Swirling, patchy clouds on a teenage brown dwarf, 28 February 2012.

She doesn’t like how people call brown dwarfs “failed stars”. I agree! It’s like calling a horse a “failed giraffe”.

For more, try:

• Brown dwarfs, Scholarpedia.

Higher-Dimensional Rewriting in Warsaw

18 February, 2015

This summer there will be a conference on higher-dimensional algebra and rewrite rules in Warsaw. They want people to submit papers! I’ll give a talk about presentations of symmetric monoidal categories that arise in electrical engineering and control theory. This is part of the network theory program, which we talk about so often here on Azimuth.

There should also be interesting talks about combinatorial algebra, homotopical aspects of rewriting theory, and more:

• Higher-Dimensional Rewriting and Applications, 28-29 June 2015, Warsaw, Poland. Co-located with the RDP, RTA and TLCA conferences. Organized by Yves Guiraud, Philippe Malbos and Samuel Mimram.

Description

Over recent years, rewriting methods have been generalized from strings and terms to richer algebraic structures such as operads, monoidal categories, and more generally higher dimensional categories. These extensions of rewriting fit in the general scope of higher-dimensional rewriting theory, which has emerged as a unifying algebraic framework. This approach allows one to perform homotopical and homological analysis of rewriting systems (Squier theory). It also provides new computational methods in combinatorial algebra (Artin-Tits monoids, Coxeter and Garside structures), in homotopical and homological algebra (construction of cofibrant replacements, Koszulness property). The workshop is open to all topics concerning higher-dimensional generalizations and applications of rewriting theory, including

• higher-dimensional rewriting: polygraphs / computads, higher-dimensional generalizations of string/term/graph rewriting systems, etc.

• homotopical invariants of rewriting systems: homotopical and homological finiteness properties, Squier theory, algebraic Morse theory, coherence results in algebra and higher-dimensional category theory, etc.

• linear rewriting: presentations and resolutions of algebras and operads, Gröbner bases and generalizations, homotopy and homology of algebras and operads, Koszul duality theory, etc.

• applications of higher-dimensional and linear rewriting and their interactions with other fields: calculi for quantum computations, algebraic lambda-calculi, proof nets, topological models for concurrency, homotopy type theory, combinatorial group theory, etc.

• implementations: the workshop will also be interested in implementation issues in higher-dimensional rewriting and will allow demonstrations of prototypes of existing and new tools in higher-dimensional rewriting.
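For readers new to this area, the one-dimensional case already shows the key ideas these topics generalize: a terminating, confluent set of rewrite rules gives every element a unique normal form. A toy string-rewriting example (my illustration, not taken from the workshop):

```python
def normalize(word, rules):
    """Apply rewrite rules (left-hand side -> right-hand side)
    until no rule applies, returning the normal form."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
                break
    return word

# The single rule ba -> ab is terminating and confluent: every word
# over {a, b} has a unique normal form, its letters in sorted order.
RULES = [("ba", "ab")]
print(normalize("babba", RULES))
```

Higher-dimensional rewriting replaces strings by cells in a higher category, but the questions (termination, confluence, normal forms, and their homotopical refinements) are the same in spirit.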

Submitting

Important dates:

• Submission: April 15, 2015

• Final version: May 20, 2015

• Conference: 28-29 June, 2015

Submissions should consist of an extended abstract, approximately 5 pages long, in standard article format, in PDF. The page for uploading those is here. The accepted extended abstracts will be made available electronically before the workshop.

Organizers

Program committee:

• Vladimir Dotsenko (Trinity College, Dublin)

• Yves Guiraud (INRIA / Université Paris 7)

• Jean-Pierre Jouannaud (École Polytechnique)

• Philippe Malbos (Université Claude Bernard Lyon 1)

• Paul-André Melliès (Université Paris 7)

• Samuel Mimram (École Polytechnique)

• Tim Porter (University of Wales, Bangor)

• Femke van Raamsdonk (VU University, Amsterdam)

Earth-Like Planets Near Red Dwarf Stars

14 February, 2015

Can red dwarf stars have Earth-like planets with life?

This is an important question, at least in the long run, because 80% of the stars in the Milky Way are red dwarfs, even though none are visible to the naked eye. 20 of the 30 nearest stars are red dwarfs! It would be nice to know if they can have planets with life.

Also, red dwarf stars live a long time! They’re small—and the smaller a star is, the longer it lives. Calculations show that a red dwarf one-tenth the mass of our Sun should last for 10 trillion years!

So if life is possible on planets orbiting red dwarf stars—or if life could get there—we could someday have very, very old civilizations. That idea excites me. Imagine what a galactic civilization spanning the 80 billion red dwarfs in our galaxy could do in 10 trillion years!

(No: you can’t imagine it. You don’t have time to think of all the amazing things they could do.)

Proxima Centauri

Let’s start close to home. Proxima Centauri, the nearest star to the Sun, is a red dwarf. If we ever explore interstellar space, we may stop by this star. So, it’s worth knowing a bit about it.

We don’t know if it has planets. But it could be part of a triple star system! The closest neighboring stars, Alpha Centauri A and B, orbit each other every 80 years. One is a bit bigger than the Sun, the other a bit smaller. They orbit in a fairly eccentric ellipse. At their closest, their distance is like the distance from Saturn to the Sun. At their farthest, it’s more like the distance from Pluto to the Sun.

Proxima Centauri is fairly far from both: a quarter of a light year away. That’s about 350 times the distance from Pluto to the Sun! We’re not even sure Proxima Centauri is gravitationally bound to the other stars. If it is, its orbital period could easily exceed 500,000 years.

If Proxima Centauri had an Earth-like planet, there’s a bit of a problem: it’s a flare star.

You see, unlike the Sun, this star’s whole interior is stirred up by convection. Convection of charged plasma makes strong magnetic fields. Magnetic fields get tied in knots, and the energy gets released through enormous flares! These flares can become as large as the star itself, and get so hot that they radiate lots of X-rays.

This could be bad for life on nearby planets… especially since an Earth-like planet would have to be very close. You see, Proxima Centauri is very faint: just 0.17% the brightness of our Sun!

In fact many red dwarfs are flare stars, for the same reasons. Proxima Centauri is actually fairly tame as red dwarfs go, because it’s 4.9 billion years old. Younger ones are more lively, with bigger flares.

Proxima Centauri is just 4.24 light-years away. If we explore interstellar space, it may be a good place to visit. It’s actually getting closer: it’ll come within about 3 light-years of us in roughly 27,000 years, and then drift by. We should take advantage of this and go visit it soon, like in a few centuries!

Gliese 667 Cc

Gliese 667C is a red dwarf just 1.4% as bright as our Sun. Unremarkable: such stars are a dime a dozen. But it’s famous, because we know it has at least two planets, one of which is quite Earth-like!

This planet, called Gliese 667 Cc, is one of the most Earth-like ones we know today. But it’s weirdly different from our home in many ways. Its mass is 3.8 times that of Earth. It should be a bit warmer than Earth—but dimly lit as seen by our eyes, since most of the light it gets is in the infrared.

Being close to its dim red dwarf star, its year is just 28 Earth days long. But there’s something even cooler about this planet. You can see it in the NASA artist’s depiction above. The red dwarf Gliese 667C is part of a triple star system!

The largest star in this system, Gliese 667 A, is three-quarters the mass of our Sun, but only 12% as bright. It’s an orange dwarf, intermediate between a red dwarf and our Sun, which is considered a yellow dwarf.

The second largest, Gliese 667 B, is also an orange dwarf, only 5% as bright as our sun.

These two orbit each other every 42 years. The red dwarf Gliese 667 C is considerably farther away, orbiting this pair.
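The figures quoted above hang together nicely, which we can check with Kepler’s third law and the inverse-square law. Taking Gliese 667 C’s mass to be about 0.33 solar masses (an assumed value, not stated above), a 28-day orbit puts the planet where it receives roughly Earth-like insolation from a star with 1.4% of the Sun’s luminosity:

```python
# Consistency check: orbital period -> orbital radius -> insolation,
# in solar units (AU, years, solar masses, solar luminosities).
stellar_mass = 0.33      # solar masses (assumed value for Gliese 667 C)
luminosity = 0.014       # solar luminosities (1.4%, as quoted)
period = 28.0 / 365.25   # years

# Kepler's third law in solar units: a^3 = M * P^2
a = (stellar_mass * period**2) ** (1.0 / 3.0)   # orbital radius in AU

# Inverse-square law: flux relative to Earth's insolation
flux = luminosity / a**2
print(f"orbital radius {a:.3f} AU, insolation {flux:.2f} x Earth's")
```

The orbit comes out near 0.12 AU and the insolation close to Earth’s, consistent with the planet being roughly Earth-temperature.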

What could the planet Gliese 667 Cc be like?

Tidally locked planets

Since a planet needs to be close to a red dwarf to be warm enough for liquid water, such planets are likely to be tidally locked, with one side facing their sun all the time.

For a long time, this made scientists believe the day side of such a planet would be hot and dry, with all the water locked in ice on the night side, as shown above. People call this a water-trapped world. Perhaps not so good for life!

But a new paper argues that other kinds of worlds are likely too!

In a thin ice waterworld, an ocean covers most of the planet. It’s covered with ice on the night side, maybe 10 meters thick. The day side has open ocean. Ice melts near the edge of the ice, pours into the ocean on the day side… while on the night side, water freezes onto the bottom of the ice layer.

In an ice sheet-ocean world, there’s a big ocean on the day side and a big continent on the night side. As in the water-trapped world, a lot of ice forms on the night side, up to a kilometer thick. But if there’s enough geothermal heat, and enough water, not all the water gets frozen on the night side: enough melts to form an ocean on the day side.
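The competition between day-side heating and night-side freezing can be caricatured with a two-box energy-balance model (my toy sketch, far cruder than the coupled climate model used in the paper): the day side absorbs starlight and exports heat to the night side at a rate proportional to the temperature difference, and each side radiates as a blackbody. There is no greenhouse effect here, and the transport coefficient is an assumed round number.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def solve_increasing(f, target, lo=1.0, hi=2000.0):
    """Bisection solve of f(T) = target for monotonically increasing f."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def two_box(insolation=866.0, albedo=0.3, k=1.0, iterations=200):
    """Day-side and night-side temperatures (K) of a tidally locked planet.
    Balance: sigma*Td^4 = absorbed - k*(Td - Tn), sigma*Tn^4 = k*(Td - Tn).
    k is a crude heat-transport coefficient in W m^-2 K^-1 (assumed)."""
    absorbed = (1 - albedo) * insolation / 2  # mean absorbed flux, day side
    t_day, t_night = 300.0, 200.0
    for _ in range(iterations):
        t_day = solve_increasing(lambda T: SIGMA * T**4 + k * T,
                                 absorbed + k * t_night)
        t_night = solve_increasing(lambda T: SIGMA * T**4 + k * T,
                                   k * t_day)
    return t_day, t_night

t_day, t_night = two_box()  # insolation of 866 W/m^2, as in the paper
print(f"day {t_day:.0f} K, night {t_night:.0f} K")
```

Even this caricature shows the essential point: the night side sits far below freezing, so whether water ends up trapped there depends on how efficiently oceans and atmosphere carry heat and water around, which is exactly what the full model resolves.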

Needless to say, these new scenarios are exciting because they could be more conducive to life!

• Jun Yang, Yonggang Liu, Yongyun Hu and Dorian S. Abbot, Water trapping on tidally locked terrestrial planets requires special conditions.

Abstract: Surface liquid water is essential for standard planetary habitability. Calculations of atmospheric circulation on tidally locked planets around M stars suggest that this peculiar orbital configuration lends itself to the trapping of large amounts of water in kilometers-thick ice on the night side, potentially removing all liquid water from the day side where photosynthesis is possible. We study this problem using a global climate model including coupled atmosphere, ocean, land, and sea-ice components as well as a continental ice sheet model driven by the climate model output.

For a waterworld we find that surface winds transport sea ice toward the day side and the ocean carries heat toward the night side. As a result, night-side sea ice remains about 10 meters thick and night-side water trapping is insignificant. If a planet has large continents on its night side, they can grow ice sheets about a kilometer thick if the geothermal heat flux is similar to Earth’s or smaller. Planets with a water complement similar to Earth’s would therefore experience a large decrease in sea level when plate tectonics drives their continents onto the night side, but would not experience complete day-side desiccation. Only planets with a geothermal heat flux lower than Earth’s, much of their surface covered by continents, and a surface water reservoir about 10% of Earth’s would be susceptible to complete water trapping.

From a technical viewpoint, what’s fun about this new paper is that it uses detailed climate models that have been radically hacked to deal with a red dwarf star. Paraphrasing:

We perform climate simulations with the Community Climate System Model version 3.0 (CCSM3) which was originally developed by the National Center for Atmospheric Research to study the climate of Earth. The model contains four coupled components: atmosphere, ocean, sea ice, and land. The atmosphere component calculates atmospheric circulation and parameterizes sub-grid processes such as convection, precipitation, clouds, and boundary-layer mixing. The ocean component computes ocean circulation using the hydrostatic and Boussinesq approximations. The sea-ice component predicts ice fraction, ice thickness, ice velocity, and energy exchanges between the ice and the atmosphere/ocean. The land component calculates surface temperature, soil water content, and evaporation.

We modify CCSM3 to simulate the climate of habitable planets around M stars following Rosenbloom et al., Liu et al., and Hu & Yang. The stellar spectrum we use is a blackbody with an effective temperature of 3400 K. We employ planetary parameters typical of a super-Earth: a radius of 1.5 Earth radii, a surface gravity of 1.38 times Earth’s, and an orbital period of 37 Earth-days. The orbital period of habitable zone planets around M stars is roughly 10–100 days. We set the insolation to 866 watts per square meter and both the obliquity and eccentricity to zero. The atmospheric surface pressure is 1.0 bar, including N2, H2O, and 355 parts per million CO2.

And so on. Way cool! They consider a variety of different kinds of continents and oceans… including one where they’re just like those here on Earth—just because the data for that is easy to get!

Here’s a question I don’t know the answer to. To what extent can models like Community Climate System Model version 3.0 be tweaked to handle different planets? And what are the main things we should worry about: ways Earth-like planets can be different enough to seriously throw off the models?

We live in exciting times, where just as we’re making huge progress trying to understand the Earth’s climate in time to make wise decisions, we’re discovering hundreds of new planets with their own very different climates.

Exploring Climate Data (Part 3)

9 February, 2015

This blog article is about the temperature data used in the reports of the Intergovernmental Panel on Climate Change (IPCC). I present the results of an investigation into the completeness of global land surface temperature records. There are noticeable gaps in the data records, but I leave discussion about the implications of these gaps to the readers.

The data used in the newest IPCC report, namely the Fifth Assessment Report (AR5), does not yet seem to be available at the IPCC data distribution centre at the time of writing.

The temperature databases used for the previous report, AR4, are listed here on the website of the IPCC. These databases are:

• CRUTEM3,

• NCDC (probably using the data set GHCNM v3),

• GISTEMP, and

• the collection of Lugina et al.

The temperature collection CRUTEM3 was put together by the Climatic Research Unit (CRU) at the University of East Anglia. According to the CRU temperature page the CRUTEM3 data and in particular the CRUTEM3 land air temperature anomalies on a 5° × 5° grid-box basis has now been superseded by the so-called CRUTEM4 collection.

Since the CRUTEM collection appeared to be an important data source for the IPCC, I started by investigating the land air temperature data collection CRUTEM4. In what follows, only the availability of so-called land air temperature measurements will be investigated. (The collections often also contain sea surface temperature (SST) measurements.)

Usually only ‘temperature grid data’ or other averaged data is used for the climate assessments. Here ‘grid’ means that data is averaged over regions that cover the earth in a grid. However, the data is originally generated by temperature measuring stations around the world. So, I was interested in this original data and its quality. For the CRUTEM collection the latest station data is called the CRUTEM4 station data collection.

I downloaded the station’s data file, which is a simple text file, from the bottom of the CRUTEM4 station data page. At first glance I noticed big gaps in the file for some regions of the world. The file is huge, though: it contains monthly measurements starting in January 1701 and ending in 2011, for 4634 stations altogether. Finding a gap so quickly in such a huge file was disconcerting enough that my husband Tim Hoffmann agreed to help me investigate this station data in a more accessible way: via a visualization.

The visualization takes a long time to load, and due to some unfortunate software configuration issues (not on our side) it sometimes doesn’t work at all. Please open it now in a separate tab while reading this article:

• Nadja Kutz and Tim Hoffman, Temperature data from stations around the globe, collected by CRUTEM 4.

For those who are too lazy to explore the data themselves, or in case the visualization is not working, here are some screenshots from the visualization which documents the missing data in the CRUTEM4 dataset.

The images should speak for themselves. However, an additional explanation is provided after the images. One should in particular mention that it looks as if the deterioration of the CRUTEM4 data set has been greater in the years 2000-2009 than in the years 1980-2000.

Now you could say: okay, we know that there are budget cuts in the UK, and so probably the University of East Anglia was subject to those, but what about all these other collections in the world? This will be addressed after the images.

North America

Jan 1980

Jan 2000

Jan 2009

Africa

Jan 1980

Jan 2000

Jan 2009

Asia

Jan 1980

Jan 2000

Jan 2009

Eurasia/Northern Africa

Jan 1980

Jan 2000

Jan 2009

Arctic

Jan 1980

Jan 2000

Jan 2009

These screenshots show various regions of the world for the month of January in the years 1980, 2000 and 2009. Each station is represented by a small rectangle around its coordinates. The color of a rectangle indicates the monthly temperature value for that station: blue is the coldest, red is the hottest. Black rectangles are what CRU calls ‘missing data’, denoted by -999 in the file. I prefer to call it ‘invalid’ data, in order to distinguish it from the missing data due to stations that have been closed down. In the visualization, closed-down stations are encoded by a transparent rectangle, and their markers are also present.
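This ‘invalid data’ convention is easy to tally. The sketch below counts, per year, what fraction of a station’s monthly values are marked −999; the tiny in-line dataset is made up for illustration, since the real CRUTEM4 file has header structure not reproduced here.

```python
INVALID = -999.0

def invalid_fraction_by_year(monthly):
    """monthly: dict mapping (year, month) -> temperature value, with
    -999 marking invalid readings. Returns {year: fraction invalid}."""
    totals, bad = {}, {}
    for (year, _month), value in monthly.items():
        totals[year] = totals.get(year, 0) + 1
        if value == INVALID:
            bad[year] = bad.get(year, 0) + 1
    return {y: bad.get(y, 0) / totals[y] for y in totals}

# Made-up station record: 1980 complete, 2009 mostly invalid.
station = {(1980, m): 10.0 for m in range(1, 13)}
station.update({(2009, m): (INVALID if m > 3 else 8.0)
                for m in range(1, 13)})
print(invalid_fraction_by_year(station))
```

Summing such fractions over all stations in a region gives exactly the kind of coverage picture the screenshots above display.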

We couldn’t find the reasons for this invalid data. At the end of the post John Baez has provided some more literature on this question. It is worth noting that satellites can replace surface measurements only to a certain degree, as was highlighted by Stefan Rahmstorf in a blog post on RealClimate:

the satellites cannot measure the near-surface temperatures but only those overhead at a certain altitude range in the troposphere. And secondly, there are a few question marks about the long-term stability of these measurements (temporal drift).

Apart from the already mentioned collections, which were used in the IPCC’s AR4 report, there are some more institutional collections, and I also found some private weather collections. Among those private collections, however, I haven’t found any that goes back in time as far as CRUTEM4, though some of them might be more complete in terms of actual data than the collections that reach further back in time.

After discussing our visualization on the Azimuth Forum, it turned out that Nick Stokes, who runs the blog MOYHU in Australia, had had the same idea as me back in 2011: in that year he visualized station data using Google Earth, drawing on several different temperature collections.

If you have Google Earth installed then you can see his visualizations here:

The link is from the documentation page of Nick Stokes’ website.

What are the major collections?

As far as we can tell, the major global collections of temperature data that go back to the 18th, 19th or at least early 20th century seem to be the following. First, there are the collections already mentioned, which are also used in the AR4 report:

• the CRUTEM collection from the University of East Anglia (UK).

• the GISTEMP collection from the Goddard Institute of Space Science (GISS) at NASA (US).

• the collection of Lugina et al, which is a cooperative project involving NCDC/NOAA (US) (see also below), the University of Maryland (US), St. Petersburg State University (Russia) and the State Hydrological Institute, St. Petersburg, (Russia).

• the GHCN collection from NOAA.

Then there are these:

• the Berkeley Earth collection, called BEST

• The GSOD (Global Summary Of the Day) and Global Historical Climatology Network (GHCN) collections. Both of these are run by the National Climatic Data Center (NCDC) at the National Oceanic and Atmospheric Administration (NOAA) (US). It is not clear to me to what extent these two databases overlap with those of Lugina et al, which were made in cooperation with NCDC/NOAA. It is also not clear to me whether the GHCN collection was used for the AR4 report (it seems so). There is currently also an only partially working visualization of the GSOD data here. The sparse data in specific regions (see images above) is also apparent in this visualization.

• There is a comparatively new initiative called International Surface Temperatures Initiative (ISTI) which gathers collections in a databank and seeks to provide temperature data “from hourly to century timescales”. As written on their blog, this data seems not to be quality controlled:

The ISTI dataset is not quality controlled, so, after re-reading section 3.3 of Lawrimore et al 2011, I implemented an extremely simple quality control scheme, MADQC.

What did you visualize?

As far as I understand, the visualization by Nick Stokes (which you just opened) represents the collection BEST (before 1850-2010), the collections GSOD (1921-2010) and GHCN v2 (before 1850-1990) from NOAA, and CRUTEM3 (before 1850-2000).

CRUTEM3 has also been visualized in another way by Clive Best. In his visualization, however, it seems that apart from the station name one has no access to further data, such as station temperatures. Moreover, it is not possible to set a recent time range, which is important for checking how much the dataset changed in recent times.

Unfortunately, this limited ability to set a time range also holds for two visualizations by Nick Stokes, here and here. His first visualization, which is more exhaustive than the second, shows the following datasets: GHCN v3 and an adjusted version of it (GADJ), a preliminary dataset from ISTI, BEST and CRUTEM 4. So his first visualization seems quite exhaustive with respect to newer data as well. Unfortunately, as mentioned, setting the time range didn’t work properly (at least when I tested it). The same holds for his second visualization, of GHCN v3 data. So, I was only able to trace the deterioration of recent data manually (for example, by clicking on individual stations).

Tim and I visualized CRUTEM4, that is, the updated version of CRUTEM3.

What did you not visualize?

Newer datasets from after 2011/2012 (for example from the aforementioned ISTI or from the private collections) are not shown in the two visualizations you just opened.

Moreover, the visualizations mentioned here do not cover the GISS collection, which now uses NOAA’s GHCN v3 collections. The historical data of GISS could, however, be different from that of the other collections. The visualizations may also not cover the Lugina et al collection, which was mentioned above in the context of the IPCC report. Lugina et al could, however, be similar to GSOD (and GHCN) due to the cooperation between those projects. Moreover, GHCN v3 could be substantially more exhaustive than CRUTEM or GHCN v2 (as shown in Nick Stokes’ visualization). However, this last collection was, like CRUTEM4, released in the spring of 2011.

GHCN v3 is also represented in Nick Stokes’ visualizations (here and here). Upon manually investigating it, it didn’t seem to contain much crucial additional data not found in CRUTEM4. Since this manual exploration was not exhaustive, I may be wrong, but I don’t think so.

Hence, to our knowledge, quite a lot of the available data is visualized in the two visualizations you just opened, including, it seems, “almost all” (?) of the far-back-reaching, quality-controlled global surface temperature data collections as of 2011 or 2012. If you know of other similar collections, please let us know.

As mentioned above, private collections, and in particular the ISTI collection, may contain much more data. At the time of writing we don’t know to what extent those newer collections will be taken into account for the new IPCC reports, in particular the AR5 report. Moreover, it is not so clear how quality control will be ensured for those newer collections.

In conclusion, the previous IPCC reports seem to have been informed by the collections described here. Thus the coverage problems you see here need to be taken into account in discussions about the scientific basis of previous climate assessments.

Hopefully the visualizations from Nick Stokes and from Tim and me are ready for exploration! You can start to explore them yourself, and in particular see that the ‘deterioration of data’ is—just as in our CRUTEM4 visualization—also visible in Nick’s collections.

Note: I would like to thank people at the Azimuth Forum for pointing out references, and in particular Nick Stokes and Nathan Urban.

The effects of missing data

supplement by John Baez

There have always been fewer temperature recording stations in Arctic regions than in other regions. The following paper initiated a controversy over how this fact affects our picture of the Earth's climate:

• Kevin Cowtan and Robert G. Way, Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends, Quarterly Journal of the Royal Meteorological Society, 2013.

Here is some discussion:

• Kevin Cowtan, Robert Way, and Dana Nuccitelli, Global warming since 1997 more than twice as fast as previously estimated, new study shows, Skeptical Science, 13 November 2013.

• Stefan Rahmstorf, Global warming since 1997 underestimated by half, RealClimate, 13 November 2013, which highlights that satellites can replace surface measurements only to a certain degree.

• Anthony Watts' protest about Cowtan, Way and the Arctic, HotWhopper, 15 November 2013.

• Victor Venema, Temperature trend over last 15 years is twice as large as previously thought, Variable Variability, 13 November 2013.

However, these posts seem to say little about the increasing amount of ‘missing data’.

Azimuth News (Part 3)

6 February, 2015

post by David Tanzer

Here are some notes from the back offices of the Azimuth project. After a long and productive stay as the Azimuth tech guy, Andrew Stacey is moving along and passing the baton to me. As part of this change, we’ve relocated the servers to a new Azimuth hosted account, and updated the forum software.

The forum is now at a new location:

https://forum.azimuthproject.org

This is where we collaborate on writing wiki and blog articles, on research and education projects, and on software development and systems issues. It’s also a fun place to chat with other professionals in a wide range of science-related fields.

So come on down to the forum! If you want to post, just apply for an account there. Acceptance criteria are minimal. A sincere desire to help goes a long way.

Important: please use your full name, in "camel case" capitalization, e.g. DavidTanzer, as your user ID. I will then put the spaces into your user ID. (We want the spaces, but the registration form blocks them.) The point is that we want to present ourselves as we really are.
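(Putting those spaces back in is mechanical, by the way. Here's a hypothetical one-liner of my own, not the forum's actual code, that splits a camel-case name before each interior capital letter:)

```python
import re

def add_spaces(name):
    # Insert a space before every capital letter except the first.
    return re.sub(r"(?<!^)(?=[A-Z])", " ", name)

print(add_spaces("DavidTanzer"))  # David Tanzer
```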

Lebesgue’s Universal Covering Problem (Part 2)

3 February, 2015

A while back I described a century-old geometry problem posed by the famous French mathematician Lebesgue, inventor of our modern theory of areas and volumes.

This problem is famously difficult. So I’m happy to report some progress:

• John Baez, Karine Bagdasaryan and Philip Gibbs, Lebesgue’s universal covering problem.

But we’d like you to check our work! It will help if you’re good at programming. As far as the math goes, it’s just high-school geometry… carried to a fanatical level of intensity.

Here’s the story:

A subset of the plane has diameter 1 if the distance between any two points in this set is ≤ 1. You know what a circle of diameter 1 looks like. But an equilateral triangle with edges of length 1 also has diameter 1:

After all, two points in this triangle are farthest apart when they’re at two corners.
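For a finite set of points, the diameter is just the largest pairwise distance, so it's easy to check this claim by machine. Here is a small Python illustration of my own (not part of the paper) confirming that the unit equilateral triangle has diameter 1:

```python
from itertools import combinations
from math import dist, sqrt

def diameter(points):
    # Largest distance between any two points of a finite set.
    return max(dist(p, q) for p, q in combinations(points, 2))

# Vertices of an equilateral triangle with side length 1.
triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, sqrt(3) / 2)]
print(diameter(triangle))  # 1.0
```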

Note that this triangle doesn’t fit inside a circle of diameter 1:

There are lots of sets of diameter 1, so it’s interesting to look for a set that can contain them all.

In 1914, the famous mathematician Henri Lebesgue sent a letter to a pal named Pál. And in this letter he challenged Pál to find the convex set with smallest possible area such that every set of diameter 1 fits inside.

More precisely, he defined a universal covering to be a convex subset of the plane that can cover a translated, reflected and/or rotated version of every subset of the plane with diameter 1. And his challenge was to find the universal covering with the least area.

Pál worked on this problem, and 6 years later he published a paper on it. He found a very nice universal covering: a regular hexagon in which one can inscribe a circle of diameter 1. This has area

0.86602540…
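That figure is just √3/2, and you can verify it with nothing beyond high-school geometry: a regular hexagon circumscribed about a circle of diameter 1 has apothem 1/2, and its area is six triangles with the hexagon's side as base and the apothem as height. A quick check of my own:

```python
from math import sqrt, tan, pi

# A regular hexagon circumscribed about a circle of diameter 1
# has apothem (inradius) 1/2.
apothem = 0.5
side = 2 * apothem * tan(pi / 6)   # side length of the hexagon
area = 6 * 0.5 * side * apothem    # six triangles: base * height / 2
print(area)  # 0.8660254037844386, i.e. sqrt(3)/2
```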

But he also found a universal covering with less area, by removing two triangles from this hexagon—for example, the triangles C1C2C3 and E1E2E3 here:

Our paper explains why you can remove these triangles, assuming the hexagon was a universal covering in the first place. The resulting universal covering has area

0.84529946…

In 1936, Sprague went on to prove that more area could be removed from another corner of Pál’s original hexagon, giving a universal covering of area

0.844137708435…

In 1992, Hansen took these reductions even further by removing two more pieces from Pál’s hexagon. Each piece is a thin sliver bounded by two straight lines and an arc. The first piece is tiny. The second is downright microscopic!

Hansen claimed the areas of these regions were 4 · 10⁻¹¹ and 6 · 10⁻¹⁸. However, our paper redoes his calculation and shows that the second number is seriously wrong. The actual areas are 3.7507 · 10⁻¹¹ and 8.4460 · 10⁻²¹.

Philip Gibbs has created a Java applet illustrating Hansen’s universal cover. I urge you to take a look! You can zoom in and see the regions he removed:

• Philip Gibbs, Lebesgue’s universal covering problem.

I find that my laptop, a Windows machine, makes it hard to view Java applets because they’re a security risk. I promise this one is safe! To be able to view it, I had to go to the “Search programs and files” window, find the “Configure Java” program, go to “Security”, and add

to the “Exception Site List”. It’s easy once you know what to do.

And it’s worth it, because only the ability to zoom lets you get a sense of the puny slivers that Hansen removed! One is the region XE2T here, and the other is T’C3V:

You can use this picture to help you find these regions in Philip Gibbs' applet. But this picture is not to scale! In fact the smaller region, T'C3V, has length 3.7 · 10⁻⁷ and maximum width 1.4 · 10⁻¹⁴, tapering down to a very sharp point.

That's only a few atoms wide if you draw the whole hexagon on paper! And it's about 30 million times longer than it is wide. This is the sort of thing you can only draw with the help of a computer.

Anyway, Hansen’s best universal covering had an area of

0.844137708416…

This tiny improvement over Sprague’s work led Klee and Wagon to write:

it does seem safe to guess that progress on [this problem], which has been painfully slow in the past, may be even more painfully slow in the future.

However, our new universal covering removes about a million times more area than Hansen's larger region: a whopping 2.233 · 10⁻⁵. So, we get a universal covering with area

0.844115376859…
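As a sanity check, the numbers quoted above fit together: subtracting our area from Hansen's reproduces the removed area, up to the rounding of the quoted digits. A tiny Python check of my own:

```python
hansen = 0.844137708416   # Hansen's 1992 record, as quoted above
removed = 2.233e-5        # area removed by the new covering
ours = 0.844115376859     # area of the new universal covering

print(hansen - ours)      # about 2.233e-05
```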

The key is to slightly rotate the dodecagon shown in the above pictures, and then use the ideas of Pál and Sprague.

There’s a lot of room between our number and the best lower bound on this problem, due to Brass and Sharifi:

0.832

So, one way or another, we can expect a lot of progress now that computers are being brought to bear.

Read our paper for the details! If you want to check our work, we’ll be glad to answer lots of detailed questions. We want to rotate the dodecagon by an amount that minimizes the area of the universal covering we get, so we use a program to compute the area for many choices of rotation angle:

• Philip Gibbs, Java program.

The program is not very long—please study it or write your own, in your own favorite language! The output is here:

• Philip Gibbs, Java program output.

and as explained at the end of our paper, the best rotation angle is about 1.3°.
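If you'd rather prototype before reading the Java, note that the overall shape of the computation is just a one-dimensional minimization over the rotation angle. Here is a sketch in Python with a toy stand-in for the area function; the real function, implemented in Philip's program, computes the area of the covering for each angle, while the toy version below is purely illustrative and is rigged to have its minimum at 1.3°:

```python
from math import cos, radians

def covering_area(angle_deg):
    # Toy stand-in for the real computation: a smooth function with a
    # single minimum.  The actual program evaluates the area of the
    # covering obtained by rotating the dodecagon through this angle.
    return 0.8441 + 1e-6 * (1.0 - cos(radians(angle_deg - 1.3)))

# Scan angles from 0 to 3 degrees in steps of 0.01 degrees and keep
# the angle giving the least area.
best = min((k / 100 for k in range(301)), key=covering_area)
print(best)  # 1.3
```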