Give the Earth a Present: Help Us Save Climate Data

28 December, 2016

[Image: Getz Ice Shelf]

We’ve been busy backing up climate data before Trump becomes President. Now you can help too, with some money to pay for servers and storage space. Please give what you can at our Kickstarter campaign here:

Azimuth Climate Data Backup Project.

If we get $5000 by the end of January, we can save this data until we convince bigger organizations to take over. If we don’t get that much, we get nothing. That’s how Kickstarter works. Also, if you donate now, you won’t be billed until January 31st.

So, please help! It’s urgent.

I will make public how we spend this money. And if we get more than $5000, I’ll make sure it’s put to good use. There’s a lot of work we could do to make sure the data is authenticated, made easily accessible, and so on.

The idea

The safety of US government climate data is at risk. Trump plans to have climate change deniers running every agency concerned with climate change. So, scientists are rushing to back up the many climate databases held by US government agencies before he takes office.

We hope he won’t be rash enough to delete these precious records. But: better safe than sorry!

The Azimuth Climate Data Backup Project is part of this effort. So far our volunteers have backed up nearly 1 terabyte of climate data from NASA and other agencies. We’ll do a lot more! We just need some funds to pay for storage space and a server until larger institutions take over this task.

The team

• Jan Galkowski is a statistician with a strong interest in climate science. He works at Akamai Technologies, a company responsible for serving at least 15% of all web traffic. He began downloading climate data on the 11th of December.

• Shortly thereafter John Baez, a mathematician and science blogger at U. C. Riverside, joined in to publicize the project. He’d already founded an organization called the Azimuth Project, which helps scientists and engineers cooperate on environmental issues.

• When Jan started running out of storage space, Scott Maxwell jumped in. He used to work for NASA—driving a Mars rover among other things—and now he works for Google. He set up a 10-terabyte account on Google Drive and started backing up data himself.

• A couple of days later Sakari Maaranen joined the team. He’s a systems architect at Ubisecure, a Finnish firm, with access to a high-bandwidth connection. He set up a server, he’s downloading lots of data, he showed us how to authenticate it with SHA-256 hashes, and he’s managing many other technical aspects of this project.

There are other people involved too. You can watch the nitty-gritty details of our progress here:

Azimuth Backup Project – Issue Tracker.

and you can learn more here:

Azimuth Climate Data Backup Project.


Saving Climate Data (Part 3)

23 December, 2016

You can back up climate data, but how can anyone be sure your backups are accurate? Let’s suppose the databases you’ve backed up have been deleted, so that there’s no way to directly compare your backup with the original. And to make things really tough, let’s suppose that faked databases are being promoted as competitors with the real ones! What can you do?

One idea is ‘safety in numbers’. If a bunch of backups all match, and they were made independently, it’s less likely that they all suffer from the same errors.

Another is ‘safety in reputation’. If one bunch of backups of climate data is held by academic institutes of climate science, and another bunch is held by climate change denying organizations (conveniently listed here), you probably know which one you trust more. (And this is true even if you’re a climate change denier, though your answer may be different from mine.)

But a third idea is to use a cryptographic hash function. In very simplified terms, this is a method of taking a database and computing a fairly short string from it, called a ‘digest’.

[Diagram: how a cryptographic hash function works]

A good hash function makes it hard to change the database and get a new one with the same digest. So, if the person owning a database computes and publishes the digest, anyone can check that your backup is correct by computing its digest and comparing it to the published one.

It’s not foolproof, but it works well enough to be helpful.

Of course, it only works if we have some trustworthy record of the original digest. But the digest is much smaller than the original database: for example, in the popular method called SHA-256, the digest is 256 bits long. So it’s much easier to make copies of the digest than to back up the original database. These copies should be stored in trustworthy ways—for example, the Internet Archive.
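In case you want to check a backup yourself, here is a minimal sketch in Python of how that comparison goes. The file name and the expected digest are just placeholders, not real published values.

import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file as a hex string, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

published_digest = "0" * 64  # hypothetical placeholder, not a real published digest
if sha256_of_file("climate_data.nc") == published_digest:
    print("Backup matches the published digest.")
else:
    print("WARNING: digest mismatch; this backup differs from the original.")

This is just what the sha256sum -c command quoted below does, spelled out by hand.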

When Sakari Maaranen made a backup of the University of Idaho Gridded Surface Meteorological Data, he asked the custodians of that data to publish a digest, or ‘hash file’. One of them responded:

Sakari and others,

I have made the checksums for the UofI METDATA/gridMET files (1979-2015) as both md5sums and sha256sums.

You can find these hash files here:

https://www.northwestknowledge.net/metdata/data/hash.md5

https://www.northwestknowledge.net/metdata/data/hash.sha256

After you download the files, you can check the sums with:

md5sum -c hash.md5

sha256sum -c hash.sha256

Please let me know if something is not ideal and we’ll fix it!

Thanks for suggesting we do this!

Sakari replied:

Thank you so much! This means everything to public mirroring efforts. If you’d like to help further promoting this Best Practice, consider getting it recognized as a standard when you do online publishing of key public information.

1. Publishing those hashes is already a major improvement on its own.

2. Publishing them on a secure website offers people further guarantees that there has not been any man-in-the-middle.

3. Digitally signing the checksum files offers the best easily achievable guarantees of data integrity by the person(s) who sign the checksum files.

Please consider having these three steps included in your science organisation’s online publishing training and standard Best Practices.

Feel free to forward this message to whom it may concern. Feel free to rephrase as necessary.

As a separate item, public mirroring instructions for how to best download your data and/or public websites would further guarantee permanence of all your uniquely valuable science data and public contributions.

Right now we should get this message viral through the government funded science publishing people. Please approach the key people directly – avoiding the delay of using official channels. We need to have all the uniquely valuable public data mirrored before possible changes in funding.

Again, thank you for your quick response!

There are probably lots of things to be careful about. Here’s one. Maybe you can think of more, and ways to deal with them.

What if the data keeps changing with time? This is especially true of climate records, where new temperatures and so on are added to a database every day, or month, or year. Then I think we need to ‘time-stamp’ everything. The owners of the original database need to keep a list of digests, with the time each one was made. And when you make a copy, you need to record the time it was made.
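Here is one minimal way to do that, again just a sketch in Python rather than anything the backup teams are actually using: each time you take a snapshot, append the file’s SHA-256 digest and the current UTC time to a small manifest.

import datetime
import hashlib
import json

def record_snapshot(path, manifest="digests.json"):
    """Append a (timestamp, SHA-256 digest) record for `path` to a JSON manifest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    entry = {
        "file": path,
        "sha256": h.hexdigest(),
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
    }
    try:
        with open(manifest) as m:
            entries = json.load(m)
    except FileNotFoundError:
        entries = []
    entries.append(entry)
    with open(manifest, "w") as m:
        json.dump(entries, m, indent=2)
    return entry

The custodians of the original database would keep such a list of digests; anyone holding a copy would record when their copy was made and compare it against the entry for that date.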


Saving Climate Data (Part 2)

16 December, 2016

I want to get you involved in the Azimuth Environmental Data Backup Project, so click on that for more. But first some background.

A few days ago, many scientists, librarians, archivists, computer geeks and environmental activists started to make backups of US government environmental databases. We’re trying to beat the January 20th deadline just in case.

Backing up data is always a good thing, so there’s no point in arguing about politics or the likelihood that these backups are needed. The present situation is just a nice reason to hurry up and do some things we should have been doing anyway.

As of 2 days ago the story looked like this:

Saving climate data (Part 1), Azimuth, 13 December 2016.

A lot has happened since then, but much more needs to be done. Right now you can see a list of 90 databases to be backed up here:

Gov. Climate Datasets (Archive). (Click on the tiny word “Datasets” at the bottom of the page!)

Despite the word ‘climate’, the scope includes other environmental databases, and rightly so. Here is a list of databases that have been backed up:

The Climate Mirror Project.

By going here and clicking “Start Here to Help”:

Climate Mirror.

you can nominate a dataset for rescue, claim a dataset to rescue, let everyone know about a data rescue event, or help in some other way (which you must specify). There’s also other useful information on this page, which was set up by Nick Santos.

The overall effort is being organized by the Penn Program in the Environmental Humanities, or ‘PPEHLab’ for short, headed by Bethany Wiggin. If you want to know what’s going on, it helps to look at their blog:

DataRescuePenn.

However, the people organizing the project are currently overwhelmed with offers of help! People worldwide are proceeding to take action in a decentralized way! So, everything is a bit chaotic, and nobody has an overall view of what’s going on.

I can’t overstate this: if you think that ‘they’ have a plan and ‘they’ know what’s going on, you’re wrong. ‘They’ is us. Act accordingly.

Here’s a list of news articles, a list of ‘data rescue events’ where people get together with lots of computers and do stuff, and a bit about archives and archivists.

News

Here are some things to read:

• Jason Koebler, Researchers are preparing for Trump to delete government science from the web, Vice, 13 December 2016.

• Brady Dennis, Scientists frantically copying U.S. climate data, fearing it might vanish under Trump, Washington Post, 13 December, 2016. (Also at the Chicago Tribune.)

• Eric Holthaus, Why I’m trying to preserve federal climate data before Trump takes office, Washington Post, 13 December 2016.

• Nicole Mortillaro, U of T heads ‘guerrilla archiving event’ to preserve climate data ahead of Trump presidency, CBC News, 14 December 2016.

• Audie Kornish and Eric Holthaus, Scientists race to preserve climate change data before Trump takes office, All Things Considered, National Public Radio, 14 December 2016.

Data rescue events

There’s one in Toronto:

Guerrilla archiving event, 10 am – 4 pm EST, Saturday 17 December 2016. Location: Bissell Building, 4th Floor, 140 St. George St. University of Toronto.

There will be one in Philadelphia:

DataRescuePenn Data Harvesting, Friday–Saturday 13–14 January 2017. Location: not determined yet, probably somewhere at the University of Pennsylvania, Philadelphia.

I hear there will also be events in New York City and Los Angeles, but I don’t know details. If you do, please tell me!

Archives and archivists

Today I helped catalyze a phone conversation between Bethany Wiggin, who heads the PPEHLab, and Nancy Beaumont, head of the Society of American Archivists. Digital archivists have a lot of expertise in saving information, so their skills are crucial here. Big wads of disorganized data are not very useful.

In this conversation I learned that some people are already in contact with the Internet Archive. This archive always tries to save US government websites and databases at the end of each presidential term. Their efforts are not limited to environmental data, and they save not only webpages but entire databases, e.g. data in ftp sites. You can nominate sites to be saved here:

• Internet Archive, End of Presidential Term Harvest 2016.

For more details read this:

• Internet Archive blog, Preserving U.S. Government Websites and Data as the Obama Term Ends, 15 December 2016.


Saving Climate Data (Part 1)

13 December, 2016


I try to stay out of politics on this website. This post is not mainly about politics. It’s a call to action. We’re trying to do something rather simple and clearly worthwhile. We’re trying to create backups of US government climate data.

The background is, of course, political. Many signs point to a dramatic change in US climate policy:

• Oliver Milman, Trump’s transition: sceptics guide every agency dealing with climate change, The Guardian, 12 December 2016.

So, scientists are now backing up large amounts of climate data, just in case the Trump administration tries to delete it after he takes office on January 20th:

• Brady Dennis, Scientists are frantically copying U.S. climate data, fearing it might vanish under Trump, Washington Post, 13 December 2016.

Of course saving the data publicly available on US government sites is not nearly as good as keeping climate programs fully funded! New data is coming in all the time from satellites and other sources. We need it—and we need the experts who understand it.

Also, it’s possible that the Trump administration won’t go so far as trying to delete big climate science databases. Still, I think it can’t be a bad thing to have backups. Or as my mother always said: better safe than sorry!

Quoting the Washington Post article:

Alarmed that decades of crucial climate measurements could vanish under a hostile Trump administration, scientists have begun a feverish attempt to copy reams of government data onto independent servers in hopes of safeguarding it from any political interference.

The efforts include a “guerrilla archiving” event in Toronto, where experts will copy irreplaceable public data, meetings at the University of Pennsylvania focused on how to download as much federal data as possible in the coming weeks, and a collaboration of scientists and database experts who are compiling an online site to harbor scientific information.

“Something that seemed a little paranoid to me before all of a sudden seems potentially realistic, or at least something you’d want to hedge against,” said Nick Santos, an environmental researcher at the University of California at Davis, who over the weekend began copying government climate data onto a nongovernment server, where it will remain available to the public. “Doing this can only be a good thing. Hopefully they leave everything in place. But if not, we’re planning for that.”

[…]

“What are the most important .gov climate assets?” Eric Holthaus, a meteorologist and self-proclaimed “climate hawk,” tweeted from his Arizona home Saturday evening. “Scientists: Do you have a US .gov climate database that you don’t want to see disappear?”

Within hours, responses flooded in from around the country. Scientists added links to dozens of government databases to a Google spreadsheet. Investors offered to help fund efforts to copy and safeguard key climate data. Lawyers offered pro bono legal help. Database experts offered to help organize mountains of data and to house it with free server space. In California, Santos began building an online repository to “make sure these data sets remain freely and broadly accessible.”

In Philadelphia, researchers at the University of Pennsylvania, along with members of groups such as Open Data Philly and the software company Azavea, have been meeting to figure out ways to harvest and store important data sets.

At the University of Toronto this weekend, researchers are holding what they call a “guerrilla archiving” event to catalogue key federal environmental data ahead of Trump’s inauguration. The event “is focused on preserving information and data from the Environmental Protection Agency, which has programs and data at high risk of being removed from online public access or even deleted,” the organizers said. “This includes climate change, water, air, toxics programs.”

The event is part of a broader effort to help San Francisco-based Internet Archive with its End of Term 2016 project, an effort by university, government and nonprofit officials to find and archive valuable pages on federal websites. The project has existed through several presidential transitions.

I hope that small “guerilla archiving” efforts will be dwarfed by more systematic work, because it’s crucial that databases be copied along with all relevant metadata—and some sort of cryptographic certificate of authenticity, if possible. However, getting lots of people involved is bound to be a good thing, politically speaking.

If you have good computer skills, good understanding of databases, or lots of storage space, please get involved. Efforts are being coordinated by Bethany Wiggin and others at the Data Refuge Project:

• PPEHLab (Penn Program in the Environmental Humanities), DataRefuge.

You can contact them at DataRefuge@ppehlab.org. Nick Santos is also involved, and if you want to get “more plugged into the project” you can contact him here. They are trying to build a climate database mirror website here:

Climate Mirror.

At the help form on this website you can nominate a dataset for rescue, claim a dataset to rescue, let them know about a data rescue event, or help in some other way (which you must specify).

PPEHLab and Penn Libraries are organizing a data rescue event this Thursday:

• PPEHLab, DataRefuge meeting, 14 December 2016.

At the American Geophysical Union meeting in San Francisco, where more than 20,000 earth and climate scientists gather from around the world, there was a public demonstration today starting at 1:30 PST:

Rally to stand up for science, 13 December 2016.

And the “guerilla archiving” hackathon in Toronto is this Saturday—see below. If you know people with good computer skills in Toronto, get them to check it out!

To follow progress, also read Eric Holthaus’s tweets and replies here:

Eric Holthaus.

Guerrilla archiving in Toronto

Here are details on this:

Guerrilla Archiving Hackathon

Date: 10am-4pm, December 17, 2016

Location: Bissell Building, 4th Floor, 140 St. George St. University of Toronto

RSVP and up-to-date information: Guerilla archiving: saving environmental data from Trump.

Bring: laptops, power bars, and snacks. Coffee and pizza provided.

This event collaborates with the Internet Archive’s End of Term 2016 project, which seeks to archive the federal online pages and data that are in danger of disappearing during the Trump administration. Our event is focused on preserving information and data from the Environmental Protection Agency, which has programs and data at high risk of being removed from online public access or even deleted. This includes climate change, water, air, toxics programs. This project is urgent because the Trump transition team has identified the EPA and other environmental programs as priorities for the chopping block.

The Internet Archive is a San Francisco-based nonprofit digital library which aims at preserving and making universally accessible knowledge. Its End of Term web archive captures and saves U.S. Government websites that are at risk of changing or disappearing altogether during government transitions. The Internet Archive has asked volunteers to help select and organize information that will be preserved before the Trump transition.

End of Term web archive: http://eotarchive.cdlib.org/2016.html

New York Times article: “Harvesting Government History, One Web Page at a Time”

Activities:

Identifying endangered programs and data

Seeding the End of Term webcrawler with priority URLs

Identifying and mapping the location of inaccessible environmental databases

Hacking scripts to make hard-to-reach databases accessible to the webcrawler

Building a toolkit so that other groups can hold similar events

Skills needed: We need all kinds of people — and that means you!

People who can locate relevant webpages for the Internet Archive’s webcrawler

People who can identify data targeted for deletion by the Trump transition team and the organizations they work with

People with knowledge of government websites and information, including the EPA

People with library and archive skills

People who are good at navigating databases

People interested in mapping where inaccessible data is located at the EPA

Hackers to figure out how to extract data and URLs from databases (in a way that Internet Archive can use)

People with good organization and communication skills

People interested in creating a toolkit for reproducing similar events

Contacts: michelle.murphy@utoronto.ca, p.keilty@utoronto.ca


The Price of Everything

29 February, 2016

Astronomers using the Hubble Space Telescope have captured the most comprehensive picture ever assembled of the evolving Universe — and one of the most colourful. The study is called the Ultraviolet Coverage of the Hubble Ultra Deep Field (UVUDF) project.

I’m wondering whether anyone has attempted to compute the value of the whole Universe, in dollars.

This strikes me as a crazy idea—a kind of reductio ad absurdum of the economist’s worldview. But people have come pretty close, so I figure it’s just a matter of time. We might as well try it now.

Let me explain.

The price of the Earth

There’s a trend toward trying to estimate the value of ‘ecosystem services’, which means ‘the benefits of nature to households, communities, and economies’. There’s a practical reason to do this. Governments are starting to offer money to farmers and landowners in exchange for managing their land in a way that provides some sort of ecological service. So, they want to know how much these services are worth. You can read about this trend here:

• Wikipedia, Payment for ecosystem services.

It’s a booming field in economics. So, it’s perhaps inevitable that eventually someone would try to estimate the value of ecosystem services that the whole Earth provides to humanity each year:

• Robert Costanza et al., The value of the world’s ecosystem services and natural capital, Nature 387 (1997), 253–260.

They came up with an estimate of $33 trillion per year, which was almost twice the global GDP at the time. More precisely:

Abstract. The services of ecological systems and the natural capital stocks that produce them are critical to the functioning of the Earth’s life-support system. They contribute to human welfare, both directly and indirectly, and therefore represent part of the total economic value of the planet. We have estimated the current economic value of 17 ecosystem services for 16 biomes, based on published studies and a few original calculations. For the entire biosphere, the value (most of which is outside the market) is estimated to be in the range of US $16–54 trillion (10^12) per year, with an average of US $33 trillion per year. Because of the nature of the uncertainties, this must be considered a minimum estimate. Global gross national product total is around US $18 trillion per year.

You can read the paper if you’re interested in the methodology.

In 2014, some of the authors of this paper redid the assessment—using a slightly modified methodology but with more detailed 2011 data—and increased their estimate to between $125–145 trillion a year:

• Robert Costanza, Changes in the global value of ecosystem services, Global Environmental Change 26 (2014), 152–158.

They also estimated a $4.3–20.2 trillion loss of ecosystem services due to land use change during the period from 1997 to 2011. While still difficult to define, this loss per year could be more meaningful than the total value of ecosystem services. Sometimes a change in some quantity can be measured even when the quantity itself cannot: a famous example is the electrostatic potential!

The price of humanity

Back in 1984, before he became the famous guru of string theory, the physicist Ed Witten did a rough calculation and got a surprising result:

• Edward Witten, Cosmic separation of phases, Phys. Rev. D 30 (1984), 272–285.

Protons and neutrons are made of up and down quarks held together by gluons. Strange quarks are more massive and thus only show up in shorter-lived particles. However, at high pressures, when nuclear matter becomes a quark-gluon plasma, a mix of up, down and strange quarks could have less energy than just ups and downs!

The reason is the Pauli exclusion principle. You can only fit one up and one down quark into each energy level (or two of each, if you count their spins), so as you pack in more quarks the energy has to increase. But adding strange quarks to the mix means you can pack three kinds of quarks into each energy level (or six, counting spin). So, you can have more quarks at low energies. At high pressures, this effect will become more important than the fact that strange quarks have more mass.
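Here is a back-of-envelope version of that argument, ignoring quark masses and interactions entirely, so take it only as a sketch of the scaling, not Witten’s actual calculation. For an ultrarelativistic degenerate Fermi gas, a single quark species with degeneracy $g$ (from spin and color) and number density $n$ has

$$ p_F = \hbar \left( \frac{6\pi^2 n}{g} \right)^{1/3}, \qquad u \propto n^{4/3} g^{-1/3}, $$

where $u$ is the kinetic energy density. Splitting a fixed total quark density $n$ equally among $N_f$ flavors gives

$$ u_{\mathrm{total}} \propto N_f \left( \frac{n}{N_f} \right)^{4/3} g^{-1/3} = n^{4/3} g^{-1/3} N_f^{-1/3}, $$

so going from two flavors to three lowers the kinetic energy by a factor of $(2/3)^{1/3} \approx 0.87$. At high enough pressure this gain can outweigh the extra rest-mass energy of the strange quarks.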

For this reason, astronomers have become interested in the possibility of ‘strange stars’, more dense than ordinary neutron stars:

• Fridolin Weber, Strange quark matter and compact stars, Progress in Particle and Nuclear Physics 54 (2005), 193–288.

Unfortunately, nobody has seen evidence for them, as far as I can tell.

But the really weird part is that Witten’s calculations suggested that ‘strange matter’, containing a mix of up, down and strange quarks, could even be more stable than normal matter at ordinary temperatures and pressures! His calculation was very rough, so I wouldn’t take this too seriously. The fact that we don’t actually see strange matter is a very good sign that it’s not more stable than ordinary matter. In principle ordinary matter could be just ‘metastable’, waiting to turn into strange matter under the right conditions—sort of like how water turned into ice-9 in Kurt Vonnegut’s novel Cat’s Cradle. But it seems implausible.

Nonetheless, when the Relativistic Heavy Ion Collider or RHIC was getting ready to start colliding nuclei at high speeds at the Brookhaven National Laboratory, some people got worried that the resulting quark-gluon plasma could turn into strange matter—and then catalyze a reaction in which the whole Earth was quickly transformed into strange matter!

This is an interesting example of a disaster that would have huge consequences but is very improbable, and for which it’s hard to estimate the precise probability—or the precise cost.

So, a debate started!

Needless to say, not all the participants behaved rationally. Frank Close, professor of physics at the University of Oxford, said:

the chance of this happening is like you winning the major prize on the lottery 3 weeks in succession; the problem is that people believe it is possible to win the lottery 3 weeks in succession.

Eventually John Marburger, the director of the Brookhaven National Laboratory, commissioned a risk assessment by a committee of physicists before authorizing RHIC to begin operating:

• R. L. Jaffe, W. Busza, J. Sandweiss and F. Wilczek, Review of speculative “disaster scenarios” at RHIC, 1999.

In 2000, a lawyer and former physics lab technician named Walter L. Wagner tried to stop experiments at RHIC by filing federal lawsuits in San Francisco and New York. Both suits were dismissed. The experiment went ahead, nuclei of gold were collided to form a quark-gluon plasma with a temperature of 4 trillion kelvin, and we lucked out: nothing bad happened.

This is very interesting, but what matters to me now is this book:

• Richard A. Posner, Catastrophe: Risk and Response, Oxford U. Press, Oxford, 2004.

in which a distinguished US judge attempted to do a cost-benefit analysis of the Relativistic Heavy Ion Collider.

He estimated a $600 million cost for constructing the device and a $1.1 billion cost for operating it for ten years (discounted at a rate of 3% per year). He guessed at a potential total benefit of $2.1 billion—which he said was probably a huge overestimate. This gave a net benefit of $400 million.

Then he took into account the risk that the experiment would destroy the Earth! He very conservatively estimated the price of a human life at $50,000. He multiplied this by the number of people now living, and doubled the result to include the value of all people who might live in the future, getting $600 trillion.

This may seem odd, but discounting the value of future goods can make even an endless stream of future human lives have a finite total value. More annoying to me is that he only took humans into account: as far as I can tell, he did not assign any value to any other organisms on the Earth!

But let’s not make fun of Posner: he freely admitted that this result was very rough and perhaps meaningless! He was clearly just trying to start a discussion. His book tries to examine both sides of every issue.

Anyway: his estimate of the cost of human extinction was $600 trillion. He then multiplied this by the probability that RHIC could wipe out the human race. He estimated that probability at 1 in 10 million per year, or 1 in a million for a ten-year-long experiment. So, he got $600 million as the extra cost of RHIC due to the possibility that it could make the human race go extinct.

Taking the net benefit of $400 million and subtracting this $600 million cost of our possible extinction, he got a negative number. So, he argued, we should not turn on RHIC.
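To make the arithmetic explicit, here it is as a tiny Python script. The numbers are Posner’s rough figures as described above; the script is just a back-of-envelope check.

# Posner's rough cost-benefit analysis of RHIC, using the figures described above
construction_cost = 0.6e9    # $600 million to build the collider
operating_cost    = 1.1e9    # $1.1 billion for ten years of operation (discounted at 3%/year)
total_benefit     = 2.1e9    # his (probably overestimated) total benefit

net_benefit = total_benefit - construction_cost - operating_cost    # about $400 million

value_of_humanity = 600e12   # $600 trillion: $50,000 per life times ~6 billion people,
                             # doubled to count people who might live in the future
p_extinction = 1e-6          # 1 in 10 million per year, over a ten-year experiment

extinction_cost = value_of_humanity * p_extinction                  # about $600 million

print(net_benefit, extinction_cost, net_benefit - extinction_cost)
# roughly: 400 million, 600 million, -200 million

So on his numbers the experiment comes out about $200 million in the red.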

Clearly there are lots of problems with this idea. I don’t believe the entire human race has a well-defined monetary value. I’m inclined to believe that monetary value only makes sense for things that you can buy and sell. But it’s not so easy to figure out the ‘correct’ way to make decisions that involve small probabilities of huge disasters.

The price of the Universe

Suppose, just for fun, that we accept Posner’s $600 trillion estimate for the value of the Earth. What then is the value of the Universe?

I think it’s a stupid question, but I feel sure someone is going to answer it someday, so it might as well be me. Maybe someone has already done it: if so, let me know. But let me give it a try.

I’ll be very relaxed about this, so it won’t take long.

We could try to calculate the value of the Universe by estimating the number of planets with intelligent life and multiplying that by $600 trillion. It’s very hard to guess the number of such planets per cubic megaparsec. But since the Universe seems to extend indefinitely, the result is infinite.

That’s my best estimate: infinity!

But that’s not very satisfying. What if we limit ourselves to the observable Universe?

No matter what I say, I’ll get in trouble, but let me estimate that there’s one intelligent civilization per galaxy.

A conservative estimate is that there are 100 billion galaxies in the observable universe. There might be twice as many, but perhaps a lot of them are small or less likely to support life for various other reasons.

So, I get $600 trillion times 100 billion, or

$60,000,000,000,000,000,000,000,000

as my estimate of the value of the observable Universe. That’s $6 × 10^25, or $60 septillion.

The price of everything

The title of the article is taken from a line in Oscar Wilde’s play Lady Windermere’s Fan:

Cecil Graham: What is a cynic?

Lord Darlington: A man who knows the price of everything, and the value of nothing.


Aggressively Expanding Civilizations

5 February, 2016

Ever since I became an environmentalist, the potential destruction wrought by aggressively expanding civilizations has been haunting my thoughts. Not just here and now, where it’s easy to see, but in the future.

In October 2006, I wrote this in my online diary:

A long time ago on this diary, I mentioned my friend Bruce Smith’s nightmare scenario. In the quest for ever faster growth, corporations evolve toward ever faster exploitation of natural resources. The Earth is not enough. So, ultimately, they send out self-replicating von Neumann probes that eat up solar systems as they go, turning the planets into more probes. Different brands of probes will compete among each other, evolving toward ever faster expansion. Eventually, the winners will form a wave expanding outwards at nearly the speed of light—demolishing everything behind them, leaving only wreckage.

The scary part is that even if we don’t let this happen, some other civilization might.

The last point is the key one. Even if something is unlikely, in a sufficiently large universe it will happen, as long as it’s possible. And then it will perpetuate itself, as long as it’s evolutionarily fit. Our universe seems pretty darn big. So, even if a given strategy is hard to find, if it’s a winning strategy it will get played somewhere.

So, even in this nightmare scenario of "spheres of von Neumann probes expanding at near lightspeed", we don’t need to worry about a bleak future for the universe as a whole—any more than we need to worry that viruses will completely kill off all higher life forms. Some fraction of civilizations will probably develop defenses in time to repel the onslaught of these expanding spheres.

It’s not something I stay awake worrying about, but it’s a depressingly plausible possibility. As you can see, I was trying to reassure myself that everything would be okay, or at least acceptable, in the long run.

Even earlier, S. Jay Olson and I wrote a paper together on the limitations that quantum gravity places on accurately measuring distances. If you try to measure a distance too accurately, you’ll need to concentrate so much energy in such a small space that you’ll create a black hole!
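The rough argument goes like this; it’s a standard heuristic, not the detailed calculation in our paper. To resolve a length $\Delta x$ you need a probe with energy at least $E \sim \hbar c/\Delta x$. But once that much energy is squeezed into a region of size $\Delta x$, the region lies within its own Schwarzschild radius $r_s \sim G E/c^4$. Demanding $r_s \lesssim \Delta x$ gives

$$ \Delta x \gtrsim \frac{G E}{c^4} \sim \frac{G\hbar}{c^3\,\Delta x} \quad\Longrightarrow\quad \Delta x \gtrsim \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35} \ \mathrm{m}, $$

the Planck length.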

That was in 2002. Later I lost touch with him. But now I’m happy to discover that he’s doing interesting work on quantum gravity and quantum information processing! He is now at Boise State University in Idaho, his home state.

But here’s the cool part: he’s also studying aggressively expanding civilizations.

Expanding bubbles

What will happen if some civilizations start aggressively expanding through the Universe at a reasonable fraction of the speed of light? We don’t have to assume most of them do. Indeed, there can’t be too many, or they’d already be here! More precisely, the density of such civilizations must be low at the present time. The number of them could be infinite, since space is apparently infinite. But none have reached us. We may eventually become such a civilization, but we’re not one yet.

Each such civilization will form a growing ‘bubble’: an expanding sphere of influence. And occasionally, these bubbles will collide!

Here are some pictures from a simulation he did:





As he notes, the math of these bubbles has already been studied by researchers interested in inflationary cosmology, like Alan Guth. These folks have considered the possibility that in the very early Universe, most of space was filled with a ‘false vacuum’: a state of matter that resembles the actual vacuum, but has higher energy density.

A false vacuum could turn into the true vacuum, liberating energy in the form of particle-antiparticle pairs. However, it might not do this instantly! It might be ‘metastable’, like ball number 1 in this picture:

It might need a nudge to ‘roll over the hill’ (metaphorically) and down into the lower-energy state corresponding to the true vacuum, shown as ball number 3. Or, thanks to quantum mechanics, it might ‘tunnel’ through this hill.

The balls and the hill are just an analogy. What I mean is that the false vacuum might need to go through a stage of having even higher energy density before it could turn into the true vacuum. Random fluctuations, either quantum-mechanical or thermal, could make this happen. Such a random fluctuation could happen in one location, forming a ‘bubble’ of true vacuum that—under certain conditions—would rapidly expand.

It’s actually not very different from bubbles of steam forming in superheated water!

But here’s the really interesting thing Jay Olson noted in his first paper on this subject. Research on bubbles in inflationary cosmology could actually be relevant to aggressively expanding civilizations!

Why? Just as a bubble of expanding true vacuum has different pressure than the false vacuum surrounding it, the same might be true for an aggressively expanding civilization. If they are serious about expanding rapidly, they may convert a lot of matter into radiation to power their expansion. And while energy is conserved in this process, the pressure of radiation in space is a lot bigger than the pressure of matter, which is almost zero.

General relativity says that energy density slows the expansion of the Universe. But also—and this is probably less well known among nonphysicists—it says that pressure has a similar effect. Also, as the Universe expands, the energy density and pressure of radiation drop at a different rate than the energy density of matter.

So, the expansion of the Universe itself, on a very large scale, could be affected by aggressively expanding civilizations!
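For readers who want the formula behind this, here is the standard cosmology, stated as background rather than as part of Olson’s model. The second Friedmann equation says the acceleration of the scale factor $a(t)$ depends on pressure as well as energy density:

$$ \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right), \qquad \rho_{\mathrm{matter}} \propto a^{-3}, \quad \rho_{\mathrm{radiation}} \propto a^{-4}, \quad \rho_\Lambda = \mathrm{const}, $$

where $\rho$ and $p$ sum over all components: matter has $p \approx 0$, radiation has $p = \rho c^2/3$, and dark energy has $p = -\rho c^2$ (which is why it accelerates the expansion). So converting matter to radiation increases the pressure term and temporarily slows the expansion; but because the radiation term falls off as $a^{-4}$, dark energy eventually dominates and $a(t)$ grows exponentially.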

The fun part is that Jay Olson actually studies this in a quantitative way, making some guesses about the numbers involved. Of course there’s a huge amount of uncertainty in all matters concerning aggressively expanding high-tech civilizations, so he actually considers a wide range of possible numbers. But if we assume a civilization turns a large fraction of matter into radiation, the effects could be significant!

The effect of the extra pressure due to radiation would be to temporarily slow the expansion of the Universe. But the expansion would not be stopped. The radiation will gradually thin out. So eventually, dark energy—which has negative pressure, and does not thin out as the Universe expands—will win. Then the Universe will expand exponentially, as it is already beginning to do now.

(Here I am ignoring speculative theories where dark energy has properties that change dramatically over time.)

Jay Olson’s work

Here are his papers on this subject. The abstracts sketch his results, but you have to look at the papers to see how nice they are. He’s thought quite carefully about these things.

• S. Jay Olson, Homogeneous cosmology with aggressively expanding civilizations, Classical and Quantum Gravity 32 (2015) 215025.

Abstract. In the context of a homogeneous universe, we note that the appearance of aggressively expanding advanced life is geometrically similar to the process of nucleation and bubble growth in a first-order cosmological phase transition. We exploit this similarity to describe the dynamics of life saturating the universe on a cosmic scale, adapting the phase transition model to incorporate probability distributions of expansion and resource consumption strategies. Through a series of numerical solutions spanning several orders of magnitude in the input assumption parameters, the resulting cosmological model is used to address basic questions related to the intergalactic spreading of life, dealing with issues such as timescales, observability, competition between strategies, and first-mover advantage. Finally, we examine physical effects on the universe itself, such as reheating and the backreaction on the evolution of the scale factor, if such life is able to control and convert a significant fraction of the available pressureless matter into radiation. We conclude that the existence of life, if certain advanced technologies are practical, could have a significant influence on the future large-scale evolution of the universe.

• S. Jay Olson, Estimates for the number of visible galaxy-spanning civilizations and the cosmological expansion of life.

Abstract. If advanced civilizations appear in the universe with a desire to expand, the entire universe can become saturated with life on a short timescale, even if such expanders appear but rarely. Our presence in an untouched Milky Way thus constrains the appearance rate of galaxy-spanning Kardashev type III (K3) civilizations, if it is assumed that some fraction of K3 civilizations will continue their expansion at intergalactic distances. We use this constraint to estimate the appearance rate of K3 civilizations for 81 cosmological scenarios by specifying the extent to which humanity could be a statistical outlier. We find that in nearly all plausible scenarios, the distance to the nearest visible K3 is cosmological. In searches where the observable range is limited, we also find that the most likely detections tend to be expanding civilizations who have entered the observable range from farther away. An observation of K3 clusters is thus more likely than isolated K3 galaxies.

• S. Jay Olson, On the visible size and geometry of aggressively expanding civilizations at cosmological distances.

Abstract. If a subset of advanced civilizations in the universe choose to rapidly expand into unoccupied space, these civilizations would have the opportunity to grow to a cosmological scale over the course of billions of years. If such life also makes observable changes to the galaxies they inhabit, then it is possible that vast domains of life-saturated galaxies could be visible from the Earth. Here, we describe the shape and angular size of these domains as viewed from the Earth, and calculate median visible sizes for a variety of scenarios. We also calculate the total fraction of the sky that should be covered by at least one domain. In each of the 27 scenarios we examine, the median angular size of the nearest domain is within an order of magnitude of a percent of the whole celestial sphere. Observing such a domain would likely require an analysis of galaxies on the order of a giga-lightyear from the Earth.

Here are the main assumptions in his first paper:

1. At early times (relative to the appearance of life), the universe is described by the standard cosmology – a benchmark Friedmann-Robertson-Walker (FRW) solution.

2. The limits of technology will allow for self-reproducing spacecraft, sustained relativistic travel over cosmological distances, and an efficient process to convert baryonic matter into radiation.

3. Control of resources in the universe will tend to be dominated by civilizations that adopt a strategy of aggressive expansion (defined as a frontier which expands at a large fraction of the speed of the individual spacecraft involved), rather than those expanding diffusively due to the conventional pressures of population dynamics.

4. The appearance of aggressively expanding life in the universe is a spatially random event and occurs at some specified, model-dependent rate.

5. Aggressive expanders will tend to expand in all directions unless constrained by the presence of other civilizations, will attempt to gain control of as much matter as is locally available for their use, and once established in a region of space, will consume mass as an energy source (converting it to radiation) at some specified, model-dependent rate.


Ken Caldeira on What To Do

25 January, 2016

Famous climate scientist Ken Caldeira has a new article out:

• Ken Caldeira, Stop Emissions!, Technology Review, January/February 2016, 41–43.

Let me quote a bit:

Many years ago, I protested at the gates of a nuclear power plant. For a long time, I believed it would be easy to get energy from biomass, wind, and solar. Small is beautiful. Distributed power, not centralized.

I wish I could still believe that.

My thinking changed when I worked with Marty Hoffert of New York University on research that was first published in Nature in 1998. It was the first peer-reviewed study that examined the amount of near-zero-emission energy we would need in order to solve the climate problem. Unfortunately, our conclusions still hold. We need massive deployment of affordable and dependable near-zero-emission energy, and we need a major research and development program to develop better energy and transportation systems.

It’s true that wind and solar power have been getting much more attractive in recent years. Both have gotten significantly cheaper. Even so, neither wind nor solar is dependable enough, and batteries do not yet exist that can store enough energy at affordable prices to get a modern industrial society through those times when the wind is not blowing and the sun is not shining.

Recent analyses suggest that wind and solar power, connected by a continental-scale electric grid and using natural-gas power plants to provide backup, could reduce greenhouse-gas emissions from electricity production by about two-thirds. But generating electricity is responsible for only about one-third of total global carbon dioxide emissions, which are increasing by more than 2 percent a year. So even if we had this better electric sector tomorrow, within a decade or two emissions would be back where they are today.

We need to bring much, much more to bear on the climate problem. It can’t be solved unless it is addressed as seriously as we address national security. The politicians who go to the Paris Climate Conference are making commitments that fall far short of what would be needed to substantially reduce climate risk.

Daunting math

Four weeks ago, a hurricane-strength cyclone smashed into Yemen, in the Arabian Peninsula, for the first time in recorded history. Also this fall, a hurricane with the most powerful winds ever measured slammed into the Pacific coast of Mexico.

Unusually intense storms such as these are a predicted consequence of global warming, as are longer heat waves and droughts and many other negative weather-related events that we can expect to become more commonplace. Already, in the middle latitudes of the Northern Hemisphere, average temperatures are increasing at a rate that is equivalent to moving south about 10 meters (30 feet) each day. This rate is about 100 times faster than most climate change that we can observe in the geologic record, and it gravely threatens biodiversity in many parts of the world. We are already losing about two coral reefs each week, largely as a direct consequence of our greenhouse-gas emissions.

Recently, my colleagues and I studied what will happen in the long term if we continue pulling fossil carbon out of the ground and releasing it into the atmosphere. We found that it would take many thousands of years for the planet to recover from this insult. If we burn all available fossil-fuel resources and dump the resulting carbon dioxide waste in the sky, we can expect global average temperatures to be 9 °C (15 °F) warmer than today even 10,000 years into the future. We can expect sea levels to be about 60 meters (200 feet) higher than today. In much of the tropics, it is possible that mammals (including us) would not be able to survive outdoors in the daytime heat. Thus, it is essential to our long-term well-being that fossil-fuel carbon does not go into our atmosphere.

If we want to reduce the threat of climate change in the near future, there are actions to take now: reduce emissions of short-lived pollutants such as black carbon, cut emissions of methane from natural-gas fields and landfills, and so on. We need to slow and then reverse deforestation, adopt electric cars, and build solar, wind, and nuclear plants.

But while existing technologies can start us down the path, they can’t get us to our goal. Most analysts believe we should decarbonize electricity generation and use electricity for transportation, industry, and even home heating. (Using electricity for heating is wildly inefficient, but there may be no better solution in a carbon-constrained world.) This would require a system of electricity generation several times larger than the one we have now. Can we really use existing technology to scale up our system so dramatically while markedly reducing emissions from that sector?

Solar power is the only energy source that we know can power civilization indefinitely. Unfortunately, we do not have global-scale electricity grids that could wheel solar energy from day to night. At the scale of the regional electric grid, we do not have batteries that can balance daytime electricity generation with nighttime demand.

We should do what we know how to do. But all the while, we need to be thinking about what we don’t know how to do. We need to find better ways to generate, store, and transmit electricity. We also need better zero-carbon fuels for the parts of the economy that can’t be electrified. And most important, perhaps, we need better ways of using energy.

Energy is a means, not an end. We don’t want energy so much as we want what it makes possible: transportation, entertainment, shelter, and nutrition. Given United Nations estimates that the world will have at least 11 billion people by the end of this century (50 percent more than today), and given that we can expect developing economies to grow rapidly, demand for services that require energy is likely to increase by a factor of 10 or more over the next century. If we want to stabilize the climate, we need to reduce total emissions from today’s level by a factor of 10. Put another way, if we want to destroy neither our environment nor our economy, we need to reduce the emissions per energy service provided by a factor of 100. This requires something of an energy miracle.

The essay continues.

Near the end, he writes “despite all these reasons for despair, I’m hopeful”. He is hopeful that a collective change of heart is underway that will enable humanity to solve this problem. But he doesn’t claim to know any workable solution. In fact, he mostly lists reasons why various possible solutions won’t be enough.