Solar Irradiance Measurements

14 January, 2017

guest post by Nadja Kutz

This blog post is based on a thread in the Azimuth Forum.

Current theories about the Sun’s lifetime indicate that the Sun will turn into a red giant in about 5 billion years. How and when this process will become destructive to the Earth is still debated. Apparently, according to more or less current theories, there has been a quasilinear increase in luminosity. On page 3 of

• K.-P. Schröder and Robert Connon Smith, Distant future of the Sun and Earth revisited, 2008.

we read:

The present Sun is increasing its average luminosity at a rate of 1% in every 110 million years, or 10% over the next billion years.

Unfortunately I feel a bit doubtful about this, in particular after looking at some irradiance measurements. But let’s recap a bit.

In the Azimuth Forum I asked for information about solar irradiance measurements. Why I was originally interested in how bright the Sun is shining is a longer story, which includes discussions about the global warming potential of methane. For this post I prefer to omit this lengthy historical survey of my original motivations (maybe I’ll come back to it later). Meanwhile there is also a newer reason why I am interested in solar irradiance measurements, which I want to talk about here.

Strictly speaking I was not only interested in knowing more about how bright the sun is shining, but how bright each of its ‘components’ is shining. That is, I wanted to see spectrally resolved solar irradiance measurements—and in particular, measurements in the range between the wavelengths of roughly 650 and 950 nanometers.

This led me to the Sorce mission, a NASA-sponsored satellite mission whose website is located at the University of Colorado. The website very nicely provides an interactive interface, including a fairly clear and intuitive app called LISIRD, with which the spectral measurements of the Sun can be studied.

As a side remark I should mention that this mission belongs to NASA’s Earth Science program, which is currently threatened with being scrapped.

By using this app, I found a very strange rise in radiation in the 650–950 nanometer range between 2003 and 2016, which happened mainly in the last 2–3 years. You can see this rise here (click to enlarge):

Spectral line 774.5 nm from day 132 to day 5073; day 132 starting 24 January 2003, day 5073 is the end of 2016

Now, fluctuations in certain ranges of the Sun’s spectrum are not news. Here, however, it looked as if a rather stable range had suddenly started to change rather “dramatically”.

I put the word “dramatically” in quotes for a couple of reasons.

Spectral measurements are complicated and prone to measurement errors. Subtle issues like dirty lenses are already enough to suggest that this is no easy feat, so this strange rise might easily be due to a measurement failure. Moreover, as I said, it looked as if this had been a fairly stable range over the course of ten years. But maybe this new rise in irradiance is part of the 11-year solar cycle, i.e., a common phenomenon. In addition, although the rise looks big, it may overall still be rather subtle.

So: how subtle or non-subtle is it then?

In order to assess that, I made a quick estimate (see the Forum discussion) and found that if all the additional radiation reached the ground (which of course it doesn’t, due to absorption), then on 1000 square meters you could easily power a lawn mower with that subtle change! I.e., my estimate was 1200 watts for that patch of lawn. Whoa!

That was disconcerting enough that I downloaded the data, linearly interpolated it, and calculated the power of that change. I wrote a program in JavaScript to do that. The computer calculations gave an answer of 1000 watts, i.e., my estimate was fairly close. Whoa again!
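The core of that calculation is just a numerical integration of the spectral irradiance over wavelength. Here is a minimal sketch of the idea (not my actual program), assuming the two spectra have been exported from LISIRD as parallel arrays of wavelengths in nanometers and spectral irradiance in W/m2/nm on a common grid; all the names are just for illustration:

// Sketch: integrate spectral irradiance over wavelength with the trapezoidal rule.
// wavelengths: array of wavelengths in nm (increasing);
// irradiance: array of spectral irradiance values in W/m2/nm on the same grid.
function integrate(wavelengths, irradiance) {
  let total = 0;
  for (let i = 1; i < wavelengths.length; i++) {
    const dLambda = wavelengths[i] - wavelengths[i - 1];
    total += 0.5 * (irradiance[i] + irradiance[i - 1]) * dLambda;
  }
  return total; // integrated irradiance in W/m2 over the band
}

// Extra power falling on a patch of ground of the given area, in watts,
// ignoring atmospheric absorption (as in the rough estimate above).
function extraPowerOnPatch(wavelengths, spectrumEarly, spectrumLate, areaInSquareMeters) {
  return (integrate(wavelengths, spectrumLate) - integrate(wavelengths, spectrumEarly)) * areaInSquareMeters;
}

Feeding in the 650–950 nm spectra for day 132 and day 5073 and an area of 1000 square meters is the kind of computation that gave the roughly 1000 watts mentioned above.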

How does this translate into overall changes in solar irradiance? Some increase had already been noticed. In 2003, NASA wrote on its webpage:

Although the inferred increase of solar irradiance in 24 years, about 0.1 percent, is not enough to cause notable climate change, the trend would be important if maintained for a century or more.

That was 13 years ago.

I now used my program to calculate the irradiance for one day in 2016 between the wavelengths of 180.5 nm and 1797.62 nm, a rather big part of the solar spectrum, and got the value 627 W/m2. I then computed the difference between this and one day in 2003, approximately one solar cycle earlier. I got 0.61 W/m2, which is 0.1% in 13 years rather than in 24 years. Of course this is not an average value, it is not really well adjusted to the solar cycle, and fluctuations play a big role in some parts of the spectrum, but well—this might indicate that the overall rate of rise in solar radiation may have doubled. A similar caveat applies to the question of the Sun’s luminosity: to assess luminosity one would need to take the concrete satellite–Earth orbit on the day of measurement into account, as the distance to the Sun varies. But still, at first glance this all appears disconcerting.
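In code, this broadband comparison reuses the same integration as above; a sketch (again with made-up variable names, assuming spectra on a common wavelength grid between 180.5 nm and 1797.62 nm):

// Sketch: compare the integrated irradiance of two days one solar cycle apart,
// using the integrate() function from the earlier sketch.
const total2016 = integrate(wavelengths, spectrum2016); // about 627 W/m2 in this band
const total2003 = integrate(wavelengths, spectrum2003);
const difference = total2016 - total2003;               // about 0.61 W/m2
const percentChange = 100 * difference / total2003;     // about 0.1%
console.log(difference.toFixed(2) + ' W/m2, ' + percentChange.toFixed(2) + '%');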

Given that this spectral range overlaps, for example, with absorption bands of water (clouds!), this should at least be discussed.

See how the spectrum splits into a purple and dark red line in the lower circle? (Click to enlarge.)

Difference in spectrum between day 132 and day 5073

The upper circle displays another rise, which is discussed in the forum.

So, concluding: all this looks as if it needs to be monitored a bit more closely. It is important to see whether these rises in irradiance also show up in other measurements, so I asked in the Azimuth Forum, but so far I have gotten no answer.

The Russian Wikipedia page about solar irradiance unfortunately contains no links to Russian satellite missions (unless I have overlooked something), and there is no Chinese or Indian Wikipedia article about solar irradiance. I also couldn’t find any publicly accessible spectral irradiance measurements on the ESA website (although they have some satellites out there). In December I wrote an email to Wolfgang Finsterle, the head of the solar radiometry section of the World Radiation Center (WRC), but I’ve had no answer yet.

In short: if you know about publicly available solar spectral irradiance measurements other than the LISIRD ones, then please let me know.


Give the Earth a Present: Help Us Save Climate Data

28 December, 2016

Getz Ice Shelf

We’ve been busy backing up climate data before Trump becomes President. Now you can help too, with some money to pay for servers and storage space. Please give what you can at our Kickstarter campaign here:

Azimuth Climate Data Backup Project.

If we get $5000 by the end of January, we can save this data until we convince bigger organizations to take over. If we don’t get that much, we get nothing. That’s how Kickstarter works. Also, if you donate now, you won’t be billed until January 31st.

So, please help! It’s urgent.

I will make public how we spend this money. And if we get more than $5000, I’ll make sure it’s put to good use. There’s a lot of work we could do to make sure the data is authenticated, made easily accessible, and so on.

The idea

The safety of US government climate data is at risk. Trump plans to have climate change deniers running every agency concerned with climate change. So, scientists are rushing to back up the many climate databases held by US government agencies before he takes office.

We hope he won’t be rash enough to delete these precious records. But: better safe than sorry!

The Azimuth Climate Data Backup Project is part of this effort. So far our volunteers have backed up nearly 1 terabyte of climate data from NASA and other agencies. We’ll do a lot more! We just need some funds to pay for storage space and a server until larger institutions take over this task.

The team

• Jan Galkowski is a statistician with a strong interest in climate science. He works at Akamai Technologies, a company responsible for serving at least 15% of all web traffic. He began downloading climate data on the 11th of December.

• Shortly thereafter John Baez, a mathematician and science blogger at U. C. Riverside, joined in to publicize the project. He’d already founded an organization called the Azimuth Project, which helps scientists and engineers cooperate on environmental issues.

• When Jan started running out of storage space, Scott Maxwell jumped in. He used to work for NASA—driving a Mars rover among other things—and now he works for Google. He set up a 10-terabyte account on Google Drive and started backing up data himself.

• A couple of days later Sakari Maaranen joined the team. He’s a systems architect at Ubisecure, a Finnish firm, with access to a high-bandwidth connection. He set up a server, he’s downloading lots of data, he showed us how to authenticate it with SHA-256 hashes, and he’s managing many other technical aspects of this project.

There are other people involved too. You can watch the nitty-gritty details of our progress here:

Azimuth Backup Project – Issue Tracker.

and you can learn more here:

Azimuth Climate Data Backup Project.


Saving Climate Data (Part 3)

23 December, 2016

You can back up climate data, but how can anyone be sure your backups are accurate? Let’s suppose the databases you’ve backed up have been deleted, so that there’s no way to directly compare your backup with the original. And to make things really tough, let’s suppose that faked databases are being promoted as competitors with the real ones! What can you do?

One idea is ‘safety in numbers’. If a bunch of backups all match, and they were made independently, it’s less likely that they all suffer from the same errors.

Another is ‘safety in reputation’. If a bunch of backups of climate data are held by academic institutes of climate science, and another bunch are held by climate change denying organizations (conveniently listed here), you probably know which you trust more. (And this is true even if you’re a climate change denier, though your answer may be different than mine.)

But a third idea is to use a cryptographic hash function. In very simplified terms, this is a method of taking a database and computing a fairly short string from it, called a ‘digest’.

Diagram of a cryptographic hash function

A good hash function makes it hard to change the database and get a new one with the same digest. So, if the people owning a database compute and publish its digest, anyone can check that a backup is correct by computing the backup’s digest and comparing it to the published one.

It’s not foolproof, but it works well enough to be helpful.

Of course, it only works if we have some trustworthy record of the original digest. But the digest is much smaller than the original database: for example, in the popular method called SHA-256, the digest is 256 bits long. So it’s much easier to make copies of the digest than to back up the original database. These copies should be stored in trustworthy ways—for example, the Internet Archive.
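For concreteness, here is one way a digest can be computed, sketched in Node.js with its built-in crypto module; the filename is just a placeholder:

// Sketch: compute the SHA-256 digest of a downloaded data file with Node.js.
const crypto = require('crypto');
const fs = require('fs');

function sha256OfFile(path) {
  return new Promise((resolve, reject) => {
    const hash = crypto.createHash('sha256');
    fs.createReadStream(path)
      .on('data', chunk => hash.update(chunk))
      .on('end', () => resolve(hash.digest('hex')))
      .on('error', reject);
  });
}

// Compare the printed digest with the one published by the data's owners.
sha256OfFile('backup/some-climate-data.nc').then(digest => console.log(digest));

On the command line, the sha256sum tool computes the same kind of digest, as in the email below.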

When Sakari Maaranen made a backup of the University of Idaho Gridded Surface Meteorological Data, he asked the custodians of that data to publish a digest, or ‘hash file’. One of them responded:

Sakari and others,

I have made the checksums for the UofI METDATA/gridMET files (1979-2015) as both md5sums and sha256sums.

You can find these hash files here:

https://www.northwestknowledge.net/metdata/data/hash.md5

https://www.northwestknowledge.net/metdata/data/hash.sha256

After you download the files, you can check the sums with:

md5sum -c hash.md5

sha256sum -c hash.sha256

Please let me know if something is not ideal and we’ll fix it!

Thanks for suggesting we do this!

Sakari replied:

Thank you so much! This means everything to public mirroring efforts. If you’d like to help further promoting this Best Practice, consider getting it recognized as a standard when you do online publishing of key public information.

1. Publishing those hashes is already a major improvement on its own.

2. Publishing them on a secure website offers people further guarantees that there has not been any man-in-the-middle.

3. Digitally signing the checksum files offers the best easily achievable guarantees of data integrity by the person(s) who sign the checksum files.

Please consider having these three steps included in your science organisation’s online publishing training and standard Best Practices.

Feel free to forward this message to whom it may concern. Feel free to rephrase as necessary.

As a separate item, public mirroring instructions for how to best download your data and/or public websites would further guarantee permanence of all your uniquely valuable science data and public contributions.

Right now we should get this message viral through the government funded science publishing people. Please approach the key people directly – avoiding the delay of using official channels. We need to have all the uniquely valuable public data mirrored before possible changes in funding.

Again, thank you for your quick response!

There are probably lots of things to be careful about. Here’s one. Maybe you can think of more, and ways to deal with them.

What if the data keeps changing with time? This is especially true of climate records, where new temperatures and so on are added to a database every day, or month, or year. Then I think we need to ‘time-stamp’ everything. The owners of the original database need to keep a list of digests, with the time each one was made. And when you make a copy, you need to record the time it was made.
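For example, the owners could publish a small time-stamped record alongside each digest, and anyone making a copy could add the time the copy was made. A sketch of what such a record might look like (the field names are made up, not an existing standard):

// Sketch: a time-stamped digest record for one release of a dataset.
const record = {
  dataset: 'UofI METDATA/gridMET',                    // name taken from the email above
  sha256: '<digest of the data file or hash file>',   // placeholder
  publishedAt: new Date().toISOString(),              // when the owners computed the digest
  copiedAt: null                                      // filled in by whoever makes a backup
};
console.log(JSON.stringify(record, null, 2));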


Azimuth Backup Project (Part 2)

20 December, 2016



I want to list some databases that are particularly worth backing up. But to do this, we need to know what’s already been backed up. That’s what this post is about.

Azimuth backups

Here is information as of now (21:45 GMT 20 December 2016). I won’t update this information. For up-to-date information see

Azimuth Backup Project: Issue Tracker.

For up-to-date information on the progress of each of the individual databases listed below, click on my summary of what’s happening now.

Here are the databases that we’ve backed up:

• NASA GISTEMP website at http://data.giss.nasa.gov/gistemp/ — downloaded by Jan and uploaded to Sakari’s datarefuge server.

• NOAA Carbon Dioxide Information Analysis Center (CDIAC) data at ftp.ncdc.noaa.gov/pub/data/paleo/cdiac.ornl.gov-pub — downloaded by Jan and uploaded to Sakari’s datarefuge server.

• NOAA Carbon Tracker website at http://www.esrl.noaa.gov/psd/data/gridded/data.carbontracker.html — downloaded by Jan, uploaded to Sakari’s datarefuge server.

These are still in progress, but I think we have our hands on the data:

• NOAA Precipitation Frequency Data at http://hdsc.nws.noaa.gov/hdsc/pfds/ and ftp://hdsc.nws.noaa.gov/pub — downloaded by Borislav, not yet uploaded to Sakari’s datarefuge server.

• NOAA Carbon Dioxide Information Analysis Center (CDIAC) website at http://cdiac.ornl.gov — downloaded by Jan, uploaded to Sakari’s datarefuge server, but there’s evidence that the process was incomplete.

• NOAA website at https://www.ncdc.noaa.gov — downloaded by Jan, who is now attempting to upload it to Sakari’s datarefuge server.

• NOAA National Centers for Environmental Information (NCEI) website at https://www.ncdc.noaa.gov — downloaded by Jan, who is now attempting to upload it to Sakari’s datarefuge server, but there are problems.

• Ocean and Atmospheric Research data at ftp.oar.noaa.gov — downloaded by Jan, now attempting to upload it to Sakari’s datarefuge server.

• NOAA NCEP/NCAR Reanalysis ftp site at ftp.cdc.noaa.gov/Datasets/ncep.reanalysis/ — downloaded by Jan, now attempting to upload it to Sakari’s datarefuge server.

I think we’re getting these now, more or less:

• NOAA National Centers for Environmental Information (NCEI) ftp site at ftp://eclipse.ncdc.noaa.gov/pub/ — in the process of being downloaded by Jan, “Very large. May be challenging to manage with my facilities”.

• NASA Planetary Data System (PDS) data at https://pds.nasa.gov — in the process of being downloaded by Sakari.

• NOAA tides and currents products website at https://tidesandcurrents.noaa.gov/products.html, which includes the sea level trends data at https://tidesandcurrents.noaa.gov/sltrends/sltrends.html — Jan is downloading this.

• NOAA National Centers for Environmental Information (NCEI) satellite datasets website at https://www.ncdc.noaa.gov/data-access/satellite-data/satellite-data-access-datasets — Jan is downloading this.

• NASA JASON3 sea level data at http://sealevel.jpl.nasa.gov/missions/jason3/ — Jan is downloading this.

• U.S. Forest Service Climate Change Atlas website at http://www.fs.fed.us/nrs/atlas/ — Jan is downloading this.

• NOAA Global Monitoring Division website at http://www.esrl.noaa.gov/gmd/dv/ftpdata.html — Jan is downloading this.

• NOAA Global Monitoring Division ftp data at aftp.cmdl.noaa.gov/ — Jan is downloading this.

• NOAA National Data Buoy Center website at http://www.ndbc.noaa.gov/ — Jan is downloading this.

• NASA-ESDIS Oak Ridge National Laboratory Distributed Active Archive (DAAC) on Biogeochemical Dynamics at https://daac.ornl.gov/get_data.shtml — Jan is downloading this.

• NASA-ESDIS Oak Ridge National Laboratory Distributed Active Archive (DAAC) on Biogeochemical Dynamics website at https://daac.ornl.gov/ — Jan is downloading this.

Other backups

Other backups are listed at

The Climate Mirror Project, https://climate.daknob.net/.

This nicely provides the sizes of various backups, and other useful information. Some are ‘signed and verified’ with cryptographic keys, but I’m not sure exactly what that means, and the details matter.

About 90 databases are listed here, along with some size information and some information about whether people have already backed them up or are in process:

Gov. Climate Datasets (Archive). (Click on the tiny word “Datasets” at the bottom of the page!)




Azimuth Backup Project (Part 1)

16 December, 2016



This blog page is to help organize the Azimuth Environmental Data Backup Project, or Azimuth Backup Project for short. This is part of the larger but decentralized, frantic and somewhat disorganized project discussed elsewhere:

Saving Climate Data (Part 2), Azimuth, 15 December 2016.

Here I’ll just say what we’re doing at Azimuth.

Jan Galkowski is a statistician and engineer at Akamai Technologies, a company in Cambridge, Massachusetts, whose content delivery network is one of the world’s largest distributed computing platforms, responsible for serving at least 15% of all web traffic. He has begun copying some of the publicly accessible US government climate databases. On 11 December he wrote:

John, so I have just started trying to mirror all of CDIAC [the Carbon Dioxide Information Analysis Center]. We’ll see. I’ll put it in a tarball, and then throw it up on Google. It should keep everything intact. Using WinHTTrack. I have coordinated with Eric Holthaus via Twitter, creating, per your suggestion, a new personal account which I am using exclusively to follow the principals.

Once CDIAC is done, and checked over, I’ll move on to other sites.

There are things beyond our control, such as paper records, or records which are online but are not within visibility of the public.

Oh, and I’ve formally requested time off from work for latter half of December so I can work this on vacation. (I have a number of other projects I want to work in parallel, anyway.)

By 14 December he was wanting some more storage space. He asked David Tanzer and me:

Do either of you have a large Google account, or the “unlimited storage” option at Amazon?

I’m using WebDrive, a commercial product. What I’m (now) doing is defining an FTP map at a .gov server, and then a map to my Amazon Cloud Drive. I’m using Windows 7, so these appear as standard drives (or mounts, in *nix terms). I navigate to an appropriate place on the Amazon Drive, and then I proceed to copy from .gov to Amazon.

There is no compression, and, in order to be sure I don’t abuse the .gov site, I’m deliberately passing this over a wireless network in my home, which limits the transfer rate. If necessary, and if the .gov site permits, I could hardwire the workstation to our FIOS router and get appreciably faster transfer. (I often do that for large work files.)

The nice thing is I get to work from home 3 days a week, so I can keep an eye on this. And I’m taking days off just to do this.

I’m thinking about how I might get a second workstation in the act.

The Web sites themselves I’m downloading, as mentioned, using HTTrack. I intended to tarball-up the site structure and then upload to Amazon. I’m still working on CDIAC at ORNL. For future sites, I’m going to try to get HTTrack to mirror directly to Amazon using one of the mounts.

I asked around for more storage space, and my request was kindly answered by Scott Maxwell. Scott lives in Pasadena, California, and he used to work for NASA: he even had a job driving a Mars rover! He is now a site reliability engineer at Google, and he works on Google Drive. Scott is setting up a 10-terabyte account on Google Drive, which Jan and others will be able to use.

Meanwhile, Jan noticed some interesting technical problems: for some reason WebDrive is barely using the capacity of his network connection, so things are moving much more slowly than they could in theory.

Most recently, Sakari Maaranen offered his assistance. Sakari is a systems architect at Ubisecure, a firm in Finland that specializes in identity management, advanced user authentication, authorization, single sign-on, and federation. He wrote:

I have several terabytes worth in Helsinki (can get more) and a gigabit connection. I registered my offer but they [the DataRefuge people] didn’t reply though. I’m glad if that means you have help already and don’t need a copy in Helsinki.

I replied saying that the absence of a reply probably means that they’re overwhelmed by offers of help and are struggling to figure out exactly what to do. Scott said:

Hey, Sakari! Thank you for the generous offer!

I’m setting these guys up with Google Drive storage, as at least a short-term solution.

IMHO, our first order of business is just to get a copy of the data into a location we control—one that can’t easily be taken away from us. That’s the rationale for Google Drive: it fits into Jan’s existing workflow, so it’s the lowest-friction path to getting a copy of the data that’s under our control.

How about if I propose this: we let Jan go ahead with the plan of backing up the data in Drive. Then I’ll evaluate moving it from there to whatever other location we come up with. (Or copying instead of moving! More copies is better. :-) How does that sound to you?

I admit I haven’t gotten as far as thinking about Web serving at all—and it’s not my area of expertise anyway. Maybe you’d be kind enough to elaborate on your thoughts there.

Sakari responded with some information about servers. In late January, U. C. Riverside may help me with this—until then they are busy trying to get more storage space, for wholly unrelated reasons. But right now it seems the main job is to identify important data and get it under our control.

There are a lot of ways you could help.

Computer skills. Personally I’m not much use with anything technical about computers, but the rest of the Azimuth Data Backup gang probably has technical questions that some of you out there could answer… so, I encourage discussion of those questions. (Clearly some discussions are best done privately, and at some point we may encounter unfriendly forces, but this is a good place for roaming experts to answer questions.)

Security. Having a backup of climate data is not very useful if there are also fake databases floating around and you can’t prove yours is authentic. How can we create a kind of digital certificate that our database matches what was on a specific website at a specific time? We should do this if someone here has the expertise. One possible approach is sketched below, after this list of ways to help.

Money. If we wind up wanting to set up a permanent database with a nice front end, accessible from the web, we may want money. We could do a Kickstarter campaign. People may be more interested in giving money now than later, unless the political situation immediately gets worse after January 20th.

Strategy. We should talk a bit about what to do next, though too much talk tends to prevent action. Eventually, if all goes well, our homegrown effort will be overshadowed by others, at least in sheer quantity. About 3 hours ago Eric Holthaus tweeted “we just got a donation of several petabytes”. If it becomes clear that others are putting together huge, secure databases with nice front ends, we can either quit or—better—cooperate with them, and specialize on something we’re good at and enjoy.
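On the security question above: one possible approach, sketched here with Node’s built-in crypto module and an Ed25519 key pair, is to sign the digest together with the URL and a time stamp, so that anyone holding the public key can later verify that this particular digest was vouched for at that time. This is only an illustration; the names and fields are made up, and a real scheme would also need a trustworthy way to distribute the public key.

// Sketch: sign a (URL, digest, timestamp) record so it can be verified later.
const crypto = require('crypto');

const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

const record = JSON.stringify({
  url: 'https://example.gov/some-dataset',    // placeholder URL
  sha256: '<digest of the downloaded data>',  // placeholder digest
  retrievedAt: new Date().toISOString()
});

const signature = crypto.sign(null, Buffer.from(record), privateKey);

// Anyone with the public key can check that the record was not altered:
const ok = crypto.verify(null, Buffer.from(record), publicKey, signature);
console.log(ok); // true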


Under2 Coalition

24 November, 2016

I’ve been thinking hard about climate change since at least 2010. That’s why I started this blog. But the last couple years I’ve focused on basic research in network theory as a preliminary step toward green mathematics. Basic research is what I’m best at, and there are plenty of people working on the more immediate, more urgent aspects of climate change.

Indeed, after the Paris Agreement, I started hoping that politicians were taking this issue seriously and that we’d ultimately deal with it—even though I knew this agreement was not itself enough to keep warming below 2° C:

There is a troubling paradox at the heart of climate policy. On the one hand, nobody can doubt the historic success of the Paris Agreement. On the other hand, everybody willing to look can see the impact of our changing climate. People already face rising seas, expanding desertification and coastal erosion. They take little comfort from agreements to adopt mitigation measures and finance adaptation in the future. They need action today.

That is why the Emissions Gap Report tracks our progress in restricting global warming to 1.5 – 2 degrees Celsius above pre-industrial levels by the end of this century. This year’s data shows that overall emissions are still rising, but more slowly, and in the case of carbon dioxide, hardly at all. The report foresees further reductions in the short term and increased ambition in the medium term. Make no mistake; the Paris Agreement will slow climate change. The recent Kigali Amendment to the Montreal Protocol will do the same.

But not enough: not nearly enough and not fast enough. This report estimates we are actually on track for global warming of up to 3.4 degrees Celsius. Current commitments will reduce emissions by no more than a third of the levels required by 2030 to avert disaster. The Kigali Amendment will take off 0.5 degrees Celsius, although not until well after 2030. Action on short-lived climate pollutants, such as black carbon, can take off a further 0.5 degrees Celsius. This means we need to find another one degree from somewhere to meet the stronger, and safer, target of 1.5 degrees Celsius warming.

So, we must take urgent action. If we don’t, we will mourn the loss of biodiversity and natural resources. We will regret the economic fallout. Most of all, we will grieve over the avoidable human tragedy; the growing numbers of climate refugees hit by hunger, poverty, illness and conflict will be a constant reminder of our failure to deliver.

That’s from an annual report put out by the United Nations Environment Programme, or UNEP:

• United Nations Environment Programme, The Emissions Gap Report 2016.

As this report makes clear, we can bridge the gap and keep global warming below 2° C, if we work very hard.

But my limited optimism was shaken by the US presidential election, and especially by the choice of Myron Ebell to head the ‘transition team’ for the Environmental Protection Agency. For the US government to dismantle the Clean Power Plan and abandon the Paris Agreement would seriously threaten the fight against climate change.

Luckily, people already recognize that even with the Paris Agreement, a lot of work must happen at the ‘subnational’ level. This work will go on even if the US federal government gives up. So I want to learn more about it, and get involved somehow.

This is where the Under2 Coalition comes in.

The Under2 Coalition

California, Connecticut, Minnesota, New Hampshire, New York, Oregon, Rhode Island, Vermont and Washington have signed onto a spinoff of the Paris Climate Agreement. It’s called the Under2 Memorandum of Understanding, or Under2 MOU for short.

“Under 2” stands for two goals:

• under 2 degrees Celsius of global warming, and
• under 2 tonnes of carbon dioxide emitted per person per year.

These states have agreed to cut greenhouse gas emissions to 80-95% below 1990 levels by 2050. They’ve also agreed to share technology and scientific research, expand use of zero-emission vehicles, etc., etc.

And it’s not just US states that are involved in this! A total of 165 jurisdictions in 33 countries on six continents have signed or endorsed the Under2 MOU. Together, they form the Under2 Coalition. They represent more than 1.08 billion people and $25.7 trillion in GDP, more than a third of the global economy:

Under2 Coalition.

I’ll list the members, starting with ones near the US. If you go to the link you can find out exactly what each of these ‘sub-national entities’ are promising to do. In a future post, I’ll say more about the details, since I want Riverside to join this coalition. Jim Stuttard has already started a page about a city in the UK which is not a member of the Under2 Coalition, but has done a lot of work to figure out how to cut carbon emissions:

• Azimuth Wiki, Birmingham Green Commission.

This sort of information will be useful for other cities.

UNITED STATES

Austin
California
Connecticut
Los Angeles
Massachusetts
Minnesota
New Hampshire
New York City
New York State
Oakland City
Oregon
Portland City
Rhode Island
Sacramento
San Francisco
Seattle
Vermont
Washington

CANADA

British Columbia
Northwest Territories
Ontario
Québec
Vancouver City

MEXICO

Baja California
Chiapas
Hidalgo
Jalisco
Mexico City
Mexico State
Michoacán
Quintana Roo
Tabasco
Yucatán

BRAZIL

Acre
Amazonas
Mato Grosso
Pernambuco
Rondônia
São Paulo City
São Paulo State
Tocantins

CHILE

Santiago City

COLOMBIA

Guainía
Guaviare

PERU

Loreto
San Martín
Ucayali

AUSTRIA

Lower Austria

FRANCE

Alsace
Aquitaine
Auvergne-Rhône-Alpes
Bas-Rhin
Midi-Pyrénées
Pays de la Loire

GERMANY

Baden-Württemberg
Bavaria
Hesse
North Rhine-Westphalia
Schleswig-Holstein
Thuringia

HUNGARY

Budapest

ITALY

Abruzzo
Basilicata
Emilia-Romagna
Lombardy
Piedmont
Sardinia
Veneto

THE NETHERLANDS

Drenthe
North Brabant
North Holland
South Holland

PORTUGAL

Azores
Madeira

SPAIN

Andalusia
Basque Country
Catalonia
Navarra

SWEDEN

Jämtland Härjedalen

SWITZERLAND

Basel-Landschaft
Basel-Stadt

UNITED KINGDOM

Bristol
Greater Manchester
Scotland
Wales

AUSTRALIA

Australian Capital Territory (ACT)
South Australia

CHINA

Alliance of Peaking Pioneer Cities (represents 23 cities)
Jiangsu Province
Sichuan
Zhenjiang City

INDIA

Telangana

INDONESIA

East Kalimantan
South Sumatra
West Kalimantan

JAPAN

Gifu

NEPAL

Kathmandu Valley

KENYA

Laikipia County

IVORY COAST

Assemblée des Régions de Côte d’Ivoire (represents 33 subnationals)

NIGERIA

Cross River State

MOZAMBIQUE

Nampula

SENEGAL

Guédiawaye


Bleaching of the Great Barrier Reef

22 April, 2016


The chatter of gossip distracts us from the really big story, the Anthropocene: the new geological era we are bringing about. Here’s something that should be dominating the headlines: Most of the Great Barrier Reef, the world’s largest coral reef system, now looks like a ghostly graveyard.

Most corals are colonies of tiny genetically identical animals called polyps. Over centuries, their skeletons build up reefs, which are havens for many kinds of sea life. Some polyps catch their own food using stingers. But most get their food by symbiosis! They cooperate with single-celled organisms called zooxanthellae. Zooxanthellae get energy from the sun’s light. They actually live inside the polyps, and provide them with food. Most of the color of a coral reef comes from these zooxanthellae.

When a polyp is stressed, the zooxanthellae living inside it may decide to leave. This can happen when the sea water gets too hot. Without its zooxanthellae, the polyp is transparent and the coral’s white skeleton is revealed—as you see here. We say the coral is bleached.

After they bleach, the polyps begin to starve. If conditions return to normal fast enough, the zooxanthellae may come back. If they don’t, the coral will die.

The Great Barrier Reef, off the northeast coast of Australia, contains over 2,900 reefs and 900 islands. It’s huge: 2,300 kilometers long, with an area of about 340,000 square kilometers. It can be seen from outer space!

With global warming, this reef has been starting to bleach. Parts of it bleached in 1998 and again in 2002. But this year, with a big El Niño pushing world temperatures to new record highs, is the worst.

Scientists have been flying over the Great Barrier Reef to study the damage, and divers have looked at some of the reefs in detail. Of the 522 reefs surveyed in the northern sector, over 80% are severely bleached and less than 1% are not bleached at all. The damage is less severe further south, where the water is cooler—but most of the reefs are in the north:



The top expert on coral reefs in Australia, Terry Hughes, wrote:

I showed the results of aerial surveys of bleaching on the Great Barrier Reef to my students. And then we wept.

Imagine devoting your life to studying and trying to protect coral reefs, and then seeing this.

Some of the bleached reefs may recover. But as oceans continue to warm, the prospects look bleak. The last big El Niño was in 1998. With a lot of hard followup work, scientists showed that in the end, 16% of the world’s corals died in that event.

This year is quite a bit hotter.

So, global warming is not a problem for the future: it’s a problem now. It’s not good enough to cut carbon emissions eventually. We’ve got to get serious now.

I need to recommit myself to this. For example, I need to stop flying around to conferences. I’ve cut back, but I need to do much better. Future generations, living in the damaged world we’re creating, will not have much sympathy for our excuses.