Carbon Dioxide Puzzles

I like it when people do interesting calculations and help me put their results on this blog. Renato Iturriaga has plotted a graph that raises some interesting questions about carbon dioxide in the Earth’s atmosphere. Maybe you can help us out!

The atmospheric CO2 concentration, as measured at Mauna Loa in Hawaii, looks like it’s rising quite smoothly apart from seasonal variations:



However, if you take the annual averages from here:

• NOAA Earth System Laboratory, Global Monitoring Division, Recent Mauna Loa CO2.

and plot how much the average rises each year, the graph is pretty bumpy. You’ll see what I mean in a minute.

In comparison, if you plot the carbon dioxide emissions produced by burning fossil fuels, you get a rather smooth curve, at least according to these numbers:

• U.S. Energy Information Administration, Total carbon dioxide emissions from the consumption of energy, 1980–2008.

Renato decided to plot both of these curves and their difference. Here’s his result:



The blue curve shows how much CO2 we put into the atmosphere each year by burning fossil fuels, measured in parts per million.

The red curve shows the observed increase in atmospheric CO2.

The green curve is the difference.

The puzzle is to explain this graph. Why is the red curve roughly 40% lower than the blue one? Why is the red curve so jagged?

Of course, a lot of research has already been done on these issues. There are a lot of subtleties! So if you like, think of our puzzle as an invitation to read the existing literature and tell us how well it does at explaining this graph. You might start here, and then read the references, and then keep digging.

But first, let me explain exactly how Renato Iturriaga created this graph! If he’s making a mistake, maybe you can catch it.

The red curve is straightforward: he took the annual mean growth rate of CO2 from the NOAA website I mentioned above, and graphed it. Let me do a spot check to see if he did it correctly. I see a big spike in the red curve around 1998: it looks like the CO2 went up around 2.75 ppm that year. But then the next year it seems to have gone up just about 1 ppm. On the website it says 2.97 ppm for 1998, and 0.91 for 1999. So that looks roughly right, though I’m not completely happy about 1998.

[Note added later: as you’ll see below, he actually got his data from here; this explains the small discrepancy.]

Renato got the blue curve by taking the US Energy Information Administration numbers and converting them from gigatons of CO2 to parts per million moles. He assumed that the atmosphere weighs 5 × 10¹⁵ tons and that CO2 gets well mixed with the whole atmosphere each year. Given this, we can simply say that one gigaton is 0.2 parts per million of the atmosphere’s mass.

But people usually measure CO2 in parts per million volume. Now, a mole is just a certain large number of molecules. Furthermore, the volume of a gas at fixed pressure is almost exactly proportional to the number of molecules, regardless of its composition. So parts per million volume is essentially the same as parts per million moles.

So we just need to do a little conversion. Remember:

• The molecular mass of N2 is 28, and about 79% of the atmosphere’s volume is nitrogen.

• The molecular mass of O2 is 32, and about 21% of the atmosphere’s volume is oxygen.

• By comparison, there’s very little of the other gases.

So, the average molecular mass of air is

28 × .79 + 32 × .21 = 28.84

On the other hand, the molecular mass of CO2 is 44. So one ppm mass of CO2 is less than one ppm volume: it’s just

28.84/44 = 0.655

parts per million volume. So, a gigaton of CO2 is about 0.2 ppm mass, but only about

0.2 × 0.655 = 0.13

parts per million volume (or moles).

So to get the blue curve, Renato took gigatons of CO2 and multiplied by 0.13 to get ppm volume. Let me do another spot check! The blue curve reaches about 4 ppm in 2008. Dividing 4 by 0.13 we get about 30, and that’s good, because energy consumption put about 30 gigatons of CO2 into the atmosphere in 2008.
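
If you want to run the whole conversion yourself, here is a minimal Python sketch of it, using the rough 5 × 10¹⁵-ton atmospheric mass assumed above:

# Renato's unit conversion: gigatons of CO2 emitted per year to parts
# per million of the atmosphere by volume. The atmospheric mass is the
# rough figure assumed above, so treat the output as approximate.
ATMOSPHERE_GIGATONS = 5.0e6            # 5 × 10^15 tons = 5 × 10^6 gigatons
M_AIR = 0.79 * 28 + 0.21 * 32          # mean molecular mass of air = 28.84
M_CO2 = 44.0                           # molecular mass of CO2

def gigatons_co2_to_ppmv(gigatons):
    ppm_mass = gigatons / ATMOSPHERE_GIGATONS * 1e6   # ppm by mass
    return ppm_mass * M_AIR / M_CO2                   # ppm by volume (moles)

print(gigatons_co2_to_ppmv(30.0))      # 2008 emissions: about 3.9 ppm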

And then, of course, the green curve is the blue one minus the red one:



Now, more about the puzzles.

One puzzle is why the red curve is so much lower than the blue one. The atmospheric CO2 concentration is only going up by about 60% of the CO2 emitted, on average — though the fluctuations are huge. So, you might ask, where’s the rest of the CO2 going?

Probably into the ocean, plants, and soil:



But at first glance, the fact that only 60% stays in the atmosphere seems to contradict this famous graph:



This shows it taking many years for a dose of CO2 added to the atmosphere to decrease to 60% of its original level!

Is the famous graph wrong? There are other possible explanations!

Here’s a non-explanation. Humans are putting CO2 into the atmosphere in other ways besides burning fossil fuels. For example, deforestation and other changes in land use put somewhere between 0.5 and 2.7 gigatons of carbon into the atmosphere each year. There’s a lot of uncertainty here. But this doesn’t help solve our puzzle: it means there’s more carbon to account for.

Here’s a possible explanation. Maybe my estimate of 5 × 10¹⁵ tons for the mass of the atmosphere is too high! That would change everything. I got my estimate off the internet somewhere — does anyone know a really accurate figure?

Renato came up with a more interesting possible explanation. It’s very important, and very well-known, that CO2 doesn’t leave the atmosphere in a simple exponential decay process. Imagine for simplicity that carbon stays in three boxes:

• Box A: the atmosphere.

• Box B: places that exchange carbon with the atmosphere quite rapidly.

• Box C: places that exchange carbon with the atmosphere and box B quite slowly.

As we pump CO2 into box A, a lot of it quickly flows into box B. It then slowly flows from boxes A and B into box C.

The quick flow from box A to box B accounts for the large amounts of ‘missing’ CO2 in Renato’s graph. But if we stop putting CO2 into box A, it will soon come into equilibrium with box B. At that point, we will not see the CO2 level continue to quickly drop. Instead, CO2 will continue to slowly flow from boxes A and B into box C. So, it can take many years for the atmospheric CO2 concentration to drop to 60% of its original level — as the famous graph suggests.

This makes sense to me. It shows that the red curve can be a lot lower than the blue one even if the famous graph is right.
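
In fact, it’s easy to simulate. Here is a minimal numerical sketch of such a three-box model in Python; the rate constants are invented purely for illustration, not fitted to any real carbon-cycle data:

# Toy three-box carbon model: A = atmosphere, B = fast-exchanging
# reservoirs, C = slow reservoirs. All rate constants (per year) are
# made up for illustration.
def simulate(years=300, emit_until=100, emission=1.0, dt=0.1,
             k_ab=0.5, k_ba=0.5, k_ac=0.01, k_bc=0.01):
    A = B = C = 0.0   # excess carbon above pre-industrial equilibrium
    out = []
    for i in range(int(years / dt)):
        t = i * dt
        e = emission if t < emit_until else 0.0
        dA = e - k_ab * A + k_ba * B - k_ac * A
        dB = k_ab * A - k_ba * B - k_bc * B
        dC = k_ac * A + k_bc * B
        A, B, C = A + dA * dt, B + dB * dt, C + dC * dt
        out.append((t, A, B, C))
    return out

# While we emit, much of the input flows quickly from A into B; once
# emissions stop, A and B equilibrate and then drain slowly into C, so
# the atmospheric excess lingers for a long time.
for t, A, B, C in simulate()[::250]:
    print(f"t = {t:5.1f} yr   A = {A:6.2f}   B = {B:6.2f}   C = {C:6.2f}")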

But I’m still puzzled by the dramatic fluctuations in the red curve! That’s the other puzzle.

49 Responses to Carbon Dioxide Puzzles

  1. Hudson Luce says:

    Since I’m in Kansas, where we get a fair amount of snow from time to time, some winters being snowier than others – so that the following spring there’s more recharge for the groundwater, which means more grass and crops grow than after drier winters – maybe there’s a correlation between snowfall and CO2 levels in the atmosphere. In addition, ice crystals (and snowflakes) trap air and keep it close to the ground – maybe CO2 gets trapped in greater quantities than N2 and O2 in the H-O-H lattice…

    • John Baez says:

      Since I’m in Kansas, where we get a fair amount of snow from time to time…

      You got a huge snowstorm there recently, right? So that would be on your mind. Year-to-year variations in weather worldwide might affect the CO2 concentration at Mauna Loa, for reasons related to plant growth but also for many other reasons. Just for fun I’d like to compare the CO2 concentrations to the El Niño Southern Oscillation, since that affects Pacific Ocean surface temperatures, and warmer water might release CO2. I have no idea how significant this effect could be, but it would be amusing to check.

      I have trouble believing that significant amounts of CO2 get trapped in falling snow: even if a bit gets trapped, the total volume of the world’s snow must be quite puny compared to that of the atmosphere.

      Methane clathrates, on the other hand, are a big deal!

  2. DavidTweed says:

    This doesn’t have a big enough effect to explain the graphs, but getting energy use data is quite non-trivial. It’s most probably being done primarily by tracking fuel being sold and assuming that’s being used. This has two big problems:

    1. There’s a risk of fuel being sold illicitly, or of records getting confused and counting repeated sales of the same load of fuel.

    2. Various nations’ strategic fossil fuel reserves may not be fully transparent.

    So the blue curve may not be as smooth as shown. But not by anything like the magnitude needed to reconcile the two curves.

  3. Thomas says:

    I took a quick look at chapter 10 (the perturbed carbon cycle) of David Archer’s book (which I think is a very useful reference for people with at least some training in the sciences). He claims that we put 7 Gtons C/yr in, of which 3 Gtons C/yr stays in the atmosphere, and 4 Gtons go into the land and the oceans (about 50/50). Uptake by the oceans involves multiple time scales (a short time scale for warm shallow water, a longer time scale for mixing with deep water, and an even longer time scale for conversion to CaCO3).

  4. Hudson Luce says:

    Tracking fuel sold and treating it as all being used is a pretty fair assumption, given that storage capacity is well known and fairly constant, and figures for fuel in transit (usually by ship for oil and train for coal) are easily obtained.

    Fuel use should increase as population increases, and that’s a fairly smooth curve (or has been).

    Strategic fuel “reserves” are usually a classified or top-secret number, the public numbers are made-up nonsense, and they don’t matter anyway, because the fuel has to be extracted first: mined and broken up into fine pieces, as with coal, or produced at the wellhead, as with crude oil, then dewatered (usually) and desulfured (sometimes). What’s left in the ground is for all intents and purposes unknown, but it doesn’t enter the marketplace until extracted.

    If you’re talking about things like the Strategic Ready Reserve or whatever that salt dome down in Louisiana is called, then that fuel isn’t on the market and probably won’t go on the market unless the armed forces allow it, which is unlikely. No oil and the military machine grinds to a halt, as Patton found in the winter of 1944/45.

    As for public storage of fuel, that is easily estimated by counting coal piles, which you can do by looking at Google Earth, looking for power plants (they’re pretty generic) and then estimating the size of the coal pile(s) which will obey certain physical constraints usually being conical with a given limit to height vs diameter; and by counting crude oil tankers and tank farms, also using Google Earth.

    As for using wood as a fuel, remember that burning green wood is difficult and that you’ve got to let it dry out for six months to a year. Otherwise, you can look to the rate of deforestation for this figure, because most people don’t store appreciably sized woodpiles and wood tends to rot away to compost.

    • DavidTweed says:

      I would have assumed an agency like the EIA would be looking at certain reported oil transactions, but to my understanding there’s some margin for unreported transactions.

      The US’s Strategic Petroleum Reserve, for example, has been used many times to smooth out local supply difficulties (so the oil that comes out eventually gets replaced). Even if the US has been fully transparent about its behaviour, it’s unclear whether reserves such as China’s strategic petroleum reserve have been used in similar ways.

      So the point I was making was that just pulling the headline figures on fossil fuel sales can oversmooth the true behaviour. However, I wouldn’t think the magnitudes of the fluctuations would be remotely large enough to match the reported carbon dioxide concentrations.

  5. I’d guess the fluctuations are due to the oceans, just like temperature fluctuations. (Might be interesting to compare the two.) Estimates of anthropogenic carbon uptake by oceans seem difficult and sketchy – until that recent study: Can Ocean Carbon Uptake Keep Pace with Industrial Emissions?

    The human perturbation to ocean carbon is notoriously difficult to measure, despite the ocean’s large role in buffering the build-up of atmospheric CO2. The difficulty arises from the inhomogeneity of ocean carbon and from the fact that anthropogenic carbon has increased ocean carbon by only 1-2%, even while it has increased atmospheric carbon by about 38%. The only global observational estimate previously made of anthropogenic carbon in the ocean was a snapshot in time for 1994 made by Sabine et al. (2002). In the new study, we used novel indirect techniques to tease out the signal over the entire industrial era.

    [Emph. mine]

    Graph:

  6. John Furey says:

    Looks like jitter noise to me, for all the familiar reasons. Some of it real, some of it due to NOAA’s clumping, resampling, and detrending: “monthly mean”, “centered on the middle of each month”, “after correction”. Some oversampled (but still processed, not raw) data are available
    ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_weekly_mlo.txt

    The single best ad hoc annualizing of trend would be to take annual differences between corresponding pairs of smoothed points near each of the zero crossings of the known annual cycle – looks like around July and January.

    A better analysis procedure would be to Butterworth filter out the annual contribution
    http://en.wikipedia.org/wiki/Butterworth_filter
    using an extremely narrow (hence the advantage of oversampling) annual bandpass. Then use the 5-point trapezoid filter (1 2 2 2 1) to eliminate most remaining jitter. Unfortunately I don’t have time today, maybe this weekend, but hopefully someone else can also do it.
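
    Something like this sketch with SciPy (synthetic data standing in for the gap-filled weekly file, loosely chosen filter parameters, and a band-stop doing the “filtering out” of the annual band):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 52.0                            # weekly samples per year
    t = np.arange(0.0, 30.0, 1.0 / fs)   # 30 years of synthetic data
    # Stand-in for the gap-filled NOAA weekly series: trend + annual
    # cycle + jitter
    co2 = 340 + 1.8 * t + 3.0 * np.sin(2 * np.pi * t) + 0.3 * np.random.randn(t.size)

    # Narrow 4th-order Butterworth band-stop centred on 1 cycle/year
    nyq = fs / 2.0
    sos = butter(4, [0.9 / nyq, 1.1 / nyq], btype='bandstop', output='sos')
    deseasonalized = sosfiltfilt(sos, co2)

    # 5-point trapezoid filter (1 2 2 2 1)/8 to damp remaining jitter
    kernel = np.array([1, 2, 2, 2, 1], float) / 8.0
    smoothed = np.convolve(deseasonalized, kernel, mode='same')

    annual_growth = np.diff(smoothed[::int(fs)])   # ppm per year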

  7. Nathan Urban says:

    There’s nothing wrong with your atmospheric mass calculation; see this comment for a reference with a more precise number.

    There isn’t really a contradiction between short-term interannual variability and the longer-term average response to an atmospheric carbon pulse. As you say, there are multiple time scales at work here.

    I’m sure you’ve seen the large seasonal fluctuations in the Keeling CO2 curve, due mostly to terrestrial vegetation in the Northern Hemisphere. Likewise, you can get some similarly significant variability year-to-year due to vegetation dynamics (and also ocean dynamics).

    The multidecadal response time in the “famous” graph (is it famous?) is due, for example, to the export of carbon from fast (“labile”) carbon pools to slower (“recalcitrant”) pools — i.e., litter carbon moving into the soil, more isolated from the atmosphere. This does not preclude seasonal and interannual variability, as (for example) gross photosynthesis grows and wanes. You just end up with a multidecadal decay curve with short-term fluctuations superimposed. Many of the vegetation fluctuations are climatic in origin (temperature and precipitation variability), or due to disturbance (fire, insect invasion, etc.). Of course disturbance itself is related to climate.

    One good starting point is Le Quéré et al. (2009), Trends in the sources and sinks of carbon dioxide. You can also see Le Quéré et al. (2003), Two decades of ocean CO2 sink and variability; Schimel et al. (2001), Recent patterns and mechanisms of carbon exchange by terrestrial ecosystems; Schaefer et al. (2002), Effect of climate on interannual variability of terrestrial CO2 fluxes; Zeng et al. (2005), Terrestrial mechanisms of interannual CO2 variability. These aren’t necessarily the best references, but the first I turned up.

    • John Baez says:

      Thanks, Nathan! What you say makes sense. I’ll look at these references. It was fun trying to figure things out without reading anything, but now that my curiosity is piqued, I’m eager to see what the literature says.

      I know it’s a bit risky to blog about climate science before I really understand it, but everyone’s doing it these days, and I thought I’d try to set a good example by:

      1) presenting data and calculations in a way that’s very easy to check and criticize,

      and:

      2) raising questions, rather than claiming to draw earth-shaking conclusions.

      I’m sure you’ve seen the large seasonal fluctuations in the Keeling CO2 curve, due mostly to terrestrial vegetation in the Northern Hemisphere.

      Yes, they’re visible in this blog entry. Anyone who doesn’t know what Nathan is talking about, just look at the red wiggles here, and the blowup in the lower right:

    • John Baez says:

      Renato wants to read the papers Nathan cited, but he doesn’t have access to the journals! Luckily, some are available for free if you just type the titles into Google and look around. For example:

      • Le Quéré et al., Trends in the sources and sinks of carbon dioxide, Nature Geoscience, 2009.

      Efforts to control climate change require the stabilization of atmospheric CO2 concentrations. This can only be achieved through a drastic reduction of global CO2 emissions. Yet fossil fuel emissions increased by 29% between 2000 and 2008, in conjunction with increased contributions from emerging economies, from the production and international trade of goods and services, and from the use of coal as a fuel source. In contrast, emissions from land-use changes were nearly constant. Between 1959 and 2008, 43% of each year’s CO2 emissions remained in the atmosphere on average; the rest was absorbed by carbon sinks on land and in the oceans. In the past 50 years, the fraction of CO2 emissions that remains in the atmosphere each year has likely increased, from about 40% to 45%, and models suggest that this trend was caused by a decrease in the uptake of CO2 by the carbon sinks in response to climate change and variability. Changes in the CO2 sinks are highly uncertain, but they could have a significant influence on future atmospheric CO2 levels. It is therefore crucial to reduce the uncertainties.

      There are some nice graphs in this paper, including figure a, which shows the rate of increase of CO2 concentration. This graph is jagged like Renato’s, but different, because it’s based on different data, also provided by NOAA:

      We used the global mean data after 1980 and the Mauna Loa data between 1959 and 1980.

      There’s a lot of good information here, but they note that it would be good to have more:

      Progress has been made in monitoring the trends in the carbon cycle and understanding their drivers. However, major gaps remain, particularly in our ability to link anthropogenic CO2 emissions to atmospheric CO2 concentration on a year-to-year basis; this creates a multi-year delay and adds uncertainty to our capacity to quantify the effectiveness of climate mitigation policies. To fill this gap, the residual CO2 flux from the sum of all known components of the global CO2 budget needs to be reduced, from its current range of ±2.1 Pg C yr−1, to below the uncertainty in global CO2 emissions, ±0.9 Pg C yr−1. If this can be achieved with improvements in models and observing systems, geophysical data could provide constraints on global CO2 emissions estimates.

    • John Baez says:

      Here’s another:

      • Le Quéré et al., Two decades of ocean CO2 sink and variability, Tellus 55B (2003), 649–656.

      The abstract:

      Atmospheric CO2 has increased at a nearly identical average rate of 3.3 and 3.2 Pg C yr−1 for the decades of the 1980s and the 1990s, in spite of a large increase in fossil fuel emissions from 5.4 to 6.3 Pg C yr−1. Thus, the sum of the ocean and land CO2 sinks was 1 Pg C yr−1 larger in the 1990s than in the 1980s. Here we quantify the ocean and land sinks for these two decades using recent atmospheric inversions and ocean models. The ocean and land sinks are estimated to be, respectively, 0.3 (0.1 to 0.6) and 0.7 (0.4 to 0.9) Pg C yr−1 larger in the 1990s than in the 1980s. When variability less than 5 yr is removed, all estimates show a global oceanic sink more or less steadily increasing with time, and a large anomaly in the land sink during 1990–1994. For year-to-year variability, all estimates show 1/3 to 1/2 less variability in the ocean than on land, but the amplitude and phase of the oceanic variability remain poorly determined. A mean oceanic sink of 1.9 Pg C yr−1 for the 1990s based on O2 observations corrected for ocean outgassing is supported by these estimates, but an uncertainty on the mean value of the order of ±0.7 Pg C yr−1 remains. The difference between the two decades appears to be more robust than the absolute value of either of the two decades.

      (It says O2 there and I don’t think that’s a typo, though I don’t quite understand it — look at the end of section 4.)

      There are also nice graphs here, but the story they tell seems to differ significantly from Le Quéré et al’s more recent paper.

    • John Baez says:

      And another… it seems pretty easy to find all these papers for free:

      • Schimel et al. Recent patterns and mechanisms of carbon exchange by terrestrial ecosystems, Nature 414 (8 November 2001), 169-172.

      Knowledge of carbon exchange between the atmosphere, land and the oceans is important, given that the terrestrial and marine environments are currently absorbing about half of the carbon dioxide that is emitted by fossil-fuel combustion. This carbon uptake is therefore limiting the extent of atmospheric and climatic change, but its long-term nature remains uncertain. Here we provide an overview of the current state of knowledge of global and regional patterns of carbon exchange by terrestrial ecosystems. Atmospheric carbon dioxide and oxygen data confirm that the terrestrial biosphere was largely neutral with respect to net carbon exchange during the 1980s, but became a net carbon sink in the 1990s. This recent sink can be largely attributed to northern extratropical areas, and is roughly split between North America and Eurasia. Tropical land areas, however, were approximately in balance with respect to carbon exchange, implying a carbon sink that offset emissions due to tropical deforestation. The evolution of the terrestrial carbon sink is largely the result of changes in land use over time, such as regrowth on abandoned agricultural land and fire prevention, in addition to responses to environmental changes, such as longer growing seasons, and fertilization by carbon dioxide and nitrogen. Nevertheless, there remain considerable uncertainties as to the magnitude of the sink in different regions and the contribution of different processes.

      Again, more good graphs and information!

  8. WebHubTel says:

    I believe that the starting point in this analysis is that the atmospheric CO2 level comes about from convolving the impulse response of the CO2 uptake with the forcing function of the fossil-fuel CO2 input, combined with seasonal variations.

    So if g(t) is the impulse response and f(t) is the forcing function, then the atmospheric content is the time convolution of f(t) with g(t).

    The reason that the seasonal variations are still observed is that the convolution is essentially a low-pass filter, and though it does filter the periodic signal, it doesn’t do it completely, so we end up seeing the residual noisy oscillations in the Mauna Loa data.

    There is also a time lag on the output of a convolution, and since g(t) has a significant fat-tail component, the convolved output keeps on accumulating, long after the forcing function is turned off.
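
    Schematically, in Python (with a toy fat-tailed impulse response whose time constants are invented for illustration):

    import numpy as np

    dt = 1.0                                   # years
    t = np.arange(0.0, 300.0, dt)
    # Forcing f(t): emissions ramp up for 150 years, then shut off
    f = np.where(t < 150, 0.02 * t, 0.0)
    # Fat-tailed impulse response g(t): a fast mode plus a very slow one
    g = 0.5 * np.exp(-t / 5.0) / 5.0 + 0.5 * np.exp(-t / 300.0) / 300.0
    # Atmospheric excess is the convolution of f with g; note how it
    # keeps accumulating long after the forcing is switched off
    excess = np.convolve(f, g)[: t.size] * dt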

    I tried experimenting with the fossil fuel forcing function with a fat-tail CO2 impulse response function in a couple of blog posts last year:
    http://mobjectivist.blogspot.com/2010/04/fat-tail-in-co2-persistence.html
    http://mobjectivist.blogspot.com/2010/05/how-shock-model-analysis-relates-to-co2.html

    I also combined these in a chapter of The Oil ConunDrum online book. I got interested in this topic because the oil production process can be described as a series of convolutions as well, and the CO2 residual is just another convolution stage in this process.

    As William Feller said, “It is difficult to exaggerate the importance of convolutions in many branches of mathematics,” in An Introduction to Probability Theory and Its Applications.

    BTW, I found out that climate scientists understand convolutions very well but the knowledge of this technique amongst oil depletion analysts is very small.

    • John Baez says:

      Thanks, WebHubTel! I hadn’t wanted to bring convolutions into my already long blog post, but thinking about convolutions is precisely what made me so puzzled by the jaggedness of the red curve here:

      I don’t think I can get that red curve by convolving the blue curve with any function g of the general sort shown here:

      That is, some function g with g(t) = 0 for t ≤ 0, monotone decreasing for t ≥ 0.

      WebHubTel wrote:

      The reason that the seasonal variations are still observed is that the convolution is essentially a low-pass filter, and though it does filter the periodic signal, it doesn’t do it completely, so we end up seeing the residual noisy oscillations in the Mauna Loa data.

      The variations in the red curve aren’t what I’d call ‘seasonal’: it’s wiggling around drastically at the 1-5 year scale. It would be nice to plot a graph of monthly averages, to see more detail.

      But I like this aspect of your idea: if the production of CO2 by natural (as opposed to human) agents were very noisy, a low-pass filter might leave us with a curve like the red one. And it’s always worth remembering that natural processes produce and consume a lot more atmospheric CO2 than the human processes produce. So there’s potentially a lot of natural noise, with the blue curve as a small but significant signal buried in this natural noise.

      • William T says:

        Here’s another suggestion to think about (although sorry I’m feeling too lazy tonight to do it myself…)

        The annual fluctuation that Nathan raised above is much larger than the annual mean increment. So the “rapid flux” into and out of the biosphere over the year is in effect the most rapid process. However, it’s likely that this has some variability from year to year, so that when you do the 12 month averaging you are going to end up with a signal containing effects from this variability. Another way of looking at it: if you Fourier transform the CO2 data after removing the exponential growth, you will get more than a simple 1-year peak – there will be a broader spread of energy. You might actually want to filter it out with a slightly better filter than a simple 12-month moving average in order to compare to the annual reported emissions – they are most likely estimates with some built-in smoothing from year to year anyway.

        I read somewhere recently that the latest drought in the Amazon released as much CO2 as all the cars in the world. So presumably that will end up giving an upwards glitch in the red graph.

        • WebHubTel says:

          William T. said:

          so that when you do the 12 month averaging you are going to end up with a signal containing effects from this variability.

          I think it has something to do with this. The averaging process is a low-pass filter and will suppress the noise, e.g. the classic Mauna Loa graph, which is a cumulative average. However, when we switch over to looking at year-over-year variations, as in the incremental graph shown by Renato, we are essentially dealing with a derivative, which is a high-pass filter. In that case, any noise is accentuated and it starts looking more jagged.

          OTOH, the energy production increments are likely based on data that is so filtered over time that the year-over-year increments turn out very smooth. Taking the derivative of this accentuates very little noise.
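
          A quick toy check of that point (illustrative numbers only):

          import numpy as np

          rng = np.random.default_rng(0)
          # CO2-like level: a 2.0 ppm/yr trend plus unit measurement noise
          level = 2.0 * np.arange(50.0) + rng.normal(0.0, 1.0, 50)
          increments = np.diff(level)   # year-over-year differences
          # Differencing doubles the noise variance: the increments have
          # std of about sqrt(2), large next to wobbles in the trend itself
          print(increments.mean(), increments.std())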

          So I think this may be partially an artifact of how the CO2 data is collected and possible aliasing leading to derivative spikes and noise accentuation. We would really need to look at the original data.

          This does require some thought as I now understand the concerns and see why John labelled it a “CO2 puzzle”.

      • Phil Henshaw says:

        Yes, it does have to do with the difference between natural source variation and equations, but you’ve missed the main reason for the difference.

        The main reason is that an environment houses numerous simultaneously emerging and evolving systems, so it is best thought of as a kind of big pot of “popcorn” going off. It’s not “random”; the large variations in locally developing events are just not connected at that scale. You don’t make progress with this subject unless you start asking questions about what animates these local developmental processes…

        phil

        • WebHubTel says:

          Phil, interesting that you bring up the popcorn analogy. Many people presume that popcorn going off is in some ways predictable. Yet, when food science researchers carefully measure the time it takes individually cooked kernels to pop, they find a spread in times that is not even normally distributed, with obviously fatter tails. That’s why you find lots of unpopped kernels: the variability is so large. I actually have a section on the phenomenon in The Oil ConunDrum. I looked into this because the popping of popcorn mimics both the temporal dynamics of searching for stuff like oil and of predicting the reliability of components. These are complements in the sense that success is the complement of failure. Finding a success (like an oil reservoir) and exposing a failure are stochastic in mathematically similar ways.

          I guess it further points to the great variability in natural processes.

        • John Furey says:

          It turns out that the popcorn hazard function
          http://en.wikipedia.org/wiki/Survival_analysis
          follows an extreme value (i.e. Gumbel) distribution. The reason is that the kernels that pop in a given interval (of increasing temperature) are simply the ones that were least likely to survive the interval. The details of the local processes (some slightly hotter, some slightly cracked) are irrelevant if reproducible.

        • John Baez says:

          Though I knew convolution was related to kernels, I never expected that popcorn has some relevance to climate change. Cool!

        • Phil Henshaw says:

          Well, actually, there’s a special kind of “kernel” that is especially useful for exposing the “popcorn” events hidden in the confusion of time series data. It doesn’t work automatically everywhere, but works astoundingly well some places. It involves the careful use of a smoothing kernel with a hole in the middle, which preferentially reduces fluctuation for higher derivative rates and so minimizes scalar distortion. It means you can find the true shape of the natural phenomenon, making the data more differentiable, and so expose its dynamics more graphically. It’s one of several mathematical tools I developed in the 80’s & 90’s for investigating locally emergent systems phenomena. http://www.synapse9.com/drwork.htm

          I’d be happy to discuss if anyone is interested. Use the side bar to navigate or scroll down the page to the table of contents. I haven’t touched the thing in 10 years, really, as not a soul ever understood what it was about, it seems.

  9. Speed says:

    A tiny detail …

    John Baez, you say:
    Box C: places that exchange carbon with the atmosphere and box A quite slowly.

    Box A is the atmosphere.

  10. Renato Iturriaga says:

    The explanation “I came up with” was really what I understood from the article:

    • Mason Inman, Carbon is forever, Nature Reports Climate Change, 20 November 2008.

    The complete CO2 system consists of a lot of cycles with different time scales, which are more or less decoupled; each subcycle should have an equilibrium point which shifts as you pump in CO2. So there is a subtle difference between how many years the CO2 we just threw away will stay in the air, and how long the excess CO2 will remain once we stop pumping!

    Just for the record, the data for the red curve comes from the December of each year, not from the average over the year.

  11. WebHubTel says:

    Renato wrote:

    Just for the record, the data for the red curve comes from the December of each year, not from the average over the year.

    Data for fossil fuel combustion is usually integrated over an entire year, so the noise excursions in CO2 levels might be reduced by around a factor of 3 if you followed the same procedure and integrated over a yearly cycle. I believe that noise reduction would occur if the noise were IID: the improved counting statistics would reduce it by the square root of 12.
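
    A quick simulated check of that square-root-of-12 claim (IID noise assumed, as stated):

    import numpy as np

    rng = np.random.default_rng(1)
    monthly = rng.normal(0.0, 1.0, (100000, 12))  # IID monthly noise, sigma = 1
    annual_means = monthly.mean(axis=1)
    # Averaging 12 IID samples shrinks the noise by sqrt(12), about 3.46
    print(annual_means.std() * np.sqrt(12))       # prints roughly 1.0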

    This will put the two sets of data on a more even footing at least.

    • John Baez says:

      Renato wrote:

      Just for the record, the data for the red curve comes from the December of each year, not from the average over the year.

      Because the details seem to matter now:

      If you got your data from here, your red curve does not exactly show the difference in CO2 concentration at Mauna Loa between one December and the previous one. It’s a difference of ‘corrected’ four-month averages:

      The annual mean rate of growth of CO2 in a given year is the difference in concentration between the end of December and the start of January of that year. If used as an average for the globe, it would represent the sum of all CO2 added to, and removed from, the atmosphere during the year by human activities and by natural processes. There is a small amount of month-to-month variability in the CO2 concentration that may be caused by anomalies of the winds or weather systems arriving at Mauna Loa. This variability would not be representative of the underlying trend for the northern hemisphere which Mauna Loa is intended to represent. Therefore, we finalize our estimate for the annual mean growth rate of the previous year in March, by using the average of the most recent November-February months, corrected for the average seasonal cycle, as the trend value for January 1. Our estimate for the annual mean growth rate (based on the Mauna Loa data) is obtained by subtracting the same four-month average centered on the previous January 1.

      I wish they explained the ‘correction’ method.

      And while we are thinking about small but perhaps important issues: you could make me happier if you’d carefully check the 1998 data on your red curve:

      Let me do a spot check to see if he did it correctly. I see a big spike in the red curve around 1998: it looks like the CO2 went up around 2.75 ppm that year. But then the next year it seems to have gone up just about 1 ppm. On the website it says 2.97 ppm for 1998, and 0.91 for 1999. So that looks roughly right, though I’m not completely happy about 1998.

      Is it just some inaccuracy in the graphing program, or something else?

      • Nathan Urban says:

        When you say “‘correction’ method”, do you refer to where they “corrected for the average seasonal cycle”?

        I don’t know quite what that means. But I did look up what they do to remove the seasonal cycle. Maybe their ‘correction’ has something to do with that. Thoning et al. (1989) describes the seasonal removal method (Sections 4.1-4.3). I don’t know if they’ve made any tweaks to the method since that paper was published.

        They linearly detrend the gap-filled daily data, then apply a zero-padded fast Fourier transform. To remove the seasonal cycle, they apply a low-pass filter which is a decaying exponential of the fourth power of frequency (Eq. 2).

        The filter has a “cutoff frequency” of 667 days (0.55 cycles/year), meaning the power is attenuated by half at a period of 667 days. 667 days was chosen so that the filter transfer function drops to almost zero right at a period of 1 year.

        (They also talk about a 50-day filter to remove subseasonal variability, and it’s not entirely clear whether they apply that first before the seasonal filter, or whether that’s for a separate analysis.)

        After filtering, they perform an inverse FFT back to the time domain, and add back the linear trend.
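
        In Python, the procedure as I read it looks roughly like this (the exact cutoff shape is guessed from the description, so this is a sketch, not NOAA’s actual pipeline):

        import numpy as np

        def remove_seasonal(y, dt_days=1.0, cutoff_days=667.0):
            n = y.size
            t = np.arange(n)
            # 1. Linear detrend
            slope, intercept = np.polyfit(t, y, 1)
            resid = y - (slope * t + intercept)
            # 2. Zero-padded FFT
            npad = 1 << (n - 1).bit_length()
            F = np.fft.rfft(resid, npad)
            freq = np.fft.rfftfreq(npad, d=dt_days)   # cycles per day
            # 3. Low-pass: decaying exponential in the 4th power of
            #    frequency, attenuating by half at the cutoff frequency
            fc = 1.0 / cutoff_days
            F *= np.exp(-np.log(2.0) * (freq / fc) ** 4)
            # 4. Inverse FFT and re-add the linear trend
            smooth = np.fft.irfft(F, npad)[:n]
            return smooth + slope * t + intercept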

      • John Baez says:

        Nathan wrote:

        When you say “‘correction’ method”, do you refer to where they “corrected for the average seasonal cycle”?

        Yes. Thanks for shedding some light on this! The passage I quoted is not very clearly written, but I’m betting that they’re talking about what you just explained.

  12. Hank Roberts says:

    There’s some speculation here (I mentioned it at his site; he pointed to this older thread)

    • John Baez says:

      Interesting! They took the CO2 concentration, subtracted off a seasonal cycle and a linear trend, and graphed what was left: the ‘anomaly’.

      Then they compared the growth rate of this anomaly to the El Niño–Southern Oscillation, and got some intriguing correlations:

      Let’s compare all this to the red curve on Renato’s graph:

      It’s a bit hard to see what’s going on, but the 1997 peak in CO2 production stands out.

      I guess my desire to compare Renato’s data to the El Niño–Southern Oscillation wasn’t completely silly!

      • WebHubTel says:

        John, I have to say that this is one of the most impressive bits of data forensics and scientific sleuthing I have seen in a while.

        The take away message has to be that the natural cycles and variations in CO2 can be momentarily large but as long as they don’t accumulate above the long-term average, they still pale in comparison to the relentless, almost monotonic, advance of man-made CO2 emissions.

      • Phil Henshaw says:

        Have you used an accumulative variance test to see if it’s a random walk? Or a variance suppression test to see if it’s an accumulative process?

      • John Baez says:

        Phil wrote:

        Have you used an accumulative variance test to see if it’s a random walk? Or a variance suppression test to see if it’s an accumulative process?

        I haven’t really done anything except explain what Renato Iturriaga did. The CO2 data is here — have at it!

        But I’m not sure what you mean by ‘it’. The anomaly, I guess. For that, I think the most exciting thing is its apparent correlation with the “Niño-3 SST index”, as shown above. This is the sea surface temperature in a patch of the ocean fairly close to Hawaii:

        Data for this region are available here — monthly since 1950 and weekly since 1990.

        • Phil Henshaw says:

          Oh, that’s easy: the “it” in this case is the physical process you are trying to describe, using the data recorded from it as a guide. The question is whether it is a statistical process, an overlay of many kinds of unrelated dynamic systems, a single large-scale system with small-scale fluctuations, etc. That would be important for knowing how to construct your mathematical description of it, wouldn’t it?

          Those two tests described on my drstats.htm page would help answer those questions, based on whether the visible trends have flowing change in their continuities, and so begin developing a case for the data being created by one or another kind of natural phenomenon.

          One can make a perfectly good statistical model of a dynamic system, but then it’s largely meaningless, isn’t it, due to the mismatch in kind?

      • Darin says:

        I wonder if algal blooms are causing the difference. They should coincide with any significant influx of iron from things like upwelling during El Niño, coastal runoff, dust from the Saharan and Mongolian deserts, and so on.

    • Thanks, Hank! I’d almost forgotten about tamino’s great blog (being too thrilled about this one). It’s actually no surprise he tackled a similar analysis. There’s a lot to learn from tamino’s statistical data analysis blog posts.

  13. Renato Iturriaga says:

    I took the data from here

    ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt

    The data for the red curve in 1998 is 2.76; this is the difference between December 1998 and December 1997. There is no inaccuracy in the graphing program, only that the data comes from December to December, not from the annual average. Since the blue line’s data reflects what happened in a given year, I thought it was reasonable to take the same time intervals for the observed increment.

  14. Renato Iturriaga says:

    I don’t know; I used Microsoft Excel, but I am no expert. I think this is the first graph I’ve made, and it took me some time to figure out how Excel works.

    • WebHubTel says:

      As far as free is concerned, OpenOffice works much like Excel. For graphing a list of numbers it’s pretty simple.

      • John Baez says:

        I use OpenOffice, avoiding software that I need to pay for, so I’ll give that a try. Now that I’m gradually ceasing to be a pure mathematician I need to learn how to draw pretty graphs — not just pretty commutative diagrams!

  15. Use Sage, then you can do both :-)

    But seriously there is a good example by Marshall Hampton that uses exactly the same series.

  16. John F says:

    For x-y graphs, Dplot plots well. The original version was free, but is now licensed etc. The Jr version might fit the bill.
    http://download.cnet.com/DPlot-Jr/3000-2056_4-10784396.html

  17. Concerning land-use flux (flows), Houghton has a series for 1850–2005 available at the CDIAC site, cdiac.ornl.gov. (They also have ocean data series.)

    Here is the URL for Houghton’s series:

    http://cdiac.esd.ornl.gov/trends/landuse/houghton/houghton.html
