The 1990 IPCC Climate Projections

Over on Google+, Daniel Lemire writes:

The IPCC predictions from 1990 went for a probable rise of the global mean temperature of 0.3 °C per decade and at least 0.2 °C per decade. See the IPCC assessment overview document, Section 1.0.3. (It is available online here.) We have had two decades. Fact: they predicted more warming than what actually materialized. Quite a bit more. See:

Instrumental temperature record, Wikipedia.

What’s the full story here? Here are some basic facts. The policymaker summary of the 1990 report estimates:

under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global-mean temperature during the next century of about 0.3 °C per decade (with an uncertainty range of 0.2 °C to 0.5 °C per decade); this is greater than that seen over the past 10,000 years. This will result in a likely increase in global-mean temperature of about 1°C above the present value by 2025 and 3°C before the end of the next century. The rise will not be steady because of the influence of other factors…

I believe we are going along with the ‘business-as-usual’ emissions of greenhouse gases. On the other hand, Wikipedia shows a figure from Global Warming Art:

It is based on NASA GISS data here. In 1990, the 5-year mean Global Land-Ocean Temperature Index was 0.27, meaning 0.27 °C above the mean temperature from 1961 to 1990. By 2006, the 5-year mean was 0.54 °C.

So, that’s a 0.27 °C increase in 16 years, or about 0.17 °C per decade. This is slightly less than the bottom-end estimate of 0.2 °C per decade, and about half the expected rise of 0.3 °C per decade.
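
For anyone who wants to check the arithmetic, here it is as a short Python snippet (the two anomaly values are just the 5-year means quoted above):

    # Decadal warming rate implied by the two GISS 5-year means quoted above.
    anomaly_1990 = 0.27   # °C above the baseline, 5-year mean centered on 1990
    anomaly_2006 = 0.54   # °C above the baseline, 5-year mean centered on 2006

    rate_per_decade = (anomaly_2006 - anomaly_1990) / (2006 - 1990) * 10
    print(f"Observed trend: {rate_per_decade:.2f} °C per decade")   # prints 0.17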

Is there any official story on why the 1990 IPCC report overestimated the temperature rise during this time? In 2007 the IPCC estimated a temperature rise of about 0.2 °C per decade for the next two decades for all the scenarios they considered. So, it seems somehow 0.3 °C went down to 0.2 °C. What was wrong with the original estimate?

37 Responses to The 1990 IPCC Climate Projections

  1. lharris says:

    It is my understanding that the 1990 IPCC report’s models did not include a dynamic ocean. The ocean is able to absorb a lot of the heat trapped by increasing greenhouse gases, slowing the rate of global warming.

    • WebHubTelescope says:

      That’s my take as well. James Hansen was looking at the transient effects of the ocean as early as 1981. This is my attempt at updating his chart with some numbers applied.

      Note the figure caption that describes the thermal diffusion coefficient used.

      In general terms this involves solving the heat equation at the diffusive earth/atmosphere boundary. The heat going into the ocean essentially follows a random walk (governed by eddy diffusion, conventional diffusion, etc.), so the transient solutions of the master equation have long, fat tails as they approach a steady state.

      I tried to express that behavior in the following chart:

      Notice that the curves asymptotically appear to follow the square-root growth law characteristic of Fick’s law. The atmosphere goes to a steady state much more quickly than the ocean, which keeps on absorbing the heat. That is the “heat in the pipeline” or “unrealized warming” argument used by Hansen and others to explain a slowing of the warming (the temperature), though not necessarily of the heat. Scientists measure the heat being absorbed by the ocean and it shows no signs of slowing down.

      I am mixing the concepts of heat and temperature a bit but the reality is that the oceans do have about 1000 times the heat capacity of the atmosphere, and the balance of the heat flows between the compartments needs to reflect this.

      My hope is that the huge thermal heat sink of the ocean defers the temperature increase long enough for the CO2 to be sequestered out of the environment. Both the heat flow and CO2 sequestering are governed by diffusion with the same fat-tail response, so it may be that the heat diffusion can outlast the time it takes for the CO2 to sequester. If it turns into CAGW, then we are out of luck, I would think.
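
      For concreteness, here is a minimal numerical sketch of that behavior (a toy model only: a semi-infinite ocean column with constant eddy diffusivity and a step increase in surface temperature, nothing tuned to the real ocean). The cumulative heat taken up by the column grows roughly like the square root of time:

      import numpy as np

      # Toy model: heat diffusing into a semi-infinite ocean column with constant
      # eddy diffusivity after a step increase in surface temperature.  The
      # cumulative heat uptake should grow roughly like sqrt(t) (Fick's law).
      kappa = 1.0e-4                    # effective eddy diffusivity, m^2/s (illustrative)
      depth, nz = 2000.0, 400           # column deep enough to look semi-infinite
      dz = depth / nz
      dt = 0.4 * dz**2 / kappa          # explicit scheme: stability needs kappa*dt/dz^2 <= 0.5
      T = np.zeros(nz)                  # temperature anomaly profile, °C
      T[0] = 1.0                        # surface held 1 °C warmer from t = 0 onward

      years = [1, 4, 16, 64]
      targets = [y * 3.15e7 for y in years]   # target times in seconds
      uptake, t = [], 0.0
      while len(uptake) < len(targets):
          T[1:-1] += (kappa * dt / dz**2) * (T[2:] - 2.0 * T[1:-1] + T[:-2])
          t += dt
          if t >= targets[len(uptake)]:
              uptake.append(T.sum() * dz)     # column heat content (per unit rho*c_p)

      for y, q in zip(years, uptake):
          print(f"t = {y:3d} yr   uptake = {q:6.1f}   uptake/sqrt(t) = {q / y**0.5:.1f}")
      # The last column is roughly constant: uptake ~ sqrt(t), the fat-tailed
      # transient approach to steady state described above.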

  2. lharris says:

    To be more precise, here is the statement from the 1995 assessment (page 23):

    For the mid-range IPCC emission scenario, IS92a, assuming the “best estimate” value of climate sensitivity and including the effects of future increases in aerosol, models project an increase in global mean surface air temperature relative to 1990 of about 2°C by 2100. This estimate is approximately one-third lower than the “best estimate” in 1990. This is due primarily to lower emission scenarios (particularly for CO2 and the CFCs), the inclusion of the cooling effect of sulphate aerosols, and improvements in the treatment of the carbon cycle.

  3. RW says:

    Actually, the business-as-usual scenarios in 1990 did not foresee the collapse of the Soviet Union, which led to a significant reduction in emissions. They also anticipated only partial implementation of the Montreal Protocol.

    Reading roughly from some graphs in the IPCC’s 1990 reports, it seems that the BAU projection anticipated about 410ppm of CO2 and 2250ppb of CH4 by now; the actual values are about 390ppm for CO2, and 1825ppb for CH4. The greenhouse gas forcings have thus been quite a bit lower than anticipated by the 1990 BAU scenario.
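
    To put rough numbers on that, here is a back-of-the-envelope sketch of my own, using the simplified forcing expressions of Myhre et al. (1998); the CH4 term ignores the N2O overlap correction, so it slightly overstates that part of the gap:

    import math

    # Forcing shortfall of the actual concentrations relative to the 1990 BAU
    # projection, using simplified Myhre et al. (1998) expressions.  The 1990
    # baseline concentrations cancel out of these differences.

    def delta_f_co2(c_proj, c_actual):
        """CO2 forcing gap in W/m^2 between projected and actual ppm."""
        return 5.35 * math.log(c_proj / c_actual)

    def delta_f_ch4(m_proj, m_actual):
        """Approximate CH4 forcing gap in W/m^2 (ppb), no N2O overlap term."""
        return 0.036 * (math.sqrt(m_proj) - math.sqrt(m_actual))

    print(f"CO2 forcing shortfall: {delta_f_co2(410.0, 390.0):.2f} W/m^2")    # ~0.27
    print(f"CH4 forcing shortfall: {delta_f_ch4(2250.0, 1825.0):.2f} W/m^2")  # ~0.17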

    The 1990 Scenario B seems to have been more similar to how the real world actually evolved, and in this scenario, warming was projected at a rate of about 0.2°C/decade.

  4. nameless37 says:

    There are no major disagreements between the 1990 report and the 2007 report.

    The 1990 report forecasts an average rate of increase of 0.3°C per decade between 1990 and 2100, for a total of about 3°C (with wide error bars), under the “business-as-usual” scenario.

    The 2007 report forecasts temperature changes from 1990 to 2095 of 1.8°C, 2.4°C, 2.4°C, 2.8°C, 3.4°C, and 4.0°C, depending on the path of economic development over the next century.

    The 2007 report also forecasts warming of 0.64°C to 0.69°C from 1990 through 2020, regardless of the economic scenario – slightly higher than “0.2°C/decade” from Wikipedia, and slightly lower than “0.3°C/decade” from the 1990 summary.

  5. nameless37 says:

    I just typed up a response, but I don’t see it any more. Did it disappear?

    Anyway, the gist of it was that the 1990 IPCC report was forecasting 0.3°C/decade warming, averaged over the entire 21st century, and slightly slower warming in the first 10-20 years. The 2007 report made a forecast of 0.64°C to 0.69°C warming from 1990 through 2020, and gave a range of values (from 1.8°C to 4.0°C) for the warming from 1990 through 2095, depending on the economic path taken by human civilization.

    I wouldn’t consider these forecasts substantially divergent, especially considering the wide error bars attached to all these numbers (for example, a key parameter called “climate sensitivity”, which quantifies the response of the climate to a given change in the concentration of CO2, is still quoted by the 2007 IPCC report as “1.5°C to 4.5°C” – a 3x range).

  6. Dr J R Stockton says:

    It is necessary to consider the stated (or otherwise) uncertainty in the original predictions when examining consistency with later work.

  7. Speed says:

    William Happer in the WSJ:

    What is happening to global temperatures in reality? The answer is: almost nothing for more than 10 years. Monthly values of the global temperature anomaly of the lower atmosphere, complied [sic] at the University of Alabama from NASA satellite data, can be found at the website http://www.drroyspencer.com/latest-global-temperatures/.

    […]

    The most important component of climate science is careful, long-term observations of climate-related phenomena, from space, from land, and in the oceans. If observations do not support code predictions—like more extreme weather, or rapidly rising global temperatures—Feynman has told us what conclusions to draw about the theory.

    […]

    “In general we look for a new law by the following process. First we guess it. Then we compute the consequences of the guess to see what would be implied if this law that we guessed is right. Then we compare the result of the computation to nature, with experiment or experience; compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong.”

  8. Nathan Urban says:

    My first guess is that Scenario A overestimated the radiative forcing, either by overestimating the rate of CO2 emissions growth, or (more likely) by neglecting the negative aerosol forcing.

    The reason I guess this is that the most recent projections average about 0.2 °C per decade using a climate sensitivity of ~3 °C, whereas the 1990 report used a climate sensitivity of 2.5 °C (see the box on page xxv here). To get faster warming with a lower climate sensitivity, you probably need a greater forcing. (But another possibility is a lower ocean heat uptake in the 1990 report.)
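
    To make that scaling explicit, here is a toy calculation (equilibrium response only, not the transient, so it is purely illustrative):

    # Equilibrium warming for a given forcing scales with climate sensitivity S as
    # dT = S * dF / F_2x, where F_2x ~ 3.7 W/m^2 is the forcing from doubled CO2.
    F_2X = 3.7  # W/m^2 per CO2 doubling

    def equilibrium_warming(forcing, sensitivity):
        return sensitivity * forcing / F_2X

    # The same 1 W/m^2 of forcing under the two sensitivities mentioned above:
    print(equilibrium_warming(1.0, 2.5))   # ~0.68 °C with the 1990 report's sensitivity
    print(equilibrium_warming(1.0, 3.0))   # ~0.81 °C with a sensitivity of 3 °C

    # So to match or exceed the warming obtained with S = 3 °C while assuming the
    # lower S = 2.5 °C, the 1990 projection would need a correspondingly larger forcing.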

    The BaU curve in Fig. 6.11b (here) shows upwelling-diffusive (energy balance) model projections. It projects slightly over 4 °C of warming above pre-industrial by 2100 using a climate sensitivity of 2.5 °C. That’s a little above the current projections for the SRES A2 high emissions scenario. Hard to say how well that supports my hypothesis.

    I believe that the second IPCC report (1995) revised their projections downward to 0.2 °C per decade. That report probably explains why they revised their projections, but my Internet is slow right now and I can’t download the 50 MB report.

    The third assessment report (2001) has a recap of some of what went into the projections in the earlier two reports (here).

    • Nathan Urban says:

      The 1995 report says (pp. 5-6):

      “For the mid-range IPCC emission scenario, IS92a, assuming the ‘best estimate’ value of climate sensitivity and including the effects of future increases in aerosol, models project an increase of global mean surface air temperature relative to 1990 of about 2 °C by 2100. [NU: Note this is again a century average] This estimate is approximately one third lower than the ‘best estimate’ in 1990. This is due primarily to lower emission scenarios (particularly for CO2 and the CFCs), the inclusion of the cooling effect of sulphate aerosols, and improvements in the treatment of the carbon cycle.”

      This lends support to my original hypothesis that the 1990 report projected more warming than subsequent reports due to its forcing assumptions.

      When talking about whether or not the models are “wrong”, it’s important to disentangle their assumptions about the physical climate from their assumptions about biogeochemistry (e.g., the carbon cycle) and, importantly, their assumptions about socioeconomics (i.e., emissions scenarios).

  9. Nathan Urban says:

    Forget what I said above: I’m not doing an apples-to-apples comparison. The recent (AR4, 2007) projections are for 0.2 °C per decade near term (next two decades), accelerating through the course of the century. The 1990 projections above are for the 21st century, and are higher than their near-term projections. I don’t know what their near-term projections are. This is one problem with Daniel Lemire’s comparison (treating a century-long average as if it’s representative of what was projected for the next two decades).

  10. Speed says:

    Freeman Dyson in The New York Review of Books wrote:

    The fringe of physics is not a sharp boundary with truth on one side and fantasy on the other. All of science is uncertain and subject to revision. The glory of science is to imagine more than we can prove. The fringe is the unexplored territory where truth and fantasy are not yet disentangled.

    IPCC predictions are at the fringe.

    • Please, Happer is enough from the geriatrics dept. I won’t do your homework and dig out comments on Dyson’s BS (=bad science). He doesn’t even care about what he’s musing about.

  11. Re the “it has not warmed” myth (e.g. William “CO2 is not a cause for alarm and will be good for mankind” Happer…)

    • Grant Foster, Stefan Rahmstorf: Global temperature evolution 1979–2010, Environ. Res. Lett. 6 (2011).

    All five series show consistent global warming trends ranging from 0.014 to 0.018 K/yr

    The warming rate is a bit higher for the northern hemisphere, cf. loc. link, fig. 2. Their fig. 5 is a must-eyeball graph of the temp series with natural variations filtered out:

  12. Speed says:

    Florifulgurator wrote, All five series show consistent global warming trends ranging from 0.014 to 0.018 K/yr

    Which is in rough agreement with the 0.17 °C per decade noted in the post and less than the 0.2 °C to 0.5 °C per decade quoted from the IPCC projections. You are in agreement that the IPCC projection was/is wrong.

  13. Nathan Urban says:

    Ok, I crudely digitized part of the “best” BaU curve from the figure at the top of page 74 of the AR1 report. The data are:

    1990.113 0.986
    1992.243 1.032
    1994.566 1.094
    1997.277 1.149
    1999.407 1.195
    2001.731 1.242
    2003.667 1.296
    2009.476 1.444
    2011.219 1.498
    2013.349 1.568

    A linear fit indicates that their projection for the last two decades was about 0.246 °C/decade. So, not quite 0.3 °C/decade as implied by their century average. But not as low as the current (2007) AR4 estimate of ~0.2 °C/decade.

    A similar exercise with their “low” curve gives:

    1988.564 0.683
    1991.275 0.698
    1993.598 0.737
    1996.309 0.768
    1999.794 0.823
    2002.699 0.885
    2005.603 0.931
    2008.508 0.993
    2010.832 1.040
    2013.349 1.087

    which corresponds to a lower bound of 0.169 °C/decade.
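
    For anyone who wants to check, here are the fits in Python, using the digitized points above:

    import numpy as np

    # Least-squares linear fits to the digitized "best" and "low" BaU curves,
    # converted from °C per year to °C per decade.
    best = np.array([
        [1990.113, 0.986], [1992.243, 1.032], [1994.566, 1.094], [1997.277, 1.149],
        [1999.407, 1.195], [2001.731, 1.242], [2003.667, 1.296], [2009.476, 1.444],
        [2011.219, 1.498], [2013.349, 1.568]])
    low = np.array([
        [1988.564, 0.683], [1991.275, 0.698], [1993.598, 0.737], [1996.309, 0.768],
        [1999.794, 0.823], [2002.699, 0.885], [2005.603, 0.931], [2008.508, 0.993],
        [2010.832, 1.040], [2013.349, 1.087]])

    for name, data in [("best", best), ("low", low)]:
        slope_per_year = np.polyfit(data[:, 0], data[:, 1], 1)[0]
        print(f"{name}: {10.0 * slope_per_year:.3f} °C/decade")
    # best: 0.246 °C/decade, low: 0.169 °C/decade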

  14. GFW says:

    To post a direct answer to the posed question, rather than debate talking points …

    The reason the original prediction was a little high is twofold:
    1. The degree to which aerosols (e.g. sulphates from burning coal) would temporarily counteract warming (by reflecting sunlight, aka global dimming) was slightly underestimated.
    2. The time-lag in warming due to ocean overturning (i.e. the rate at which surface warming can be buried in the depths) was also underestimated.

    Unfortunately, neither of these is of permanent help – the eventual temperature increase from an increase in greenhouse gases is not affected. Sulphates rain out if not constantly replenished. They cause acid rain, and the global dimming isn’t good for agriculture either … so sulphate emissions have to diminish eventually, and when they do, they’ll stop masking the greenhouse effect of the much longer-lived CO2.

    Hope this helps.

  15. Hank Roberts says:

    For decades, energy use projections were too high:

    In the most recent past, that’s changed: http://environmentalresearchweb.org/cws/article/news/30073

  16. Hank Roberts says:

    http://www.pnas.org/content/104/24/10288.long is the study described in the environmentalresearchweb news article.

    Global and regional drivers of accelerating CO2 emissions
    doi: 10.1073/pnas.0700609104
    PNAS June 12, 2007 vol. 104 no. 24 10288-10293

  17. Speed says:

    A picture is worth a thousand words, so I direct your attention to the latest graph of global temperature trends from John Christy and Roy Spencer at the University of Alabama. It is based on satellite data.

    It’s much less frightening than the chart above.

    • Nathan Urban says:

      The satellite data isn’t really any different from the GISTEMP data above, so I don’t know what point you’re trying to make here, other than “there’s less warming apparent if you cut out a lot of the warming from the historical record”. Note that both graphs show about 0.4 °C of warming from 1979 to present.

      • Speed says:

        The graph of satellite data shows that in the last 30+ years, there have been two temperature regimes – the period from 1979 to about 1997, and from 2000 to present. During the first, the temperature anomaly varied widely around (by eye) -0.1 deg. C and during the second around (by eye) 0.2 deg. C. During each regime the nature of global temperature changed little in terms of average and variation around that average. This is very different from the chart based on GISS temperatures that shows a steady rise in temperature over the same period as well as a 2004 temperature higher than the 1998 El Nino anomaly.

        It could be argued that the satellite data is higher quality and more reliable than the thermometric data since one uniformly covers most (all?) of the earth’s surface and the other samples sparsely in some places (oceans, Africa), less sparsely in others and contains discontinuities in sampling locations, record keeping and methods. If they tell different stories over the last three decades, one has to be open to the possibility that if satellites had been available from 1880 the long term story would be far different.

        • Nathan Urban says:

          The graph of satellite data shows that in the last 30+ years, there have been two temperature regimes – the period from 1979 to about 1997, and from 2000 to present.

          That’s your interpretation of the graph, not some objective fact.

          It is interesting to consider the effects of ENSO on the time series (e.g., here and here).

          This is very different from the chart based on GISS temperatures that shows a steady rise in temperature over the same period as well as a 2004 temperature higher than the 1998 El Nino anomaly.

          They’re actually not that different (e.g. here):


          UAH does have a bigger ENSO response.

          It could be argued that the satellite data is higher quality and more reliable than the thermometric data

          I think the argument is moot, since the satellite and surface data are basically the same as far as warming trends are concerned.

          By the way, here is a comparison of GISTEMP to a different satellite reconstruction (they have their own subjectivities, you know):

        • dave tweed says:

          Here’s a naive question: suppose one genuinely had a time series with an increasing trend whose magnitude is substantially less than the general stochastic changes in the series. If one computes an average over the entire time series (ignoring whether there’s some reason why Christy and Spencer have apparently chosen a slight sub-range of the data to compute the average over), wouldn’t you expect it to fall into two “regimes”: one in the first half-ish of the time series, where the data is mostly below its own average, and one in the second half-ish, where the data is mostly above its own average? (I say half-ish because, if the trend isn’t linear, and because of stochastic effects, one wouldn’t expect the division to occur at exactly the halfway point.) In other words, why are these “regimes” anything more than an artifact of the decision of what to plot?

        • Frederik De Roo says:

          David Tweed wrote:

          If one computes an average over the entire time series, wouldn’t you expect it to fall into two “regimes”?

          What do you mean by “it”, the plotted data?

        • Nathan Urban says:

          There are statistical methods for “changepoint” or “breakpoint” problems (usually assuming that the underlying trend is discontinuous piecewise constant or discontinuous piecewise linear, although some assume continuity). I haven’t been too impressed by their performance on real time series, mostly because changepoints are confounded with the presumed error model. (Is that a “real” jump, or is it an autocorrelated noise fluctuation?) For climate time series, it would probably also be necessary to use a less naive trend model. Finally, a “regime change” in climate science is something physical (a reorganization of atmosphere-ocean circulation), so you’d want to look at more than a global average surface temperature, rather than take a purely statistical time-series approach.

        • dave tweed says:

          In reply to Frederik, yes I was simply referring to the time series plotted relative to its own average.

          What I was making was a rather trite point: talk of two “regimes” sounds quite impressive, until you realise that when relating a time series to its own average, rather than to some independently determined value, any time series with a monotonic trend will look like it’s got two “regimes”. In this case the fact that one temporal range lies “below zero” and the other lies “above zero” means less than it would if the zero point had some independent physical meaning. (As a related point, in the UK we sometimes make fun of politicians who loudly proclaim that their plans will ensure that “in future our reforms will raise up every student so they’re all above average”.)

          As Nathan points out, there are sophisticated ways of attempting to discern change-points, but they generally involve making stronger assumptions and require more work.
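
          The point is easy to demonstrate with synthetic data. Here is a minimal sketch using a purely artificial linear trend plus noise (it has nothing to do with any real temperature series):

          import numpy as np

          rng = np.random.default_rng(0)

          # Artificial series: a weak linear trend buried in much larger noise.
          n = 400
          t = np.arange(n)
          series = 0.002 * t + rng.normal(0.0, 0.3, n)   # per-step trend << noise sigma

          anomaly = series - series.mean()   # relate the series to its own average

          first, second = anomaly[: n // 2], anomaly[n // 2:]
          print("fraction below 'zero' in first half :", np.mean(first < 0))
          print("fraction above 'zero' in second half:", np.mean(second > 0))
          # Both fractions come out well above one half: relative to its own average,
          # any trending series splits into an early "below zero" regime and a late
          # "above zero" regime, even though nothing discontinuous ever happened.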

      • John Baez says:

        Nathan had written:

        Hmm, I forgot you couldn’t do block quotes here.

        Hmm, I just had some trouble doing it myself. But lots of people have done it successfully for years here, and it’s working for me now, so I’m not sure what’s up.

        I tried to fix that comment of yours. Thanks for all your comments on this thread!

  18. John Baez says:

    Thanks for all the great comments and graphs, everyone. I’ve taken the liberty of making the graphs visible here. In a while I’ll try to post a comment summarizing what I’ve learned.

  19. Berényi Péter says:

    The main trouble is with the very concept of projection. It is a murky one, with 8 or 9 separate dictionary entries, of which only one (“a prediction or an estimate of something in the future, based on present data or trends” or “a prediction based on known evidence and observations”) more or less matches what we traditionally do in science.

    And these definitions do refer back to prediction, a much sharper concept. It is derived from the verb predict, which has both transitive and intransitive usage, the latter involving prophecy, which is clearly outside the domain of science.

    Therefore we are left with “to state, tell about, or make known in advance, especially on the basis of special knowledge”, which is close enough.

    But in this case there is no legitimate reason whatsoever to invent a term other than prediction, is there?

    We have specific protocols and procedures in place to compare scientific predictions against results of future observations and/or measurements, which either match the prediction or not, thus either strengthening or falsifying the piece of special knowledge the prediction was based on.

    On the other hand, with projections we lack all these tools, so we are left in utter darkness.

    • dave tweed says:

      As I understand it, and as supported by links like this blog post, these are terms with precise technical meanings, rather than terms taking their definitions from an English dictionary. Namely:

      a prediction is a model-based “forecast” of the future when you know all the relevant input data at the time the prediction is made.

      the need to use projections occurs when you believe you’ve got a model but some of the input data is only available after the time you’ve made the prediction. The current practice is to make a set of hypotheses about the future data (e.g., human carbon emissions in each future year) and then a prediction for each of these cases.

      • Berényi Péter says:

        So, you effectively say a projection (in this sense of the word) is nothing else but a conditional prediction, don’t you? If so, why don’t we simply use the term “prediction”, even more so because essentially all scientific predictions are conditional?

        Or do you mean a projection is a prediction over a set of several “scenarios”, but not over all reasonable scenarios, because we lack the knowledge to make such generalizations? In this case a projection, unlike a proper scientific prediction, is virtually unfalsifiable, except in the rare cases where one of the scenarios considered at the time of projection actually happens to unfold later on.

        • dave tweed says:

          There are two issues: what the terms denote, and how you assess the accuracy/utility of a set of projections. I’m not remotely expert enough to talk about how the assessment should be done for climate projections (or whether how it’s currently being done is adequate); I was just pointing out that these terms, when used by climatologists, have the meanings that climatologists, rather than colloquial usage, define them to have, in the same way that the term “function” means different things to mathematicians, to computer programmers, and to people in general.

      • Berényi Péter says:

        I understand “projection” is a technical term in climate science, but I was not aware of the existence of an explicit definition.

        In fact there is one in the Glossary of IPCC TAR WG1:

        Climate projection
        “A projection of the response of the climate system to emission or concentration scenarios of greenhouse gases and aerosols, or radiative forcing scenarios, often based upon simulations by climate models. Climate projections are distinguished from climate predictions in order to emphasise that climate projections depend upon the emission/concentration/radiative forcing scenario used, which are based on assumptions concerning, e.g., future socio-economic and technological developments that may or may not be realised, and are therefore subject to substantial uncertainty.”

        There is a specific problem with it. We are actually not so much interested in confirming or falsifying projections based on old scenarios as in the validity of the underlying computational climate model. However, the source code is not archived, documented and published, so it is impossible to re-run the code after two decades with the actual data collected since then. This means we are not in a position to say anything meaningful about the 1990 IPCC Climate Projections, which makes it a game played outside the realm of science.
