This Week’s Finds (Week 306)

This week I’ll interview another physicist who successfully made the transition from gravity to climate science: Tim Palmer.

JB: I hear you are starting to build a climate science research group at Oxford.  What led you to this point? What are your goals?

TP: I started my research career at Oxford University, doing a PhD in general relativity theory under the cosmologist Dennis Sciama (himself a student of Paul Dirac). Then I switched gear and have spent most of my career working on the dynamics and predictability of weather and climate, mostly working in national and international meteorological and climatological institutes. Now I’m back in Oxford as a Royal Society Research Professor in climate physics. Oxford has a lot of climate-related activities going on, both in basic science and in impact and policy issues. I want to develop activities in climate physics. Oxford has wonderful Physics and Mathematics Departments and I am keen to try to exploit human resources from these areas where possible.

The general area which interests me is uncertainty in climate prediction: finding ways to estimate uncertainty reliably and, of course, to reduce it. Over the years I have helped develop new techniques to predict uncertainty in weather forecasts. Because climate is a nonlinear system, the growth of initial uncertainty is flow dependent. On some days, when the system is in a relatively stable part of state space, accurate weather predictions can be made a week or more ahead of time. In other, more unstable situations, predictability is limited to a couple of days. Ensemble weather forecast techniques help estimate such flow-dependent predictability, and this has enormous practical relevance.
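To make flow-dependent predictability concrete, here is a minimal sketch (my own illustration, not one of the operational schemes Palmer refers to) of an ensemble forecast with the classic Lorenz ’63 system: many runs are started from slightly perturbed initial conditions, and the spread of the ensemble at a given lead time measures how predictable that particular starting state is. The Lorenz parameters are the standard ones; the ensemble size, perturbation amplitude and starting points are assumptions chosen for the example.

import numpy as np

# Lorenz 63 parameters (standard values) and illustrative forecast settings
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, nsteps, nens = 0.01, 200, 50      # lead time = nsteps*dt = 2 time units

def lorenz(state):
    # state has shape (..., 3); returns the time derivative
    x, y, z = state[..., 0], state[..., 1], state[..., 2]
    return np.stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z], axis=-1)

def rk4_step(state):
    # one fourth-order Runge-Kutta step
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def ensemble_spread(start, seed=0):
    # perturb the "analysis" slightly, run the whole ensemble forward,
    # and report the mean standard deviation across members
    rng = np.random.default_rng(seed)
    ens = start + 1e-3 * rng.standard_normal((nens, 3))
    for _ in range(nsteps):
        ens = rk4_step(ens)
    return ens.std(axis=0).mean()

# Different starting states can give quite different spreads at the same
# lead time: that is the flow dependence described above.
for start in [np.array([1.0, 1.0, 20.0]), np.array([-10.0, -10.0, 25.0])]:
    print(start, "-> ensemble spread:", ensemble_spread(start))

In an operational system the perturbations are chosen much more carefully and the model is vastly larger, but the logic is the same: the ensemble spread, not a single deterministic run, is what tells you how far ahead a forecast can be trusted on a given day.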

How to estimate uncertainty in climate predictions is much more tricky than for weather prediction. There is, of course, the human element: how much we reduce greenhouse gas emissions will impact on future climate. But leaving this aside, there is the difficult issue of how to estimate the accuracy of the underlying computer models we use to predict climate.

To say a bit more about this, the problem is to do with how well climate models simulate the natural processes which amplify the anthropogenic increases in greenhouse gases (notably carbon dioxide). A key aspect of this amplification process is associated with the role of water in climate. For example, water vapour is itself a powerful greenhouse gas. If we were to assume that the relative humidity of the atmosphere (the amount of water vapour in the air, expressed as a percentage of the amount at which the air would be saturated) stayed constant as the atmosphere warms under anthropogenic climate change, then water vapour would amplify the climate change by a factor of two or more. On top of this, clouds — i.e. water in its liquid rather than gaseous form — have the potential to further amplify climate change (or indeed decrease it, depending on the type or structure of the clouds). Finally, water in its solid phase can also be a significant amplifier of climate change. For example, sea ice reflects sunlight back to space. However, as sea ice melts, e.g. in the Arctic, the underlying water absorbs more of the sunlight than before, again amplifying the underlying climate change signal.
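A standard way to make the “factor of two or more” quantitative is the linear feedback bookkeeping used in climate sensitivity analysis; the gain value below is an illustrative round number, not a figure quoted above. If \Delta T_0 is the warming with no feedbacks and f is the combined feedback gain, then

\Delta T = \Delta T_0 / (1 - f)

so a water vapour gain of roughly f = 0.5 already doubles the no-feedback response, while additional positive gains from clouds or ice albedo push the amplification higher, and a negative cloud gain would pull it back down.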

We can approach these problems in two ways. Firstly, we can use simplified mathematical models in which plausible assumptions (like the constant relative humidity one) are made to make the mathematics tractable. Secondly, we can try to simulate climate ab initio using the basic laws of physics (here, mostly, but not exclusively, the laws of classical physics). If we are to have confidence in climate predictions, this ab initio approach has to be pursued. However, unlike, say, temperature in the atmosphere, water vapour and cloud liquid water have more of a fractal distribution, with variability on both large and small scales. We cannot accurately simulate the small scales in a global climate model with a fixed (say 100 km) grid, and this, perhaps more than anything, is the source of uncertainty in climate predictions.

This is not just a theoretical problem (although there is some interesting mathematics involved, e.g. multifractal distribution theory and so on). In the coming years, governments will be looking to spend billions on new infrastructure for society to adapt to climate change: more reservoirs, better flood defences, bigger storm sewers, etc. It is obviously important that this money is spent wisely. Hence we need some quantitative and reliable estimate of the certainty that, in regions where more reservoirs are to be built, the climate really will get drier, and so on.

There is another reason for developing quantitative methods for estimating uncertainty: climate geoengineering. If we spray aerosols in the stratosphere, or whiten clouds by spraying sea salt into them, we need to be sure we are not doing something terrible to our climate, like shutting off the monsoons, or decreasing rainfall over Amazonia (which might then make the rainforest a source of carbon for the atmosphere rather than a sink). Reliable estimates of uncertainty of regional impacts of geoengineering are going to be essential in the future.

My goals? To bring quantitative methods from physics and maths into climate decision making.  One area that particularly interests me is the application of nonlinear stochastic-dynamic techniques to represent unresolved scales of motion in the ab initio models. If you are interested to learn more about this, please see this book:

• Tim Palmer and Paul Williams, editors, Stochastic Physics and Climate Modelling, Cambridge U. Press, Cambridge, 2010.

JB: Thanks! I’ve been reading that book. I’ll talk about it next time on This Week’s Finds.

Suppose you were advising a college student who wanted to do something that would really make a difference when it comes to the world’s environmental problems.  What would you tell them?

TP: Well, although this sounds a bit of a cliché, it’s important first and foremost to enjoy and be excited by what you are doing. If you have a burning ambition to work on some area of science without apparent application or use, but feel guilty because it’s not helping to save the planet, then stop feeling guilty and get on with fulfilling your dreams. If you work in some difficult area of science and achieve something significant, then this will give you a feeling of confidence that cannot be taught. Feeling confident in one’s abilities will make any subsequent move into new areas of activity, perhaps related to the environment, that much easier. If you demonstrate that confidence at interview, moving fields, even late in life, won’t be so difficult.

In my own case, I did a PhD in general relativity theory, and having achieved this goal (after a bleak period in the middle where nothing much seemed to be working out), I did sort of think to myself: if I can add to the pool of knowledge in this traditionally difficult area of theoretical physics, I can pretty much tackle anything in science. I realize that sounds rather arrogant, and of course life is never as easy as that in practice.

JB: What if you were advising a mathematician or physicist who was already well underway in their career?  I know lots of such people who would like to do something "good for the planet", but feel that they’re already specialized in other areas, and find it hard to switch gears.  In fact I might as well admit it — I’m such a person myself!

TP: Talk to the experts in the field. Face to face. As many as possible. Ask them how your expertise can be put to use. Get them to advise you on key meetings you should try to attend.

JB: Okay.  You’re an expert in the field, so I’ll start with you.  How can my expertise be put to use?  What are some meetings that I should try to attend?

TP: The American Geophysical Union and the European Geosciences Union have big multi-session conferences each year which include mathematicians with an interest in climate. On top of this, mathematical science institutes are increasingly holding meetings to engage mathematicians and climate scientists. For example, the Isaac Newton Institute at Cambridge University is holding a six-month programme on climate and mathematics. I will be there for part of this programme. There have been similar programmes in the US and in Germany very recently.

Of course, as well as going to meetings, or perhaps before going to them, there is the small matter of some reading material. Can I strongly recommend the Working Group One report of the latest IPCC climate change assessments? WG1 is tasked with summarizing the physical science underlying climate change. Start with the WG1 Summary for Policymakers from the Fourth Assessment Report:

• Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, Summary for Policymakers.

and, if you are still interested, tackle the main WG1 report:

• Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, Cambridge U. Press, Cambridge, 2007.

There is a feeling that since the various so-called "Climategate" scandals, in which the IPCC was implicated, climate scientists need to be more open about uncertainties in climate predictions and climate prediction models. But in truth, these uncertainties have always been openly discussed in the WG1 reports. These reports are absolutely not the alarmist documents many seem to think, and, I would say, give an extremely balanced picture of the science. The latest report dates from 2007.

JB: I’ve been slowly learning what’s in this report, thanks in part to Nathan Urban, whom I interviewed in previous issues of This Week’s Finds. I’ll have to keep at it.



You told me that there’s a big difference between the "butterfly effect" in chaotic systems with a few degrees of freedom, such as the Lorenz attractor shown above, and the "real butterfly effect" in systems with infinitely many degrees of freedom, like the Navier-Stokes equations, the basic equations describing fluid flow. What’s the main difference?

TP: Everyone knows, or at least thinks they know, what the butterfly effect is: the exponential growth of small initial uncertainties in chaotic systems, like the Lorenz system, after whom the butterfly effect was named by James Gleick in his excellent popular book:

• James Gleick, Chaos: Making a New Science, Penguin, London, 1998.

But in truth, this is not the butterfly effect as Lorenz had meant it (I knew Ed Lorenz quite well). If you think about it, the possible effect of a flap of a butterfly’s wings on the weather some days later involves an increase not only in the amplitude of the uncertainty, but also in its scale. If we think of a turbulent system like the atmosphere, comprising a continuum of scales, its evolution is described by partial differential equations, not a low order set of ordinary differential equations. Each scale can be thought of as having its own characteristic dominant Lyapunov exponent, and these scales interact nonlinearly.

If we want to estimate the time for a flap of a butterfly’s wings to influence a large scale weather system, we can imagine summing up all the Lyapunov timescales associated with all the scales from the small scales to the large scales. If this sum diverges, then very good, we can say it will take a very long time for a small scale error or uncertainty to influence a large-scale system. But alas, simple scaling arguments suggest that there may be situations (in 3 dimensional turbulence) where this sum converges. Normally, we think of convergence as a good thing, but in this case it means that the small scale uncertainty, no matter how small scale it is, can affect the accuracy of the large scale prediction… in finite time. This is quite different to the conventional butterfly effect in low order chaos, where arbitrarily long predictions can be made by reducing initial uncertainty to sufficiently small levels.
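To spell out the kind of simple scaling argument being alluded to (this is the standard Kolmogorov 1941 estimate, given here as an illustration rather than as Palmer’s own derivation): in three-dimensional turbulence an eddy of size \ell has a characteristic velocity u(\ell) ~ (\epsilon \ell)^{1/3}, where \epsilon is the rate of energy dissipation, so its turnover (error-growth) time is

\tau(\ell) ~ \ell / u(\ell) ~ \epsilon^{-1/3} \ell^{2/3}

Summing these times over octaves of scale \ell_n = L 2^{-n}, from the large scale L down to arbitrarily small scales, gives

\sum_n \tau(\ell_n) ~ \epsilon^{-1/3} L^{2/3} \sum_n 2^{-2n/3}

which is a convergent geometric series. So the total time for an error at arbitrarily small scales to contaminate the scale L is finite, no matter how small the scale at which the error starts.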

JB: What are the practical implications of this difference?

TP: Climate models are finite truncations of the underlying partial differential equations of climate. A crucial question is: how do solutions converge as the truncation gets better and better? More practically, how many floating point operations per second (flops) does my computer need to have in order that I can simulate the large-scale components of climate accurately? Teraflops, petaflops, exaflops? Is there an irreducible uncertainty in our ability to simulate climate no matter how many flops we have? Because of the "real" butterfly effect, we simply don’t know. This has real practical implications.
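A crude back-of-envelope calculation (my own illustration, ignoring memory, communication and the extra physics a real model carries) shows why the answer spans so many orders of magnitude. With explicit time stepping, a CFL-type stability condition ties the time step to the grid spacing, \Delta t \propto \Delta x, so the cost of a global simulation scales roughly as

cost \propto (1/\Delta x)^3 \times (1/\Delta t) \propto \Delta x^{-4}

Refining from a 100 km grid to a 1 km grid therefore multiplies the flop count by roughly 100^4 = 10^8, which is why the question of teraflops versus petaflops versus exaflops is not a detail.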

JB: Nobody has proved existence and uniqueness for solutions of the Navier-Stokes equations. Indeed, the Clay Mathematics Institute is offering a million-dollar prize for settling this question. But meteorologists use these equations to predict the weather with some success.  To mathematicians that might seem a bit strange.  What do you think is going on here?

TP: Actually, for certain simplifications to the Navier-Stokes equations, such as making them hydrostatic (which damps acoustic waves), existence and uniqueness can be proven. And for weather forecasting we can get away with the hydrostatic approximation for most applications. But in general existence and uniqueness haven’t been proven. The "real" butterfly effect is linked to this. Well, obviously the Intergovernmental Panel on Climate Change can’t wait for the mathematicians to solve this problem, but as I tried to suggest above, I don’t think the problem is just an arcane mathematical conundrum; rather, it may help us understand better what is possible to predict about climate change and what is not.

JB:  Of course, meteorologists are really using a cleverly discretized version of the Navier-Stokes equations to predict the weather. Something vaguely similar happens in quantum field theory: we can use "lattice QCD" to compute the mass of the proton to reasonable accuracy, but nobody knows for sure if QCD makes sense in the continuum.  Indeed, there’s another million-dollar Clay Prize waiting for the person who can figure that out.   Could it be that sometimes a discrete approximation to a continuum theory does a pretty good job even if the continuum theory fundamentally doesn’t make sense?

TP: There you are! Spend a few years working on the continuum limit of lattice QCD and you may end up advising government on the likelihood of unexpected consequences on regional climate arising from some geoengineering proposal! The idea that two such apparently different fields could have elements in common is something bureaucrats find hard to get their heads round.  We at the sharp end in science need to find ways of making it easier for scientists to move fields (even on a temporary basis) should they want to.

This reminds me of a story. When I was finishing my PhD, my supervisor, Dennis Sciama, announced one day that the process of Hawking radiation from black holes could be understood using the Principle of Maximum Entropy Production in non-equilibrium thermodynamics. I had never heard of this Principle before, no doubt a gap in my physics education. However, a couple of weeks later, I was talking to a colleague of a colleague who was a climatologist, and he was telling me about a recent paper that purported to show that many of the properties of our climate system could be deduced from the Principle of Maximum Entropy Production. That there might be such a link between black hole theory and climate physics was one reason I thought changing fields might not be so difficult after all.

JB: To what extent is the problem of predicting climate insulated from the problems of predicting weather?  I bet this is a hard question, but it seems important.  What do people know about this?

TP: John von Neumann was an important figure in meteorology (as well, for example, as in quantum theory). He oversaw a project at Princeton just after the Second World War to develop a numerical weather prediction model based on a discretised version of the Navier-Stokes equations. It was one of the early applications of digital computers. Some years later, the first long-term climate models were developed based on these weather prediction models. But then the two areas of work diverged. People doing climate modelling needed to represent lots of physical processes: the oceans, the cryosphere, the biosphere etc, whereas weather prediction tended to focus on getting better and better discretised representations of the Navier-Stokes equations.

One rationale for this separation was that weather forecasting is an initial value problem whereas climate is a "forced" problem (e.g. how does climate change with a specified increase in carbon dioxide?). Hence, for example, climate people didn’t need to agonise over getting ultra accurate estimates of the initial conditions for their climate forecasts.

But the two communities are converging again. We realise there are lots of synergies between short term weather prediction and climate prediction. Let me give you one very simple example. To know whether anthropogenic climate change is going to be catastrophic to society, or something we will be able to adapt to without too many major problems, we need to understand, as mentioned above, how clouds interact with increasing levels of carbon dioxide. Clouds cannot be represented explicitly in climate models because they occur on scales that can’t be resolved due to computational constraints. So they have to be represented by simplified "parametrisations". We can test these parametrisations in weather forecast models. To put it crudely (to be honest, too crudely): if the cloud parametrisations (and corresponding representations of water vapour) are systematically wrong, then the forecasts of tomorrow’s daily maximum temperature will also be systematically wrong.

To give another example, I myself have for a number of years been developing stochastic methods to represent truncation uncertainty in weather prediction models. I am now trying to apply these methods in climate prediction. The ability to test the skill of these stochastic schemes in weather prediction mode is crucial to having confidence in them in climate prediction mode. There are lots of other examples where a synergy between the two areas is important.
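For readers who like to see the idea in code, here is a toy sketch of a stochastic parametrisation, using the one-scale Lorenz ’96 system as a stand-in for a weather model: the unresolved "subgrid" tendency is represented by a simple linear fit plus AR(1) red noise. This is only an illustration of the general approach, not the scheme used in any operational model, and every parameter value below is an assumption chosen for the example.

import numpy as np

# Toy Lorenz 96 model with a stochastically parametrised subgrid term.
K, F = 40, 8.0            # number of resolved variables, external forcing
dt, nsteps = 0.005, 4000  # time step and number of steps (illustrative)
a, b = 0.3, 0.0           # assumed linear fit of the subgrid tendency to X
phi, sig = 0.95, 0.3      # AR(1) memory and noise amplitude (assumed)

def tendency(x, subgrid):
    # advection - damping + forcing - parametrised subgrid term
    return np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2)) - x + F - subgrid

rng = np.random.default_rng(0)
x = F + 0.1 * rng.standard_normal(K)   # perturbed initial state
e = np.zeros(K)                        # AR(1) stochastic component

for _ in range(nsteps):
    # red noise: correlated in time, so the "subgrid" errors have memory
    e = phi * e + sig * np.sqrt(1.0 - phi**2) * rng.standard_normal(K)
    subgrid = a * x + b + e            # deterministic fit + stochastic part
    # Heun (RK2) step, holding the stochastic term fixed over the step
    k1 = tendency(x, subgrid)
    k2 = tendency(x + dt * k1, subgrid)
    x = x + 0.5 * dt * (k1 + k2)

print("mean state:", round(x.mean(), 3), " variance:", round(x.var(), 3))

In practice the deterministic fit and the noise statistics would be estimated from a higher-resolution (or two-scale) version of the model, and the test of such a scheme is whether ensembles of forecasts made with it turn out to be statistically reliable when verified against observations.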

JB: When we met recently, you mentioned that there are currently no high-end supercomputers dedicated to climate issues.  That seems a bit odd.  What sort of resources are there?  And how computationally intensive are the simulations people are doing now?

TP: By "high end" I mean very high end: that is, machines in the petaflop range of performance. If one takes the view that climate change is one of the gravest threats to society, then throwing all the resources that science and technology allow at trying to quantify exactly how grave this threat really is seems quite sensible to me. On top of that, if we are to spend billions (dollars, pounds, euros etc.) on new technology to adapt to climate change, we had better make sure we are spending the money wisely — no point building new reservoirs if climate change will make your region wetter. So the predictions that it will get drier in such and such a place had better be right. Finally, if we are to ever take these geoengineering proposals seriously, we’d better be sure we understand the regional consequences. We don’t want to end up shutting off the monsoons! Reliable climate predictions really are essential.

I would say that there is no more computationally complex problem in science than climate prediction. There are two key modes of instability in the atmosphere: the convective instabilities (thunderstorms) with scales of kilometers, and what are called baroclinic instabilities (midlatitude weather systems) with scales of thousands of kilometers. Simulating these two instabilities, and their mutual global interactions, is beyond the capability of current global climate models because of computational constraints. On top of this, climate models try to represent not only the physics of climate (including the oceans and the cryosphere), but the chemistry and biology too. That introduces considerable computational complexity in addition to the complexity caused by the multi-scale nature of climate.

By and large individual countries don’t have the financial resources (or at least they claim they don’t!) to fund such high end machines dedicated to climate. And the current economic crisis is not helping! On top of which, for reasons discussed above in relation to the "real" butterfly effect, I can’t go to government and say: "Give me a 100 petaflop machine and I will absolutely definitely be able to reduce uncertainty in forecasts of climate change by a factor of 10". In my view, the way forward may be to think about internationally funded supercomputing. So, just as we have internationally funded infrastructure in particle physics and astronomy, so too could we have in climate prediction. Why not?

Actually, very recently the NSF in the US gave a consortium of climate scientists from the US, Europe and Japan, a few months of dedicated time on a top-end Cray XT4 computer called Athena. Athena wasn’t quite in the petaflop range, but not too far off, and using this dedicated time, we produced some fantastic results, otherwise unachievable, showing what the international community could achieve, given the computational resources. Results from the Athena project are currently being written up — they demonstrate what can be done where there is a will from the funding agencies.

JB: In a Guardian article on human-caused climate change you were quoted as saying "There might be a 50% risk of widespread problems or possibly only 1%.  Frankly, I would have said a risk of 1% was sufficient for us to take the problem seriously enough to start thinking about reducing emissions."

It’s hard to argue with that, but starting to think about reducing emissions is vastly less costly than actually reducing them.  What would you say to someone who replied, "If the risk is possibly just 1%, it’s premature to take action — we need more research first"?

TP: The implication of your question is that a 1% risk is just too small to worry about or do anything about. But suppose the next time you checked in to fly to Europe, they said at the desk that there was a 1% chance that volcanic ash would cause the aircraft engines to fail mid flight, leading the plane to crash, killing all on board. Would you fly? I doubt it!

My real point is that in assessing whether emissions cuts are too expensive, given the uncertainty in climate predictions, we need to assess how much we value things like the Amazon rainforest, or preventing the destruction of countries like Bangladesh or the African Sahel. If we estimate the damage caused by dangerous climate change — let’s say associated with a 4 °C or greater global warming — to be at least 100 times the cost of taking mitigating action, then it is worth taking this action even if the probability of dangerous climate change were just 1%. But of course, according to the latest predictions, the probability of realizing such dangerous climate changes is much nearer 50%. So in reality, it is worth cutting emissions if the value you place on current climate is comparable to or greater than the cost of cutting emissions.
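To make the arithmetic explicit, here is the expected-value calculation behind that statement, using the illustrative numbers given above: it is worth paying a mitigation cost C if the probability p of dangerous change times the damage D it would cause exceeds C,

p \times D > C

With D at least 100 C, even p = 0.01 gives p \times D \ge C, i.e. break-even, and p near 0.5 gives an expected damage of around 50 C. A full decision analysis would of course be more elaborate, but the basic logic is just this inequality.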

Summarising, there are two key points here. Firstly, rational decisions can be made in the light of uncertain scientific input. Secondly, whilst we do certainly need more research, that should not itself be used as a reason for inaction.

Thanks, John, for allowing me the opportunity to express some views about climate physics on your web site.

JB: Thank you!


The most important questions of life are, for the most part, really only problems of probability. – Pierre Simon, Marquis de Laplace

47 Responses to This Week’s Finds (Week 306)

  1. Tim van Beek says:

    TP said:

    …we can try to simulate climate ab initio using the basic laws of physics (here, mostly, but not exclusively, the laws of classical physics).

    What laws that are not classical are important? To me, classical means Newtonian physics, Maxwellian electrodynamics and classical (equilibrium) thermodynamics, up to the black body radiation law of Planck.

    Actually, very recently the NSF in the US gave a consortium of climate scientists from the US, Europe and Japan, a few months of dedicated time on a top-end Cray XT4 computer called Athena.

    What are the most advanced, top-end computers used for when they are not running climate models?

    Clouds cannot be represented explicitly in climate models because they occur on scales that can’t be resolved due to computational constraints. So they have to be represented by simplified “parametrisations”. We can test these parametrisations in weather forecast models.

    Sidenote: I’ve been reading a little bit in the book “Parameterization Schemes: Keys to Understanding Numerical Weather Prediction Models” by David J. Stensrud lately: the author explains that there are relevant cloud phenomena occurring on a scale of less than 5 kilometers (far smaller than the cell spacing of any grid model), so that good parameterizations of these processes are essential to weather prediction.

    • Nathan Urban says:

      I’d guess that the relevant non-classical physics is the quantum molecular physics involved in atmospheric radiative transfer (e.g., the greenhouse effect).

    • John Baez says:

      Tim wrote:

      What laws that are not classical are important?

      The infrared absorption spectrum of the atmosphere, and how it changes as we pump more greenhouse gases into the air, is incredibly important for global warming. We need quantum theory to compute this spectrum:

      [Image: the infrared absorption spectrum of the atmosphere; the original post links to details on who does these calculations.]

      I can also imagine that quantum theory might be important in understanding the nucleation and growth of very small water droplets or ice crystals — but I’m just guessing here.

      What are the most advanced, top-end computers used for when they are not running climate models?

      I don’t know. I forwarded this question to Tim (who is busy at some conference, maybe the one at the Isaac Newton Institute).

  2. John F says:

    The book sounds good. I’ve been working on an excuse to expense it. But the butterfly effect certainly exists in low order chaotic systems, especially in and near attractors. In general, solutions to partial differential equations have the same complexity as difference equations of the next higher order (and same dimensions). I would call third order difference equations in three dimensions low order, or maybe low-ish order.

    FWIW the solutions to a *first* order time differential equation have the same kinds of behaviors as the solutions to a single *first* order delay difference equation, even though a fixed delay doesn’t necessarily well approximate any given solution anywhere. I think that may be true of higher orders too.

    One application of finite truncation aka discretization of chaotic dynamics is to generate one-time pads. One extra order adds like a few extra nanoseconds of computation.

    • John Baez says:

      Tim was trying to make a distinction between the butterfly effect exhibited by physical systems with a few degrees of freedom, and the “real butterfly effect” exhibited by physical systems with infinitely many degrees of freedom, like the weather.

      (If you don’t like infinity, replace “infinitely” by “very” in the sentence above. It makes no practical difference.)

      In the butterfly effect exhibited by systems with few degrees of freedom, like the Lorenz attractor shown above, a small change in the initial conditions can grow exponentially as time passes. So, to predict the future to a certain accuracy for one more day, we may need to know the initial conditions twice as accurately (for example).

      In the “real butterfly effect” exhibited by systems with infinitely many degrees of freedom, the difficulty in making accurate predictions is much worse.

      After all, it’s not just finitely many numbers you need to know to describe the initial conditions, it’s infinitely many (or at least vastly many).

      If we’re lucky, the numbers we care about — e.g. the average wind velocities in cubic-kilometer cells — can be computed pretty well starting from just the numbers we care about. But more commonly, the numbers we care about will be highly affected by numbers we don’t care about — e.g. the average wind velocities in cubic-millimeter cells. There are lots more of these numbers we don’t care about.

      In this situation, it’s possible that to predict the future to a certain accuracy for one more day, we not only need to know some numbers twice as accurately — we also need to know vastly more numbers.

      And I believe Tim is saying that things can get even worse.

      When we’ve got a turbulent fluid, information percolates up from short distance scales (for example those millimeter-sized cubes) to large distance scales (those kilometer-sized cubes). That’s the problem I already mentioned. But how long does it take for information at arbitrarily small distance scales to significantly affect what’s going on at large distance scales? It may be a finite amount of time!

      If this is the case, we’re really screwed: we need an essentially infinite amount of information to predict the future past a certain amount of time.

      That’s how I interpreted his comments, anyway:

      TP: If you think about it, the possible effect of a flap of a butterfly’s wings on the weather some days later involves an increase not only in the amplitude of the uncertainty, but also in its scale. If we think of a turbulent system like the atmosphere, comprising a continuum of scales, its evolution is described by partial differential equations, not a low order set of ordinary differential equations. Each scale can be thought of as having its own characteristic dominant Lyapunov exponent, and these scales interact nonlinearly.

      If we want to estimate the time for a flap of a butterfly’s wings to influence a large scale weather system, we can imagine summing up all the Lyapunov timescales associated with all the scales from the small scales to the large scales. If this sum diverges, then very good, we can say it will take a very long time for a small scale error or uncertainty to influence a large-scale system. But alas, simple scaling arguments suggest that there may be situations (in 3 dimensional turbulence) where this sum converges. Normally, we think of convergence as a good thing, but in this case it means that the small scale uncertainty, no matter how small scale it is, can affect the accuracy of the large scale prediction… in finite time. This is quite different to the conventional butterfly effect in low order chaos, where arbitrarily long predictions can be made by reducing initial uncertainty to sufficiently small levels.

      JB: What are the practical implications of this difference?

      TP: Climate models are finite truncations of the underlying partial differential equations of climate. A crucial question is: how do solutions converge as the truncation gets better and better? More practically, how many floating point operations per second (flops) does my computer need to have in order that I can simulate the large-scale components of climate accurately? Teraflops, petaflops, exaflops? Is there an irreducible uncertainty in our ability to simulate climate no matter how many flops we have? Because of the "real" butterfly effect, we simply don’t know. This has real practical implications.

      When he said “low order chaos”, I’m pretty sure he did not mean “order” in the sense of the order of a differential equation — first derivatives, second derivatives etc. I’m pretty sure he meant the number of degrees of freedom.

      • John F says:

        I don’t mind infinity. Completely aside: I met Finkelstein as part of a scholarship tour of GaTech in high school, and he told me he was a finitist. He claimed infinity only existed in our imaginations, and I said our imaginations must be bigger than I thought. Later as an undergraduate I sent a Technicolor submission to IJTP he hated (IOW, no they don’t publish everything).

        Anyway, this scale mixing stuff even happens under Feigenbaum taffy-pulls etc., at low order, however “low order” may be defined.

        • John Baez says:

          John F wrote:

          Anyway, this scale mixing stuff even happens under Feigenbaum taffy-pulls etc., at low order, however “low order” may be defined.

          The phenomenon Tim Palmer is describing cannot possibly happen in a physical system with a low-dimensional phase space. It is quite distinct from the ‘taffy-pulling’ phenomenon you mention, where a small region in phase space gets stretched out over time.

          I agree that this ‘taffy-pulling’ phenomenon is quite common in systems with a low-dimensional phase space, such as the Lorenz model. The rate at which a small region of phase space gets stretched out is governed by the Lyapunov exponent.

          But the ‘real butterfly effect’ Tim Palmer describes is something else. It involves information propagating from short distance scales in physical space to longer distance scales in physical space — where ‘physical’ space means the 3d space we actually live in, quite distinct from phase space.

          It’s the sort of effect you might expect to find in a partial differential equation, not an ordinary differential equation.

          For a partial differential equation, the phase space is infinite-dimensional. This allows for a novel feature: not just a positive Lyapunov exponent, but an infinite Lyapunov exponent.

          Tim Palmer phrased this a bit differently, speaking of different Lyapunov exponents for different length scales, but it amounts to the same thing.

          I could say more, and I’ll be glad to if it helps…

      • Tim van Beek says:

        John said:

        Tim was trying to make a distinction between the butterfly effect exhibited by physical systems with a few degrees of freedom, and the “real butterfly effect” exhibited by physical systems with infinitely many degrees of freedom, like the weather.

        That’s how I understood Tim P’s remark, too. From what he said I gather that he sees this phenomenon as relevant to the question of where the boundary of knowledge lies, in principle, with regard to climate models. My first reaction to this was, however: “That’s probably a mathematical artefact that cannot be observed in nature”. Is there some evidence from observations etc. that shows that he is right and I am wrong, or is this still a pure math discussion at the negotiating table?

        • John F says:

          I had posted a comment about butterflies dancing on a pin, but it seems to have evaporated. I think clearly the physics changes a lot depending on the system, but the math (or maybe proto-math) of predictable behavior is what Wolfram has devoted a couple of decades to studying. He concludes low order chaos has the same behaviors as high order chaos, regardless of the details of how any of those words are defined. In particular, the principle of computational equivalence incorporates the *observation* that the quantitative information required and generated does *not* depend on order etc.

        • DavidTweed says:

          I’ve only read his tome A New Kind of Science, not any of the “journals” he’s started, but one of the annoyances about Wolfram is that he doesn’t seem at all concerned about questions of actual predictability. He seems more interested in showing that a simple model “has behaviours that look like the range of behaviours of a detailed model” in order to “understand it”, without considering the (engineering-ish) case where you want to see how far you can make actual predictions about a specific physical system.

          That seems to be relevant at this point: it seems plausible that few-variable chaos has the same formal computability class as huge-variable chaos, but that doesn’t necessarily rule out there being significant differences in how quickly precise prediction becomes computationally infeasible.

        • John Baez says:

          Tim van Beek wrote:

          My first reaction to this was, however: “That’s probably a mathematical artefact that cannot be observed in nature”.

          On what basis do you have that reaction?

          On the one hand, I agree that you can’t do an experiment where you run the Earth’s weather twice, and in the second run add an extra butterfly, and see what effect it causes. So, there’s a sense in which this question is mathematical.

          But I wouldn’t call it ‘pure mathematics’. I’d call it applied mathematics. Predicting the weather is a big business and people would pay big money for better forecasts. Also, there’s been intensive study of turbulence for many years.

          I don’t know how firmly the questions have been settled, and it would be great if Tim Palmer could tell us about that.

          But I don’t see why you think the “real butterfly effect” is likely to be a mere artifact. The Navier-Stokes equations seem to be a pretty reasonable model of fluid flow, and the difficulties people have in proving existence of solutions seems quite plausibly related to a scenario where changes in initial data at ultra-short distance scales rapidly lead to changes at long distance scales. It’s always these short-distance pathologies that make it hard to prove existence of solutions for PDE.

          Do you think the Navier-Stokes equations are an inadequate model, or do you think they don’t really display the “real butterfly effect”? Or something else?

      • Giampiero Campa says:

        Well, the way I see it, if you add degrees of freedom to a system, the dimension (the order) of its phase space increases accordingly. Right? Now if the system is unstable along the dimensions you keep adding, then the Lyapunov exponent will get bigger and bigger. But this does not look (to me) to be fundamentally different from “low order chaos”.

        As for scale, imagine that the system under consideration is some kind of undamped mechanical structure with many links. Or maybe a number of separated objects interacting gravitationally. Then an initial uncertainty that is confined to just the position or velocity of just one body somewhere will naturally propagate to the scale of the whole structure, because the phase space is stretched along all dimensions (and therefore affects the position and velocity of ALL the objects, located far apart in physical 3D space, at some point).

        So overall, I don’t see why the “real butterfly effect” is different from what happens for systems with lower degrees of freedom. What am I missing?

      • John Baez says:

        Giampero wrote:

        So overall, I don’t see why the “real butterfly effect” is different from what happens for systems with lower degrees of freedom. What am I missing ?

        I don’t know! Does everything Tim Palmer said here make perfect sense to you?

        If we think of a turbulent system like the atmosphere, comprising a continuum of scales, its evolution is described by partial differential equations, not a low order set of ordinary differential equations. Each scale can be thought of as having its own characteristic dominant Lyapunov exponent, and these scales interact nonlinearly.

        If we want to estimate the time for a flap of a butterfly’s wings to influence a large scale weather system, we can imagine summing up all the Lyapunov timescales associated with all the scales from the small scales to the large scales. If this sum diverges, then very good, we can say it will take a very long time for a small scale error or uncertainty to influence a large-scale system. But alas, simple scaling arguments suggest that there may be situations (in 3 dimensional turbulence) where this sum converges. Normally, we think of convergence as a good thing, but in this case it means that the small scale uncertainty, no matter how small scale it is, can affect the accuracy of the large scale prediction… in finite time. This is quite different to the conventional butterfly effect in low order chaos, where arbitrarily long predictions can be made by reducing initial uncertainty to sufficiently small levels.

        If it all makes perfect sense, you’re not missing anything. If not, ask me to explain something, and I will.

        Here’s a test of your understanding:

        I hope you see that in the “real” butterfly effect as described above, there’s a certain time such that after that, it would take an infinite amount of information about the initial data to predict the future to within a certain fixed accuracy. This is completely different from the usual butterfly effect, where the amount of information needed grows linearly with time — for example, one more decimal place of initial data for each extra day of prediction.

        So, they’re drastically different!

        • Phil Henshaw says:

          Well, just as there is more than one mode of evolution or more than one kind of state change, there is more than one kind of butterfly effect, isn’t there? I study ones that develop as cells of organization, like storms or ecologies, starting from the connection of a resource with a seed of organization for using it (i.e. an innovation as slight as the flutter of a butterfly wing) that sets off a process of multiplying developments. There seems to be quite a wide variety of cell-like systems that evolve that way when you go to catalog them.

        • Tim van Beek says:

          John said:

          But I don’t see why you think the “real butterfly effect” is likely to be a mere artifact. The Navier-Stokes equations seem to be a pretty reasonable model of fluid flow, and the difficulties people have in proving existence of solutions seems quite plausibly related to a scenario where changes in initial data at ultra-short distance scales rapidly lead to changes at long distance scales. It’s always these short-distance pathologies that make it hard to prove existence of solutions for PDE.

          This remark is far more advanced than my first reaction to Tim P’s explanation of the “real” butterfly effect :-)

          First of all, I’m sure there is a certain length scale and a certain time scale, where the Navier-Stokes equation breaks down and is not applicable any more. Both scales may be far too small to have any practical impact, i.e. the Navier-Stokes model may still tell us that we cannot predict the weather, for more than, say, two weeks, at a precision that is sufficient for everyday life (“do I have to carry an umbrella if I spend the afternoon in the English Garden in Munich?”). Further, the statement “we would need an infinite amount of information” is already a simplification, if you believe in the Bekenstein bound. So there certainly are boundaries where the Navier-Stokes equations don’t apply any more. But that’s not what I meant. What I mean is: climate is about physical quantities that are averages, over time and space, of microscopic quantities. Realistic boundary and initial conditions should lead to macroscopic variables that are stable against microscopic perturbations of the initial and boundary conditions. For example, the average temperature of a 100 m^3 cube of air at the surface in New York in 2100 should be almost independent of the number of butterflies in Sydney on 12 December 2010, in the sense that doubling or halving that number would lead to an effect that is too small to be measurable.

          This is just a naive speculation of someone who has not learned much about the Navier-Stokes equations :-)

        • John Baez says:

          Tim wrote:

          First of all, I’m sure there is a certain length scale and a certain time scale, where the Navier-Stokes equation breaks down and is not applicable any more.

          Sure: for example, the Navier-Stokes equations treat fluids as a continuum, and this is only approximately true. There are little things called atoms. But this is not a very strong objection to Tim Palmer’s ideas, in my opinion. After all, suppose the Navier-Stokes equations predict that arbitrarily short distance scales affect medium-sized distance scales in a short amount of time. And suppose this prediction breaks down only because fluids are not a continuum at arbitrarily small distance scales, but rather made of atoms. Then we still can conclude that fluid behavior at scales just a bit larger than atoms (where the Navier-Stokes equations become a good approximation) can affect medium-sized scales in a short amount of time.

          Realistic boundary and initial conditions should lead to macroscopic variables that are stable against microscopic perturbations of the initial and boundary conditions.

          “Should”? I agree that this would be very useful if true, but nature may not share our tastes.

        • Graham says:

          Here’s some rather bizarre evidence that you might all be worrying too much. Les Hatton, writing in his book Safer C, says that in 1973-4 he was working for the UK Met Office, and discovered a gross error in the existing model (an accidental transposition of two assembler statements) which zeroed the nonlinear terms in the Navier-Stokes equations at every other time step. After correction, a 72-hour forecast was rerun: the differences were almost undetectable. He adds: “To this day the author does not understand why.”

        • Tim van Beek says:

          John said:

          “Should”? I agree that this would be very useful if true, but nature may not share our tastes.

          Should as in “I should learn more about Navier-Stokes equations, but this is what I would like from my mathematical model”. It is always helpful to have a concrete objective when one starts to learn about a huge topic like Navier-Stokes equations, in this case that could be “find evidence that there is a model where arbitrarily small perturbations lead to significant changes of macroscopic variables in finite time”. Something like the theorem that says that one can find a polynomial such that an arbitrarily small perturbation of the coefficients leads to an arbitrarily large change in the zeroes.

          Graham said:

          Here’s some rather bizarre evidence that you might all be worrying too much…

          Interesting! Steve Easterbrook wrote on his blog Serendipity that coding errors in climate models usually don’t affect the qualitative statements that climate scientists infer from the model runs. I suspect that, maybe, people don’t easily find coding errors that make the model look right rather than wrong in the eye of the beholder, but find bugs that have the opposite effect far more easily.

        • Tim van Beek says:

          Added continuum hypothesis to the Azimuth project.

        • John Baez says:

          Tim wrote:

          It is always helpful to have a concrete objective when one starts to learn about a huge topic like Navier-Stokes equations, in this case that could be “find evidence that there is a model where arbitrarily small perturbations lead to significant changes of macroscopic variables in finite time”.

          Ok, great! Yes, I think that’s an excellent goal.

          When some people say nature “should” act a certain way, it means they’ve made up their minds that’s how things must be. Your approach sounds a lot more productive and interesting.

          It might be nice to find a reference for this remark of Tim Palmer’s:

          But alas, simple scaling arguments suggest that there may be situations (in 3 dimensional turbulence) where this sum converges.

          I don’t know these “simple scaling arguments”, but I can’t help but suspect that they’re related to Kolmogorov’s theory of turbulence. Using a couple of unproved assumptions together with a ‘simple scaling argument’ he derived a power spectrum for turbulent flow in 3 dimensions:

          E(k) = c k^{-5/3}

          which means that the kinetic energy contained in eddies with length scale 1/k is proportional to k^{-5/3}. This is supposed to be true for eddies that are:

          • much smaller than the size of the container (for fluid in a container)

          but

          • much larger than the Kolmogorov length scale.

          Apparently this prediction is fairly well confirmed by experiment, but not exactly correct, and current research is focused on getting a more accurate theory.

          Note that Kolmogorov did not derive his results from the Navier-Stokes equations! He just assumed that ‘fully developed turbulence’ was isotropic and self-similar in a large range of length scales — much smaller than the size of the container and much larger than the Kolmogorov length scale.

          What’s the Kolmogorov length scale? It’s the unique length scale that you can construct using:

          • the viscosity of your fluid, \nu

          and

          • the rate at which your turbulence dissipates energy, \epsilon

          I think the intuitive idea is that below the Kolmogorov length scale, viscosity smooths out the fluid flow and prevents the crazy velocity vector fields we see in turbulence. In other words, the fluid’s velocity vector field looks essentially constant as a function of position, below this length scale.
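          For completeness, here is the dimensional analysis (this is the standard construction, not something special to this discussion): viscosity has units [\nu] = m^2/s and the dissipation rate has units [\epsilon] = m^2/s^3, so the only length scale you can build from the two is

          \eta = (\nu^3 / \epsilon)^{1/4}

          with a corresponding time scale \tau_\eta = (\nu/\epsilon)^{1/2}. For the atmosphere \eta is usually quoted as being of order a millimetre, which is one way of seeing why the self-similar range stops far short of molecular scales.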

          Also, I have a sad admission to make. I now suspect that all my previous remarks about ‘arbitrarily small length scales’ were wrong!

          I feel I should have said ‘length scales down to the Kolmogorov length scale’. The Kolmogorov theory says that 3d turbulence is self-similar, but only in a range of length scales between the size of your container and the Kolmogorov length scale. I now bet that viscosity smothers chaos below the Kolmogorov length scale.

          This may make you happier! Thanks for pushing me on this point.

        • John Baez says:

          Graham wrote:

          Here’s some rather bizarre evidence that you might all be worrying too much…

          Very interesting!

          Tim wrote:

          Steve Easterbrook wrote on his blog Serendipity that coding errors in climate models usually don’t affect the qualitative statements that climate scientists infer from the model runs. I suspect that, maybe, people don’t easily find coding errors that make the model look right rather than wrong in the eye of the beholder, but find bugs that have the opposite effect far more easily.

          That seems like the only reasonable explanation of this effect. It would seem odd to have results that depend very sensitively on the initial data, but very weakly on which program you use to compute the results.

          Here’s some evidence that we should be worrying more. There’s a mathematician David Orrell who apparently stirred up a controversy by claiming that the main limitation on weather (not climate) prediction is not the butterfly effect but errors in the models. I’m having trouble finding a scientifically reputable discussion of this controversy, so here’s something from the (Australian) ABC News:

          With Chaos Theory, the error starts small and then gets bigger with time and then gets huge. But this is not, repeat NOT, what happens with the weather.

          In weather forecasts, the error becomes very large very rapidly, and then begins to tail off – so most of the error in the weather forecasts is not related to Chaos Theory.

          This really bothered David Orrell, a mathematician at the University College in London. He and his fellow mathematicians started thinking about what would happen if the actual mathematical models that the meteorologists use to predict the weather were wrong. They proved a mathematical theorem that predicted exactly how, if a model really was wrong, its errors would grow as time progressed. In fact, these errors should follow a ‘Square Root Law’ – growing very rapidly at first, and then slowing down after a few days. And believe it or not, this is how the errors in the weather forecasts behave.

          In other words, according to David Orrell, the main thing stopping us from getting accurate weather forecasts three days down the line is not the Butterfly Effect (which is real), but the errors in the models.

          His theory can’t say where the errors are, only that there are errors. And once the mathematicians and the meteorologists get together and come up with better models of the weather, they should be able to make dead accurate forecasts up to three days down the line. The chaos effects will then begin to kick in after about a week or so.

        • DavidTweed says:

          My experience in research engineering programming (which may differ from research science programming) is that there’s somewhat of a difference compared to industrial programming. In industrial programming you often have a good enough specification that you can write unit tests and thus direct your attention away from the parts that pass and onto those that fail, so Tim’s idea is quite plausible there.

          The programs I wrote didn’t have precise unit tests for most of the interesting stuff, and exact models tended to get tweaked often enough that it wasn’t unusual to spot a problem in a bit of code that appeared to be “working”, although spotting issues in bits that weren’t was still more common. As an aside, there’s nothing more crushing than having a program that’s “working well”, spotting that some routine was written completely wrong, fixing it and having the program now perform abysmally.

          It’s also not completely surprising that a program might be sensitive to input data but not to large sections of program code, in that it is possible to make a mistake and not in practice “call” a large amount of program code at all, and not notice unless you explicitly verify the run-time call chains. (Most programmers learn this lesson from experience and are paranoid about checking, but it’s still possible.)

          One of the problems with the “computer-using sciences”, IMO, is that often anyone senior is too busy with grants and high-level issues to have time to work on the programs, with the actual detailed work being done only by graduate students or post-docs. This has the side effect that the empirical heuristic knowledge of the “that output looks suspicious” and “that kind of output is symptomatic of a problem with …” gets thrown away every couple of years, and it wouldn’t surprise me if the same issues get repeatedly reintroduced.

        • Giampiero Campa says:

          Hi John, yes, I can see the fact that it would take infinite information to predict the future within a certain accuracy.

          If this is the difference between the “real” butterfly effect and the other one, then OK, I get it.

          I understand less well the fact that you can assign Lyapunov exponents to scales, but I think it is just a different way of expressing them.

          If the most prominent characteristic of the “real” butterfly effect is that the uncertainty propagates from small scales to larger ones, then it seems to me that the same would happen if you have N objects interacting e.g. gravitationally (with N not even too large) distributed in 3D space. Or in a large undamped 3D structure with many links.

          But maybe I am mistaken…

  3. David says:

    These interviews are so incredibly enlightening to me. I only wish that they could be exposed to a greater audience.

    • John Baez says:

      Thanks! Tell your friends about them!

      Anyone have ideas for how to get more people to read ’em?

      • Charlie C says:

        Twitter?

      • Eugene says:

        Have Sierra Club advertise them?

        I doubt that Sarah Palin would want to promote them, otherwise that would be an interesting audience to reach.

      • Charlie C says:

        or Azimuth on Facebook?

      • John Baez says:

        I never use Twitter or Facebook, and I don’t really want to start… but because I don’t use them, I can’t estimate how helpful they’d be.

        So, for you people out there who do use these inventions of Satan: do you think lots more people would find out about Azimuth if it had some sort of presence on Facebook and/or Twitter? And if so, what is the bare minimum amount of work I could do that would create a useful effect along these lines? Or is the effect roughly proportional to the work? I.e., would I have to keep tweeting over and over to let people know that I’m still here — sort of like a songbird?

        • Frederik De Roo says:

          would I have to keep tweeting over and over to let people know that I’m still here

          which reminds me of the following passage in the Extended Phenotype:

          Today we would recognize that if receiver sensitivity increases, signal strength does not need to increase but, instead, is more likely to decrease owing to the attendant costs of conspicuous or loud signals. This might be called the Sir Adrian Boult principle […] In those cases where animal signals really are of mutual benefit, they will tend to sink to the level of a conspiratorial whisper: indeed this may often have happened, the resulting signals being too inconspicuous for us to have noticed them. If signal strength increases over the generations this suggests, on the other hand, that there has been increasing sales resistance on the side of the receiver

  4. Phil Henshaw says:

    Share the idea of the question being asked, rather than promoting the interviews? Aren’t most contagious ideas a matter of asking the right question? (…and blind spots a matter of not asking it?)

  5. Phil Henshaw says:

    Thanks for posting the “Continuum hypothesis” link. http://www.azimuthproject.org/azimuth/show/Continuum+hypothesis, “the assumption that fluids can be modelled with functions of real numbers”

    Now that’s a very curious hypothesis…!! What makes my approach different is that some time ago I mathematically proved somewhat the opposite. It’s only a very short scheme of repeatedly differentiating a standard equation and then re-integrating it, to discover the real polynomial form implied.

    I call it the general “natural law of continuity”, but the math seems to say (and observation confirms) that the continuity of physical processes has to take special kinds of radical “jumps” to comply with energy conservation. It’s because of the limits of organization for any scale of physical process, combined with the constraint against infinite energy density. Now that I’m learning how to write in a better formal style I should rewrite the paper entirely… (sorry), but http://www.synapse9.com/drtheo.pdf has the nuts and bolts.

    • Tim van Beek says:

      Sorry, I’m not quite sure I understand what your approach in this context is, so I’ll simply talk a little bit about my understanding of the preceding discussion. In the context of continuum mechanics the hypothesis is quite simple:

      1. we know that matter is built from atoms, which are particles,

      2. we ignore this and use continuous functions of the space coordinate x and time t to describe, e.g., the density of a fluid with a function \rho(x, t).

      This function does not make sense, physically, on length scales shorter than, for example, the radius of an atom. But we pretend that it does and assume that this will result in a useful approximation to natural phenomena. Now it could happen that signals/effects from length scales below the size of an atom grow and influence phenomena on a larger scale, a scale that we would actually like to describe.

      And that’s what I called a “mathematical artefact”. Maybe I should have said: In a model where this happens the continuum hypothesis is not valid.
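      To make the coarse-graining in step 2 explicit (this is just the standard textbook definition spelled out): take \rho(x, t) to be the mass per unit volume averaged over a small box V(x) around the point x,

      \rho(x, t) \approx \frac{1}{|V(x)|} \sum_{\text{atoms } i \in V(x)} m_i

      where V(x) is large compared to atomic scales but small compared to the scales we actually want to describe. The continuum hypothesis amounts to assuming the result doesn’t depend much on the exact choice of V(x).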

      • Phil Henshaw says:

        Well, the concept we are both referring to is “scale”.

        I’m talking about the “length scales” of the processes that have to develop for macroscopic energy transfers to satisfy energy conservation. I’m looking at the implied necessity of having multiple scales of organization to do this, starting from a prohibition of infinite rates of change, using it much as the prohibition of infinite charge density was used to prompt the search for atoms.

        The idea is that the conservation laws imply there need to be bridge processes of various kinds, through which physical mechanisms of macroscopic scale (perhaps not yet identified) can develop without infinite rates of change in their energy flows. Statistical physics, as I understand it, ignores that and is only concerned with the probability of outcomes, not the intimate details of what physical mechanisms need to develop to get there.

        So, I’m not defining an equation to “result in a useful approximation of a natural phenomenon”. I’m defining an envelope of possibilities, set by the conservation laws, within which macroscopic processes can develop. That is where I can then expect to find phenomena satisfying the complex requirements. I seem to find them fairly readily, knowing what to look for: changes of state seem generally to occur by means of complex local processes that begin with nucleation and lead to the development of the physical systems required for the energy flows that observably occur.

        So I use it as a map to help find how the instrumental mechanisms of energy flow for macroscopic processes will occur, often in circumstances where other means of prediction might suggest one is likely to develop.

        • Frederik De Roo says:

          Comparing Tim’s and Phil’s posts has actually clarified for me the relation between Kolmogorov complexity and Shannon information. The message that provides the most new information is indeed the most complex ;-)

        • Phil Henshaw says:

          Thanks, that helps clarify that. Then a system with observable properties beyond the ability of your actual or potential information to describe can’t even be considered to be an information construct at all.

          That’s what I look at: these intense concentrations of complexly organized systems that fairly clearly exceed the potential of information to describe. So I just construct information definitions for the functional boundaries that uniquely locate them, and more or less wait for them to “misbehave” so I can study them.

  6. Graham says:

    I remembered a bit of jargon: “closure problem”. Quoting from

    http://amsglossary.allenpress.com/glossary/search?id=closure-problem1

    “closure problem – A difficulty in turbulence theory caused by more unknowns than equations. The closure problem of turbulence is alternately described as the requirement for an infinite number of equations, which would also be impossible to solve. This problem is apparently associated with the nonlinear nature of turbulence, and the traditional analytical approach of Reynolds averaging the governing equations to eliminate linear terms while retaining the nonlinear terms as statistical correlations of various orders (i.e., consisting of the product of multiple dependent variables). The closure problem is a long-standing unsolved problem of classical (Newtonian) physics. While no exact solution has been found to date, approximations called closure assumptions can be made to allow approximate solution of the equations for practical applications.”
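    To make the “more unknowns than equations” point concrete (my gloss, not part of the glossary entry): write each velocity component as a mean plus a fluctuation, u_i = \bar{u}_i + u_i', and average the nonlinear advection term. You get

    \overline{u_i u_j} = \bar{u}_i \, \bar{u}_j + \overline{u_i' u_j'}

    The Reynolds stresses \overline{u_i' u_j'} are new unknowns; writing evolution equations for them brings in triple correlations like \overline{u_i' u_j' u_k'}, and so on without end. That’s the “infinite number of equations”.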

    • John Baez says:

      That’s a helpful bit of jargon, Graham.

      I don’t find that explanation quite as crystalline in its clarity as I would desire. But I guess the idea is that if we try to predict the weather using a discretization where our variables are quantities (e.g. wind velocities) averaged over kilometer-sized cubes, we find that it’s hard to predict what these variables will do in the future without knowing averaged quantities in smaller cubes — say, meter-sized cubes. And if we decide to work with quantities averaged over meter-sized cubes, we find that those are hard to predict without knowing averaged quantities in even smaller cubes. And so on, at least down to the Kolmogorov length scale, which is unfortunately very small, less than a centimeter — leaving us with an impractically large number of cubes.

      So, we throw up our hands in disgust, work with the smallest cubes our computer can handle, and make some guess about what’s going on at smaller scales: a “closure assumption”.
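      To get a feel for the numbers, here’s a rough back-of-the-envelope cell count (my own sketch; the millimeter figure for the Kolmogorov scale is only order-of-magnitude):

      ```python
      # Rough count of grid cells needed to resolve a cube of air 1 km on a side
      # down to various cell sizes; order-of-magnitude only.
      side = 1e3  # metres
      for name, cell in [("1 km cells", 1e3),
                         ("1 m cells", 1.0),
                         ("1 mm cells (roughly the Kolmogorov scale)", 1e-3)]:
          n_cells = (side / cell) ** 3
          print(f"{name}: about {n_cells:.0e} cells")
      # Prints roughly 1, 1e9 and 1e18 cells: hence the closure assumption.
      ```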

  7. John Baez says:

    Tim wrote:

    Added continuum hypothesis to the Azimuth project.

    Using some jargon taken from that page and from my remarks above:

    I had thought the ‘real butterfly effect’ caused the following problem in weather prediction:

    • Small changes in initial data will propagate up from length scales close to the mean free path to length scales close to a kilometer in roughly a week.

    However, after revisiting Kolmogorov’s theory of turbulence, I’m guessing the idea is:

    • Small changes in initial data will propagate up from length scales close to the Kolmogorov length scale to length scales close to a kilometer in roughly a week.

    I’m not sure how big the Kolmogorov length scale is for air near the Earth’s surface — it’s the size of the smallest turbulent eddies — but since it’s smaller than a centimeter, the ‘real butterfly effect’ would still be a disaster if you wanted to predict the weather accurately for more than a week.
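    For what it’s worth, the standard estimate for that scale (textbook turbulence theory, not something from the interview) is

    \eta = \left( \frac{\nu^3}{\epsilon} \right)^{1/4}

    where \nu is the kinematic viscosity and \epsilon is the rate of energy dissipation per unit mass. With \nu \approx 1.5 \times 10^{-5} m²/s for air and plausible boundary-layer dissipation rates, \eta comes out to around a millimeter, give or take, consistent with “smaller than a centimeter”.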

    (I’m picking the figure of ‘a week’ somewhat randomly, based on things I’ve read.)

  8. Tim van Beek says:

    Heads up: Steve Easterbrook’s latest post on Serendipity is about Tim Palmer’s talk at the AGU conference: Tim Palmer on Building Probabilistic Climate Models.

  9. […] John Baez has a great in-depth interview with Tim over at […]

  10. Bill Richter says:

    John, it sounds like you & Tim Palmer are relating two different-sounding things: the unpredictability of the weather and the unproved short-time existence and uniqueness for solutions of the Navier-Stokes equations. That sounds cool, but I didn’t understand the reasoning (involving summing up all the Lyapunov timescales). I understood this much from you about weather unpredictability:

    we need an essentially infinite amount of information to predict the future past a certain amount of time.

    Can you explain how this (your point about too much “information percolating up from short distance scales”) makes NS existence & uniqueness hard to prove? I don’t suppose this is related to chaos in the n-body problem, which I think is caused by possible singularities arising from collisions.

    • John Baez says:

      Hi, Bill!

      It’s hard for me to explain this stuff with crystalline clarity, for three reasons:

      1) I’m not a real expert on it. (That’s actually a sufficient reason, but I’ll give two more.)

      2) As far as I can tell, nobody knows that we need an infinite amount of information about the initial data to compute a solution of the Navier-Stokes equation to within a given desired accuracy past a certain time.

      3) Of course nobody knows if the Navier-Stokes equations have solutions past a certain time, given some smooth initial data. This is the million-dollar Clay Prize puzzle.

      But the idea is that issues 2) and 3) are related.

      Of course they’re related in an obvious way. If, given some initial data, the Navier-Stokes equations don’t have a solution past a certain time, it’s silly to ask about predicting what the solution will be after that time.

      But there’s also a more interesting relation. Very roughly, I think it goes like this.

      Say we have a box of fluid that’s one meter across. We can describe the velocity vector field of this fluid as follows. First, give the average velocity over the entire box. Second, divide the box into 2³ little boxes, and for each of these, give the average velocity in the little box minus the average in the bigger box. Third, divide each of these little boxes into 2³ smaller boxes and repeat the procedure. Fourth… etcetera.

      This is a simple way to take our information about the velocity vector field and break it up into pieces corresponding to different length scales: the one-meter scale, the half-meter scale, the quarter-meter scale, and so on.
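      Here’s a toy sketch of that decomposition for a single velocity component sampled on a regular grid (my own illustration; the function name and the random test field are made up):

      ```python
      import numpy as np

      def multiscale_averages(v, levels):
          """Decompose a field sampled on a (2**levels)^3 grid into per-scale
          corrections: at each level, the average over each sub-box minus the
          average over the parent box, as described above."""
          out = []
          prev = v.mean() * np.ones((1, 1, 1))   # whole-box average
          out.append(prev.copy())
          for k in range(1, levels + 1):
              m = 2**k                 # sub-boxes per side at this level
              s = v.shape[0] // m      # samples per sub-box side
              avg = v.reshape(m, s, m, s, m, s).mean(axis=(1, 3, 5))
              parent = np.repeat(np.repeat(np.repeat(prev, 2, 0), 2, 1), 2, 2)
              out.append(avg - parent) # correction relative to parent boxes
              prev = avg
          return out

      # Example: one velocity component on an 8x8x8 grid, i.e. 3 levels.
      v = np.random.randn(8, 8, 8)
      pieces = multiscale_averages(v, 3)
      ```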

      But now suppose we’re trying to approximately solve a discretized version of the Navier-Stokes equation. Then we give up after the nth step: for example, we take our cube and chop it up into lots of little cubes that are each 2⁻ⁿ meters across, and hope that’s good enough.

      If we do this, we can approximate the Navier-Stokes equations by an ordinary differential equation that says how the velocities at length scales above 2⁻ⁿ evolve in time.

      However, this ordinary differential equation could be chaotic.

      Let’s suppose it is. Then it has some Lyapunov time Tₙ, meaning roughly that any error in our initial data will double after time Tₙ.
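      In symbols, just restating that doubling rule, with \epsilon(t) denoting the size of the error at time t:

      \epsilon(t) \approx \epsilon(0) \, 2^{t/T_n}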

      Even worse, it could be true that Tₙ gets shorter as n increases!

      This might not be so bad if information at short distance scales didn’t affect the information at long distance scales very much. You could say “I only care about the wind velocity at the 1-meter scale, so I don’t care if it’s extremely hard to predict the velocity at the 1/1024-meter scale after a very short time.”

      However, it might be true that the information at short distance scales affects the information at longer distance scales quite a lot, quite soon.

      Then we could be in real trouble. As we subdivide our cube more finely to describe our velocity vector field more accurately, we’re punished by a shorter Lyapunov time. Even worse, any error in our initial data at the shortest distance scale we consider quickly affects what’s going on at the 1-meter scale.
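      One illustrative way to quantify the trouble (a toy assumption of mine, not anything Tim derived): suppose the Lyapunov times shrink geometrically, Tₙ = T₁ rⁿ⁻¹ with r < 1, and that an error at scale n needs roughly a time Tₙ to contaminate scale n-1. Then the total time for an error at arbitrarily fine scales to reach the 1-meter scale is bounded by

      \sum_{n=1}^{\infty} T_n = \frac{T_1}{1 - r} < \infty

      so refining the grid further buys essentially no extra predictability. That’s the “summing up all the Lyapunov timescales” that Bill mentioned.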

      I hope you can see that in this situation, it could be impossible to use a discretization to compute solutions of the Navier-Stokes equation to within a desired accuracy past a certain short time.

      And if you can’t figure out a way to compute the solution, you should worry that maybe the solution doesn’t even exist! After all, a great way to prove the existence of a solution would be to construct it as a limit of better and better approximate solutions. But here that strategy seems to be failing.

      Whew. This took a long time to write, and there are still lots of places where it could be improved. Please read it three times before asking questions!

      • Bill Richter says:

        Thanks, John! I hope your trip to Vietnam went well. I have a few simple questions, but first a retraction: Charles Fefferman says they have proved short-term existence for NS. Fefferman says the long-term solution problem (top of p. 3) is that we would get infinite velocity and vorticity at the blowup time T. I don’t think you or Tim Palmer discussed infinite velocity and vorticity.

        Both you & Tim say that when you discretize at a 2⁻ⁿ timescale, you get an ODE. That’s probably pretty basic, but I don’t understand that.

        I liked your description of the problem if the Tₙ get shorter. That really seems to mean you couldn’t predict the weather on a computer. Is that Tim’s nonlinear interaction between the timescales? He said “that there may be situations (in 3 dimensional turbulence)” where we’re in real trouble. Can you argue that in Tim’s turbulence situation, your Tₙ do get shorter? BTW I don’t understand Tim’s convergence & divergence at all.

      • John Baez says:

        Bill wrote:

        I don’t think you or Tim Palmer discussed infinite velocity and vorticity.

        No. He probably understands how that’s related to what he said. I don’t.

        In fact, it’s quite possible that a lot of what I’ve said so far is somewhat inaccurate — so if you really want to understand this stuff, take everything I say with a mongo grain of salt!

        I wish someone working on fluid dynamics would join this conversation and help us out. But barring that, I suggest taking a look at this:

        • Terry Tao, Why global regularity for Navier-Stokes is hard.

        You’ll see that he makes crucial use of a certain scale-invariance property of the Navier-Stokes equations. This is presumably somehow related to the multi-scale analysis I was sketching.

        However, I was talking about how unknown short-scale information causes unpredictability at large distance scales: e.g., how the flap of a butterfly’s wings today can cause a typhoon next year.

        He, on the other hand, is talking about how energy spread over large distance scales can concentrate down to shorter and shorter distance scales, potentially causing a singularity at some point! This is what he wants to rule out, to prove global existence for Navier-Stokes:

        In order to prevent blowup, therefore, we must arrest this motion of energy from coarse scales (or low frequencies) to fine scales (or high frequencies).

        So maybe I’m upside down and backwards — or quite possibly both effects are very important, but in different ways.

        Both you & Tim say that when you discretize at a 2⁻ⁿ timescale, you get an ODE.

        No! We’re saying that if you discretize at any fixed spatial distance scale, you get an ODE.

        For example, if you chop a unit cube up into 8ⁿ little cubes, and keep track of the average velocity, average pressure and average density in all these little cubes, that’s 5 × 8ⁿ variables to keep track of. Think of all these variables as components of a single vector

        x \in \mathbb{R}^{5 \times 8^n}

        Then, you can discretize the Navier-Stokes equations to obtain a multivariable ODE, sort of like this:

        \frac{d x}{d t} = F(x)

        where F is some horribly complicated function.
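        Just to make the shape of the thing concrete, here’s a toy sketch in Python (the right-hand side F below is a made-up placeholder, not a real discretized Navier-Stokes operator):

        ```python
        import numpy as np

        n = 3                      # refinement level
        num_vars = 5 * 8**n        # 5 averaged quantities per little cube

        def F(x):
            # Placeholder right-hand side. A real solver would encode the
            # discretized advection, pressure-gradient and viscous terms,
            # plus a closure assumption for the unresolved scales.
            return -0.1 * x + 0.01 * np.roll(x, 1) * x

        x = 1e-3 * np.random.randn(num_vars)   # some coarse-grained initial state
        dt, steps = 1e-3, 1000
        for _ in range(steps):                 # forward Euler time-stepping
            x = x + dt * F(x)
        ```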

        I’m not an expert on this, so I could be making even more mistakes, but I’m pretty sure this is the right general idea.

        I should warn you, however, that doing this discretization is not a straightforward business: it requires intelligent choices, which basically amount to guesses about what’s going on at length scales shorter than the shortest length scale we’re explicitly considering!

        I should also add that there’s nothing sacred about the number 8ⁿ here; I’m using it merely because in my original story I imagined repeatedly chopping the cubes we had into 8 smaller cubes, to study shorter and shorter length scales.
