Mathematical Economics

My student Miguel Carrión Álvarez got his Ph.D. in 2004, back when I was still working on loop quantum gravity. So he decided to work on a rigorous loop quantization of the electromagnetic field. I like his thesis a lot:

• Miguel Carrión Álvarez, Loop Quantization versus Fock Quantization of p-Form Electromagnetism on Static Spacetimes.

However, he decided to leave mathematical physics when he got his degree… and he switched to finance.

There’s a lot of math in common between quantum field theory and mathematical finance. When you take quantum fluctuations in quantum fields, and replace time by imaginary time, you get random fluctuations in the stock market!

Or at least in some models of the stock market. One difference between quantum field theory and mathematical finance is that the former is famous for predicting certain quantities with many decimal digits of accuracy, while the latter is famous for predicting certain quantities with no digits of accuracy at all! I’m talking about the recent financial crisis.
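Here is the simplest instance of that imaginary-time dictionary, as a minimal sketch in units where the mass and Planck's constant are 1. Substituting t \to -i\tau turns the free-particle Schrödinger propagator into the heat kernel, which is the transition density of Brownian motion:

K_{\mathrm{QM}}(x,t) = \frac{1}{\sqrt{2 \pi i t}} e^{i x^2 / 2 t} \quad \longrightarrow \quad K_{\mathrm{heat}}(x,\tau) = \frac{1}{\sqrt{2 \pi \tau}} e^{-x^2 / 2 \tau}

And Brownian motion in the logarithm of the price is exactly the noise in the Black-Scholes model, whose pricing equation turns into the heat equation under a similar change of variables.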

Miguel and I share an interest in the failures of neoclassical economics. My interest comes from the hope — quite possibly a futile hope — that correcting some mistakes in economic theory could help show us a way out of some problems civilization now finds itself in. In fact, the Azimuth Project has its origins in my old economics diary.

Right now we’re having a little conversation about mathematical economics. Maybe you’d like to join in!

Miguel wrote:

I’ll be teaching parts of two courses on mathematical finance and financial risk management in a ‘Mathematical Engineering’ MSc programme at the Universidad Complutense here in Madrid.

I replied:

Cool! Has the way people teach these subjects changed any since the economic crisis? I would hope so…

Miguel replied:

I don’t think it has.

First of all, these courses are mostly technical, as part of a Master’s programme intended to teach students what practitioners do in practice. I don’t think criticizing the foundations is part of the programme.

But you may have noticed (for instance, if you follow Krugman in the NYT) that the economic establishment has been very resistant to recognizing that the crisis is an empirical refutation of neoclassical economics. This crisis doesn’t fit within the conceptual framework of NCE, but that’s not a problem, since they can just call it an “external shock” and continue pretending that the economy will trend to equilibrium from its new perturbed position. One of my related jokes is that the recession part of the economic cycle is treated as an outlier.

And this is not to speak of mathematical finance, where the situation is even worse. Academics still think that the best way to manage a new risk is to quantify it, define an index, and then create derivatives markets to trade it. In other words, turn all risk into market price risk and then push it all to the least solvent participants on the periphery of the financial system.

I think there is good progress being made by non-mainstream economists. Notably Steve Keen – see here:

• ARGeezer, Steve Keen’s dynamic model of the economy, European Tribune, 23 September 2009.

One of the problems with economics that Steve Keen complains about is that economists generally don’t know much about dynamical systems. I doubt they know what the Lotka-Volterra equation is, let alone understand it (if you discretize it, the predator-prey model displays chaos just like the logistic equation). Their model of the economy seems to be basically the Clausius-Clapeyron equation. I also doubt economists know about chaos in the logistic equation, even if they know about logistic growth models, which may not generally be the case either.
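Here’s a minimal sketch of that logistic chaos; the parameter values below are only illustrative:

import numpy as np

def logistic_orbit(r, x0, n):
    """Iterate the logistic map x -> r*x*(1 - x) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

# At r = 2.8 the orbit settles down to a fixed point, but at r = 4.0 it is
# chaotic: two orbits starting 10^-9 apart decorrelate within a few dozen steps.
a = logistic_orbit(4.0, 0.2, 50)
b = logistic_orbit(4.0, 0.2 + 1e-9, 50)
print(np.max(np.abs(a - b)[:20]), np.max(np.abs(a - b)[20:]))   # tiny, then order 1

Two orbits that start a billionth apart end up completely uncorrelated within a few dozen iterations: the sensitive dependence on initial conditions that the discretized predator-prey model shares.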

I believe the proper way to look at macroeconomics is as a flow network, and as such, ideas from category theory may be useful, at least to organize one’s thinking.

The economy is a network whose nodes are “agents” and whose links are “economic relations”. Economic relations are flows: goods and services in one direction, and money in the other (opposite categories via arrow reversal?).

Each node also has a balance sheet: assets and liabilities, and it’s the liabilities that are the key here, because they are mostly monetary. But they are also intertemporal. When you buy on credit, you get a physical asset today in exchange for the promise of cash at a later date. Presumably, the discounted present value of that future cash covers the value of the asset today, and that’s what you book as an asset or a liability. But the asset/liability accrues interest over time, so its value changes on the balance sheet, and every so often you still need to make an actual transfer of cash along an edge of the network. And when IOUs become tradeable (you issue me a bearer-IOU which I can then give to someone else who trusts your credit) they become money, too. And the relative variations in the prices of all these different kinds of money, their liquidity, etc., are key to understanding a recession like the one we’re in, or the growth (ehm, bubble) phase of a business cycle.
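A minimal numerical sketch of that booking, with invented numbers: a promise of 10,000 in cash due in two years, discounted at a 5% annual rate, books today at about 9,070, and the booked value accretes toward the face value as the date approaches.

# Present value of C units of cash due in t years at annual discount rate r,
# shown as the delivery date approaches (all numbers invented).
C, r = 10_000.0, 0.05
for t in [2.0, 1.0, 0.0]:
    print(t, round(C / (1 + r) ** t, 2))   # 9070.29, 9523.81, 10000.0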

I don’t have a theory of this, but I keep thinking in these terms and one thing that seems to come out of it is a sort of “fluctuation-dissipation” relation between market fluctuations in prices and trade volumes, the creation of money-as-debt, and inflation. Because nodes abhor insolvency, fluctuations in cash flows lead to the creation of IOUs, which inflate the money mass. With the analogy inflation ~ dissipation, you get a sense that the more intense the market-induced cash flow fluctuations, the higher the inflation rate of the money mass, through the creation of money-as-tradeable-credit.

But none of this is mainstream economics, though there is an active community around “modern monetary theory”, “chartalism” or “monetary circuit theory” working on similar ideas.

But “Dynamic Stochastic General Equilibrium” is not going to cut it.

Anyway, maybe I could do a guest post on your blog about these things… It’s all very inchoate as you can see.

I also think the parallels between these economic flow networks, and ecosystems and climates as matter/energy flow networks, are important. Such dynamical systems are a relatively unexplored area of mathematical physics – it’s just too difficult to say anything general about them!

Best,
Miguel

Miguel allowed me to post this exchange, noting that he could fill in gaps or moderate excessive claims in the comments. It would be nice if together we could all figure out how to take his thoughts and make them a bit more precise.

In week309 I plan to give an explanation of the Lotka-Volterra equation, based on work Graham Jones has done here:

• Quantitative ecology, Azimuth Project.

I’m also dying to talk about flow networks in ecology, so it’s nice to hear that Miguel has been thinking about them in economics.

But here’s a basic question: in what sense do economists model the economy using the Clausius-Clapeyron equation? Is the idea that we can take this equation and use it to model economic equilibrium, somehow? How, exactly?

165 Responses to Mathematical Economics

  1. Robert Smart says:

    Nice stuff…
    1. There are also flows of “sentiment”, which influence saving v spending, stock v bonds, etc.

    2. Please consider Liebig’s law, recently linked to Economics by Gregor MacDonald. After 200 years where the thing in short supply was skill-weighted labour, we may be entering a period where the thing in short supply is fuel for transport. At any rate I think flows in various things need to be considered and the interaction between them.

    3. [Off topic] My New Year’s resolution is to have another go at selling the idea that “The subject matter of Mathematics is how to think clearly about problems (mostly excluding human interaction issues like culture)”. Teachers and students are hopelessly confused by an education system that treats mathematics as a collection of facts (about Platonic entities) which is sometimes useful in the real world. My definition will give Mathematics its rightful place in the core of a modern education. I’m not going to make any progress until I can find a real Mathematician to endorse the idea. I’m open to discussion on the subject. My email is robert.kenneth.smart@gmail.com. Or comment on my blog post on this at Mathematics is “Thinking clearly about problems”.

    • John Baez says:

      Robert wrote:

      I’m not going to make any progress until I can find a real Mathematician to endorse the idea.

      I hereby endorse your idea.

      When I go back to UC Riverside in the fall of 2012 and start teaching math again, I’m going to teach it in a new way, informed by everything we’ve been discussing on this blog. I think the kids will enjoy it. I never taught math as a collection of ‘facts’, and that’s probably why the students liked my classes, but now I’m more keen on real-world examples that illustrate the big problems facing our civilization, rather than examples of the sort that pure mathematicians (like my former self) most enjoy.

      Sometime before that, I plan to write a paper with the mild-mannered title “How Mathematicians Can Save the Planet”. I’ll put drafts here, and I’d appreciate your comments.

      • Curtis Faith says:

        I hope you are going to address the macro-level issues.

        For example, one of the major ways that mathematicians can save the planet is by highlighting the ways that our (by “our” I mean Western-style democracies) current form of representative democracy no longer fits the informational realities of 21st-century society.

        If you model informational complexity, and then show how system-level complexity grows with increasing communication between decision nodes even when the complexity of the individual nodes increases only modestly, you can show why some problems that used to be tractable no longer are.

        There is too much complexity in today’s world of hyper-specialization by experts for any one representative of even a small region or district to understand what they are voting for and what it means most of the time. They have to rely on experts and too often these experts are so biased their decisions are wrong or worse.

        A governmental hierarchy fails as it concentrates the requirements for understanding instead of distributing them. We end up with single points of failure for major problems. This is stupid from a risk management perspective. It also concentrates all the power as you move up the hierarchy.

        These are both concepts that are easy to show quantitatively and to model. It is an idea that has real-world potential and would be relatively easy to explain.

        Some solutions to the hierarchical complexity problem are easily understood in terms of ideas from computer science, network theory, and biology. One potential solution that comes from this is the splitting of democratic representation. I, for one, would love to be able to have you, John Baez, and a few other select scientists represent me for science-related policy issues instead of the local bonehead Senator or Congressman. For economics, I’d love to be able to pick one to five representatives as well. Same with healthcare, or commerce.

        Pushing down the decisions to more localized nodes is another way—and perhaps a better one even if more politically unlikely—to reduce the complexity. Good companies are not run hierarchically, nor are complex organisms. We can’t consciously shut down our hearts, for example. We don’t have to tell individual cells how to move our fingers to type. We don’t have a massive committee-authored policy manual for typing that has been duly approved by nine levels of management.

        I am very much looking forward to your paper. I hope that you keep it simple enough that non-mathematicians will be able to grasp the points in it and that it will be something I can pass on to others who are smart but not mathematicians.

        • John Baez says:

          Curtis wrote:

          I am very much looking forward to your paper. I hope that you keep it simple enough that non-mathematicians will be able to grasp the points in it and that it will be something I can pass on to others who are smart but not mathematicians.

          I want to go around giving talks on this subject, and get good at it, and then write a paper about it — or even better, several papers, aimed at different audiences.

          It’s a very nice subject for a math colloquium talk, of the sort that university math departments tend to have once a week — the kind of talk that’s supposed to be comprehensible to all the professors and grad students.

          What appeals most to professional mathematicians could be drastically different than the ideas you just suggested. The ideas you just suggested are incredibly important, but when you talk to most mathematicians about ‘government’, or ‘representative democracy’, they tend to quickly fall asleep unless you can show them a simple model that raises interesting math questions. It sounds like you have ideas for some, but I’d need to see them in detail and understand them well to get mathematicians excited about them.

          What pure mathematicians really want to know is: what are some cool conjectures I could turn into theorems? They would put endless energy into saving the planet if they could save the planet by proving a theorem.

          There are also many mathematicians who care a lot about teaching, and many of them want to know: What are some new things I could teach in my classes? But not too new, since that can be a lot of work and hard to do right.

          There are also industrial and applied mathematicians — but alas, I’ve spent less time with them, so I don’t know as much about what gets them excited. I can imagine it, but I don’t know it from experience.

          In America there’s the American Mathematical Society, which focuses mainly on mathematicians who love to prove theorems. There’s the Mathematical Association of America, which focuses mainly on mathematicians who care about teaching. And there’s the Society for Industrial and Applied Mathematics, which is pretty self-explanatory. I can imagine wanting to write a separate paper for each of these groups.

          It’s easy for me to imagine how industrial and applied mathematicians can do things that could help save the planet.

          It’s even easier for me to imagine ways that teaching math better could help save the planet. Robert Smart is excited about this. When I get back to teaching I’m going to try some experiments.

          It’s a bit harder to imagine really good ways that proving theorems can help save the planet. However, I believe that getting pure mathematicians interested in questions that are even slightly related to practical problems is a good thing. Pure math is never at the front line of any practical battle, but over time it can have an effect.

          This is one reason I’m starting to talk a bit about stochastic differential equations and very simple models of ecosystems, weather and climate on This Week’s Finds.

      • Robert Smart says:

        Planning of this project needs some thought. Experience suggests I won’t get it right without some feedback and discussion. Of course the project will have its own discussion forum(s) eventually, but before getting that far: Would it be OK to start a discussion thread in the Azimuth Forum?

        While I may have my own ideas on the implications for what to teach and how to teach it, it will be wise for me to leave that to others. What I want to focus on is the need to get teachers and students to understand what they are trying to teach/learn, and for educational administrators, parents and government to understand where Mathematics fits into modern education and how it relates to other subjects.

        I certainly don’t want to downplay the value of professional mathematicians following their instincts to investigate problems in Mathematics itself, such as generalizing so that results in one area have expanded application. This has a long history of having unexpected value for real world problem solving. And even when it doesn’t, Mathematics is one of humanity’s great cultural activities.

        • John Baez says:

          Robert wrote:

          Would it be OK to start a discussion thread in the Azimuth Forum?

          Sure!

          Everyone should visit the Azimuth Forum every day and read the incredibly cool conversations we’re having over there, and join the forum, too!

          But since not everyone does yet, it would be even better if I took what you wrote on the Azimuth Forum and — if it seems sufficiently well-written and attractive — posted it over here, too!

      • Miguel says:

        I never taught math as a collection of ‘facts’, and that’s probably why the students liked my classes, but now I’m more keen on real-world examples that illustrate the big problems facing our civilization, rather than examples of the sort that pure mathematicians (like my former self) most enjoy.

        If my experience as a TA at UCR is any guide, that is unfortunately not going to work.

        I remember working with linear-algebra-for-business students on matrix algebra exercises involving input-output models with 2×2 Leontief matrices. I think I was the only one who understood what a Leontief matrix was. And in fact, I have that course to thank for introducing me to an important concept in descriptive macroeconomics.
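        A minimal sketch of such an exercise, with invented numbers: writing A_{ij} for the amount of good i used up to produce one unit of good j, the gross output x needed to meet final demand d solves x = Ax + d, i.e. x = (I - A)^{-1} d.

        import numpy as np

        # A[i, j] = units of good i consumed to produce one unit of good j
        # (the matrix of technical coefficients in a Leontief input-output model).
        A = np.array([[0.2, 0.3],
                      [0.4, 0.1]])
        d = np.array([10.0, 20.0])   # final (consumer) demand for each good

        # Gross output must cover intermediate use plus final demand:
        # x = A @ x + d, so x = (I - A)^{-1} @ d.
        x = np.linalg.solve(np.eye(2) - A, d)
        print(x)   # gross output per sector, here [25.0, 33.33]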

        In the 3+ years I taught there, almost every time an application was introduced in a math class in order to “motivate” or “illustrate” a topic, the students had more trouble with the application than with the math. Because the students had heterogeneous backgrounds, what constituted a useful “motivating example” differed from student to student. In that sense, office hours were great. And when students came to my office hours I did ask them what their major was, so I could adapt my examples to their existing knowledge. There’s limited time and little use developing an entirely new qualitative topic in order to illustrate a minor point of theory.

        If you want to illustrate or motivate differential equations, the harmonic oscillator works for a physics student, but for a chemist you may want to use chemical kinetics, and for a biologist, population dynamics.

        • John Baez says:

          Miguel wrote:

          If my experience as a TA at UCR is any guide, that is unfortunately not going to work.

          If my experience as a teacher at UCR is any guide, it’s gonna work. When I get excited about something, I can get most of the students excited about it. I prance around the stage and make a fool of myself and they love it.

          Whenever I teach differential equations I teach them about population growth and logistic curves, and they like that. Heck, it involves bottles of flies, and sex, and even flies having sex: what could be better than that?

          Q: How many flies does it take to screw in a lightbulb?

          A: Only two.

          But recently some mathematician here on Azimuth — or maybe it was the n-Café; sorry, I forget who! — wrote that some colleagues of his were shocked to realize that overfishing could drive a fish population completely extinct. Meaning, to take a really simplified model, that if

          y' = k y - c

          and k, y(0) > 0 but c is sufficiently large, y(t) will eventually hit zero.

          And he was shocked that they were shocked! These colleagues understood ordinary differential equations, but somehow hadn’t absorbed this point.

          And I realized: wow, I should talk about this in class!

          You see, I’ve been talking about stupid equations like

          y' = - k y + c

          with k, c > 0, and giving stupid examples like pouring radioactive waste at a constant rate into a dump, which is not something my students are likely to need to know about, when I could be telling them about cool equations like

          y' =  k y - c

          with k, c > 0, and giving cool examples like overfishing! I could tell them about bluefin tuna, and how ICCAT — the International Commission for the Conservation of Atlantic Tuna, also known as the International Conspiracy to Catch All Tunas, has repeatedly ignored its own scientists’ pleas to ban the catching of bluefin tuna!

          And so on.
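          Here is a minimal numerical sketch of that point; the constants are invented. The closed-form solution of y' = ky - c is y(t) = (y(0) - c/k)e^{kt} + c/k, so as soon as the catch rate satisfies c > k y(0), the stock hits zero at a finite time:

          import numpy as np

          k, c, y0 = 0.5, 30.0, 50.0   # growth rate, constant catch rate, initial stock
          # Since c/k = 60 > y0 = 50, the coefficient (y0 - c/k) is negative and the
          # exponential drags y down to zero at t* = (1/k) * log((c/k) / (c/k - y0)).
          t_star = (1.0 / k) * np.log((c / k) / (c / k - y0))
          t = np.linspace(0.0, t_star, 6)
          y = (y0 - c / k) * np.exp(k * t) + c / k
          print(np.round(y, 2))     # the stock declines, faster and faster...
          print(round(t_star, 2))   # ...and is gone at t* ~ 3.58, despite k > 0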

        • Florifulgurator says:

          John Baez wrote:

          I prance around the stage and make a fool of myself and they love it.

          Yeah. Making a fool out of oneself is something not many math teachers can do. Getting “stuck” and asking the audience for help – a horror for most teachers, but a great tool to make the students relax and realize the prof is a mortal, too. Another thing I enjoyed when teaching is abbreviating stuff as e.g. \sum Bla_{i,j,k}.

        • Web Hub Tel says:

          Another recent example is “Mathematical model shows how groups split into factions”:
          http://www.news.cornell.edu/stories/Jan11/factions.html

          Like John’s overfishing equation, this is at root a very simple example. Looking at the paper, the model is
          dX/dt = X*X

          The solution bifurcates, and the Cornell researchers claim that this explains how groups split into factions. Quite a claim, but intriguing in its simplicity.
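          A minimal sketch of that, assuming (this is my reading of the paper) that X is a symmetric matrix of pairwise attitudes evolving by the matrix equation dX/dt = X², with random invented initial data:

          import numpy as np

          rng = np.random.default_rng(0)
          n = 6
          X0 = rng.normal(size=(n, n))
          X0 = (X0 + X0.T) / 2   # symmetric initial matrix of pairwise attitudes

          # dX/dt = X @ X has closed-form solution X(t) = inv(I - t*X0) @ X0, which
          # blows up at t* = 1/lambda_max(X0) (assuming the top eigenvalue is positive).
          lams, vecs = np.linalg.eigh(X0)
          t_star = 1.0 / lams[-1]
          X = np.linalg.solve(np.eye(n) - 0.99 * t_star * X0, X0)

          # Near t*, X is dominated by the rank-one piece built from the top
          # eigenvector v, so the signs of v's entries label the two factions.
          v = vecs[:, -1]
          print(np.sign(v))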

    • Graham says:

      Some more support for Robert Smart’s idea.

      • Robert Smart says:

        Thank you! As he said “Math makes sense of the world”. So I just want to take that a bit further, and make “clear problem solving” the definition. Now that there is a finite chance this will take off, I’m trying to do some (non-mathematical, human interaction related) thinking about how to do it, including the “why wasn’t I consulted” problem.

  2. Rod Carvalho says:

    My interest comes from the hope — quite possibly a futile hope — that correcting some mistakes in economic theory could help show us a way out of some problems civilization now finds itself in.

    Although this would be a most noble effort, I believe it is somewhat naive, as it is based on unrealistic assumptions. My opinion is that economic theory serves to legitimize policy more than it serves to guide it. And economists have less power than people tend to think.

    When physicists / engineers (among others) look at social systems for the first time, they see inefficiency everywhere. The temptation to suggest ways of correcting what appears to be “wrong” is quite high. If natural systems attain a level of perfection that is often amazing, why do social systems seem so awfully “broken”?

    My belief is that the reason is the following. We, outsiders, assume that social systems are trying to maximize a certain utility function, when, in fact, they are maximizing another utility function. Social systems, when one looks closely, are actually near-optimal: they are designed to maximize the utility of those who have power, money, and influence.

    A new economic theory will emerge. But it will take time. If China’s model proves to be more successful than the American model, then economics will change, as interest groups will have new priorities and new goals.

    • John Baez says:

      John wrote:

      My interest comes from the hope — quite possibly a futile hope — that correcting some mistakes in economic theory could help show us a way out of some problems civilization now finds itself in.

      Rod wrote:

      Although this would be a most noble effort, I believe it is somewhat naive, as it is based on unrealistic assumptions. My opinion is that economic theory serves to legitimize policy more than it serves to guide it.

      Yes, that’s why I called my hope “quite possibly futile”, and that’s why I quit my economics diary after a while.

      At first I was hoping that economics could be a kind of ‘Achilles heel’ of the current world system: a place where a theorist like me could make a significant impact.

      But after I saw how many people had already made the criticisms of neoclassical economics that I was beginning to dream up, I became less optimistic. I realized that economics is inevitably warped by a powerful force field: its role in enhancing the wealth and power of the already wealthy and powerful.

      Yet hope springs eternal. There are economists trying to really understand what’s going on, so I think we might make a little progress here in doing that, even if changing what’s going on is too hard, like trying to make water flow uphill.

      Clearly part of understanding economics is understanding the economic forces that motivate economists.

      I urge people to check out the Post-Autistic Economics Movement. Their site lists some relevant quotes from winners of the Bank of Sweden Prize in economics. It’s often misleadingly called the ‘Nobel prize’ in economics, but it was not one of the prizes Nobel included in his will. The fact that this prize, funded by a bank, managed to attach itself to the other Nobel prizes is a great example of what we’re talking about! Nonetheless, some of the winners show a certain discontent with economics as currently practiced. Even some you might not expect:

      “…economics has become increasingly an arcane branch of mathematics rather than dealing with real economic problems”

      Milton Friedman

      “[Economics as taught] in America’s graduate schools… bears testimony to a triumph of ideology over science.”

      Joseph Stiglitz

      “Existing economics is a theoretical system which floats in the air and which bears little relation to what happens in the real world”

      Ronald Coase

      “We live in an uncertain and ever-changing world that is continually evolving in new and novel ways. Standard theories are of little help in this context. Attempting to understand economic, political and social change requires a fundamental recasting of the way we think”

      Douglass North

      “Page after page of professional economic journals are filled with mathematical formulas […] Year after year economic theorists continue to produce scores of mathematical models and to explore in great detail their formal properties; and the econometricians fit algebraic functions of all possible shapes to essentially the same sets of data”

      Wassily Leontief

      “Today if you ask a mainstream economist a question about almost any aspect of economic life, the response will be: suppose we model that situation and see what happens… modern mainstream economics consists of little else but examples of this process”

      Robert Solow

      • Phil Henshaw says:

        What physical scientists could greatly contribute to economics is an understanding of economies as physical processes rather than information systems. I’ve studied the difference between physical processes and information systems for many years. Information models can project anything at all; but when you drive a physical system that way, the norms your information model relied on don’t hold… and everyone is *so surprised*, and tends to blame someone else or the deity, rather than the natural difference between physical systems and models.

        I don’t hear others chiming in on my repeated mention of this problem: that economists and financial planners don’t know anything about the conservation of energy, or recognize the mountain of useful explanatory principles for material processes that comes with it. For one, energy conservation gives you response-time limits for every stimulus-response process. “Systems can’t go any faster than just in time” is one general way to say that.

        Some of these environmental variables, which only a physical scientist would understand, seem to have critical value for present-day economic decision making, *IF* someone were to drive home the point that the economy is a physical system…

      • Ivan Sutoris says:

        John Baez wrote:

        I realized that economics is inevitably warped by a powerful force field: its role in enhancing the wealth and power of the already wealthy and powerful.

        To the extent that economics is about controversial issues in the society, there will always be some ideology, especially when discussing policy issues (just like there is ideology in discussing/denying climate change). However, I think that among academic economists, you would find much less ideology than you seem to think. And on the other hand, it’s not like the critics are ideology-free either – many simply dislike mainstream economics because of their leftist and anti-free-market convictions.

        Nonetheless, some of the winners show a certain discontent with economics as currently practiced. Even some you might not expect:

        Aside from some of these guys being on quite different sides of political spectrum (like Friedman / Stiglitz), they all seem to say that there is too much math and formal modeling in economics (I don’t think they’re right, but that’s beside the point). But most of the discussion here is about applying even more sophisticated mathematics (nonlinear dynamics, complexity, chaos theory etc.) – so which one is it then? You can’t have it both ways :)

        • Phil Henshaw says:

          Ivan, I’m not sure who you’re quoting, but the reality, far stranger than fiction, seems to be that, albeit unintentionally:

          I realized that economics is inevitably warped by a powerful force field: its role in enhancing the wealth and power of the already wealthy and powerful.

          …is quite literally true. It’s for amazingly simple and obvious procedural reasons.

          Why the system in fact works that way does not come from value judgments on anyone’s part, it seems. It’s also peculiar that so few others have zeroed in on it before. The central motive of economic regulation is to stabilize interest rates, financial markets and prices: not to have them rise or fall too much. That provides a guarantee of positive returns on investment, and that people who want work can find it.

          The queer thing is that guaranteeing stability in the financial markets also provides an assurance that those with money will make more in proportion to how much they have, at the stable rate of financial returns. The savings of people who can afford to leave them invested may double every 10 years, or 15, or 20, but the guarantee is that their earnings will keep growing exponentially.

          What the regulators try to do for other people, the ones who don’t have enough to accumulate wealth by compound investing, is guarantee them a job, and a rate of earning in proportion to the time they spend at work. That’s a linear rate of earning.

          There are some more details, but what amounts to “the social contract” of modern society is that the rich are guaranteed earnings that grow exponentially, while labor is guaranteed earnings that don’t grow unless the economy is consuming more of the earth exponentially…

          The assurance that part of the system will have continual exponential earnings, simply for giving others the use of their money, has to be fulfilled from the earnings generated by the creativity of people inventing and working technology and finding ways to be creative together, who are paid linearly for it. That means there is an inherent instability built into the system, which regulators try to stabilize.

          The best analysis seems to be that the historically repetitious crashes of the economy, which regulators keep desperately searching for a way to prevent, are necessary to relieve the strains that naturally accumulate. The cause lies in deep conceptual errors in the basic social contract the economy represents, errors that just weren’t noticed as the economy developed.

          Keynes and Boulding complained about it a lot, and are the only two I know of who also saw it as a procedural defect in our self-interest system design, rather than a political one, but they failed to persuade those around them. To me the social dimension is not pretty, but it’s also not a matter of one social class having contempt for another. It’s one social class having contempt for nature.

          The mistaken thinking of people with the most at stake seems to be that if they pay enough money they can make nature do whatever they like, and that’s just not so.

        • streamfortyseven says:

          Phil, you write: “What the regulators try to do for other people, the ones who don’t have enough to accumulate wealth by compound investing, is guarantee them a job, and a rate of earning in proportion to the time they spend at work. That’s a linear rate of earning.”

          “Having a job” means having the opportunity to sell your labor and creativity to those who have sufficient capital to start a business. When figuring profits, the business owner looks to minimize raw material cost and the cost of labor. Most businesses try to keep as profit at least 75% of the value each employee produces: you produce (say) $400,000 in value, and get a pay package (including benefits, unemployment and workers comp, and FICA taxes) worth $100,000. In other words, employees are a profit center for the employer as long as their productivity is maximized.

          Government action is almost always geared to help reduce the cost of labor by any means possible, including allowing immigration, “illegal” if necessary, breaking unions, setting a minimum wage as low as possible, helping companies to outsource labor, setting up anti-tariff “free trade agreements” and other such things. Most economists are paid to help provide a “theoretical” basis for this process, and to serve as advocates for its continuation and general acceptance.

          Corporations have a legal duty to their shareholders to work for their benefit, and part of this is to show sufficient growth in earnings to continually increase the value of their stock. The general expectation has been expressed in the “rule of eight”: that you should see your capital double every 8 years. Of course, continuing this is impossible given the resource scarcities we are beginning to see, but the economists have the idea that the supply of oil will increase to fulfill the demand, and so the laws which regulate corporate duties to shareholders, amongst many other things, have not changed. If we continue along this path, then revolution, or at least grave societal instability, will be the inevitable outcome of the great imbalances of wealth and political power that must result.

        • Ivan Sutoris says:

          Phil Henshaw wrote:

          The queer thing is that in guaranteeing stability in the financial markets also provides an assurance that those with money will make more in proportion to how much they have, at the stable rate of financial returns.
          […]
          What the regulators try to do for other people, the ones who don’t have enough to accumulate wealth by compound investing, is guarantee them a job, and a rate of earning in proportion to the time they spend at work. That’s a linear rate of earning.
          […]
          The mistaken thinking of people with the most at stake seems to be that if they pay enough money they can make nature do what whatever they like, and that’s just not so.

          So you are saying that richer people can get richer by investing? But that’s always been the case. You seem to be implying that without regulation, there would not be exponential compounding – I don’t see why.

          Also, wages increase as well (at least in nominal terms). If you were right, we would observe that over time an increasingly large share of total income is attributable to capital, with the labor share of income steadily dropping. But the labor share is relatively constant over time.

          Finally, is exponential growth unsustainable with finite resources? That depends on where the growth is coming from – if it is based on increasing productivity, new more efficient technologies, etc., then it may be sustainable. And from the data we know that productivity is increasing over time (although I’m not saying that there are no environmental problems ahead).

        • Phil Henshaw says:

          Streamfortyseven, your description of what government tries to do is more complete in several ways than mine, but it generally fits well with the sincere effort to maximize profits that I described. Employees are still earning by regular units (linearly) and investors by regular %’s (exponentially).

          What first made me extremely suspicious of that was noticing that a % is a measure obtained by resetting the unit of measure to 1 every time you take a measurement. That’s not a proper way to measure a quantity of anything. It makes comparing a labor wage with an investor wage like comparing a quantity to an escalator.

          The question is then: where does the multiplying money, and the real value that it represents, come from? It’s the value of a whole system of things, in excess of the value of its parts. Does it come from the investor giving someone else permission to use the investor’s savings as capital, to pay for putting together a diverse network of talents and resources? Or in the end does it come mostly from the self-organization of the talents learning how to do their jobs and work with the resources together?

        • Phil Henshaw says:

          Ivan, just to address the first part of that, you’re saying:

          So you are saying that richer people can get richer by investing? But that’s always been the case. You seem to be implying that without regulation, there would not be exponential compounding – I don’t see reason why.

          You help bring out the real point. Thinking of money saved in the past as being owed an ever-multiplying real rate of return in the future is indeed a practice inherited from long ago. It has no basis in science, except as a possible feature of a temporary system. Economic science is built on it as a permanent feature, seemingly because it was common practice, not because perpetual multiplying machines made any sense. They’re actually impractical as a design for organizing a whole planet…

          Compounding is not required by regulation, but it seemed to need regulation of markets and prices to partly stabilize it. It appears that regulators, from the start, however many hundred years back you look, assumed the problem was the bad judgements that precipitated collapses, rather than the baseless escalating expectations the system so agreeably provided everyone.

          The problem with bubbles popping, as I see it, is not the weak spot in the containment. It’s also not the rate at which you put band-aids on them. It seems to be the pump inflating the pressure, which you fail to turn off when the containment becomes rigid and inflexible and then rips to shreds systemically, beginning at some unguarded fault.

          You probably know the feature of exponential math that the closest linear approximation, *no matter how small* the positive exponent, is a vertical line at the point in time where you ask the question. That seems to be why economies have endlessly had crises: there is a natural conflict between the handy units of measure and the finite limits on rates of change for anything physical.

        • streamfortyseven says:

          @PhilHenshaw: You ask: “It’s the value of a whole system of things that is in excess to the value of its parts. Does it come from the investor giving someone else permission to use the investor’s savings as capital, to pay for putting together a diverse network of talents and resources? Or in the end does it come mostly from the self-organization of the talents learning how to do their jobs and work with the resources together?”

          I’d say it’s the latter, for the most part; the added value arises from the energetic input of talents working synergistically together to produce something greater than any individual could produce on his or her own. I speculate that any well-run business is dependent on people working well in teams and combining and discussing/arguing their mental output to evolve greater added value than could be attained through individual effort.

        • streamfortyseven says:

          @Ivan: You write: “Also, wages increase as well (at least in nominal terms). If you were right, we would observe that over time, increasingly larger share of total income is attributable to capital, with labor share of income steadily dropping. But labor share is relatively constant over time.”

          Figuring in 1980 dollars, it’s well-known (http://hdr.undp.org/en/reports/global/hdr2010/papers/HDRP_2010_36.pdf) that the labor share of income has actually dropped in recent years, from 1980 onwards to the present.

          Also, you write: “Finally, is exponential growth unsustainable with finite resources? That depends on where is the growth coming from – if it is based on increasing productivity, new more efficient technologies, etc., ”

          Common sense would suggest that this is not true: you come up against resource bottlenecks, “limiting reagents” so to speak, in a world with finite resources where the cost of extracting those resources increases over time. Here are a few references; the first gets at the problem in a rather circumspect way: http://arxiv.org/abs/0906.0568 and the second addresses it directly: http://www.thesocialcontract.com/artman2/publish/tsc1401/article_1187.shtml

          Finally, you write “And from the data we know that productivity is increasing over time.”

          This really depends on how you define “productivity”; if it’s the result of minimizing direct labor cost by laying off employees, shifting greater workloads onto the remaining workforce, outsourcing to cheap-labor countries, dropping benefits and suchlike, then yes, for the short term and for those businesses which do such things, “productivity” does increase: the quarterly earnings report and the stock price may go up as intended. The long-term effect may be a drastic decrease in real productivity, as the quality of products declines with the inevitable increase in stress, and possibly the outright destruction of the workforce whose synergistic efforts are required to add the value from which profit may be derived. See http://seekingalpha.com/article/192105-increased-national-productivity-better-for-investors-than-for-workers and comments.

        • Phil Henshaw says:

          Streamfortyseven: so we agree, but economists and investors all disagree with us. They consider the value added by the self-organization of upper-, middle- and lower-level employees, figuring out on their own how to work creatively together in a complex environment, to be exclusively owed to the investor for the passive service of permitting the use, and return, of their savings. One consequence is that the investor then assumes an absolute right to ever-multiplying passive earnings that way. The other big consequence is that they assume they have no obligation of any kind to use the value created by the whole system of parts in its interests.

          Notably absent is any “tragedy of the commons” provision: that uses of growing investment should avoid at all costs bringing about the collapse of the economic environment generating the earnings. The commitment to maximize profit without exception, for people with passive earnings and no one else, is actually an assurance of the reverse. It means people and businesses can’t stay in business unless they keep up with the leaders in multiplying their resource impacts.

      • John Baez says:

        Phil wrote:

        Ivan, I’m not sure who you’re quoting, but the reality, far stranger than fiction, seems to be that, albeit unintentionally:

        I realized that economics is inevitably warped by a powerful force field: its role in enhancing the wealth and power of the already wealthy and powerful.

        …is quite literally true. It’s for amazingly simple and obvious procedural reasons.

        Ivan was quoting me; I wrote that remark earlier in the discussion here.

        Please, folks: cite the person you’re quoting. It gets confusing otherwise.

        I am pleased to hear that, “albeit unintentionally”, I said something that is “quite literally true”. Sometimes I even do it on purpose.

        • Phil Henshaw says:

          Checking the context, you did seem to say it referring to the inherent design of the economic models and equations. That’s where I started, before narrowing it down eventually to the simple procedural reason I pointed to. The still deeper problem is why this simple error in logic isn’t noticed more often. I think it’s that economics is a pure information theory about money, so it sees no problem in managing limitlessly complex and changing systems, or in funding exponential earnings from finite talents and resources, etc.

  3. Justin says:

    Have you read Johannes Voit’s book, The Statistical Mechanics of Financial Markets? I highly recommend it.

  4. Gen Zhang says:

    It seems that thinking about economics is a great pastime for physicists :-p Personally, I’ve been wondering about whether a kind of gauge symmetry exists in economics, and whether understanding it would help with reality. The gauge symmetry I’m proposing is literally the old Weyl \mathbb{R}^+ “gauge” symmetry, applied to the amount of money held by each participant; essentially, we notice that currency exchange rates are really what matters, as opposed to the actual amount of value a participant holds. In addition, this would also apply temporally as well as spatially, and subsume the understanding of inflation (which is, if we’re honest, quite a weird concept in NCE). However, in practice we can see that a microscopic model based on these ideas might need to fully incorporate the underlying network of currency (and possibly goods?) flows, which is clearly infeasible; in addition, this network clearly changes with time, in topologically non-trivial ways… (Time for LQG in economics!) Nevertheless, one might be able to argue that upon coarse-graining there are universal properties which survive. In addition, this extra constraint might help to understand/solve some global-properties problems.

    One question I haven’t really found the answer to, is whether economists already know this! I’m a physicist working in biology, so I have become accustomed to the idea that whatever I think of, the pretty clever people who have made the field have in fact thought of it already. I just need to know what they call it, and then a proper dialogue can begin!

    • John Baez says:

      Gen Zhang wrote:

      It seems that thinking about economics is a great pastime for physicists :-p

      Yes, we like to think we could do things much better than economists, due to our training, which doesn’t emphasize the virtue of modesty.

      Personally, I’ve been wondering about whether a kind of gauge symmetry exists in economics, and whether understanding it would help with reality.

      […]

      One question I haven’t really found the answer to, is whether economists already know this!

      This stuff might be relevant:

      • Samuel E. Vazquez, Simone Farinelli, Gauge invariance, geometry and arbitrage.

      • Simone Farinelli, Geometric arbitrage theory and market dynamics.

      • Gauge transforming Black-Scholes, Phorgy Phynance.

      They’re all about a scale invariance in economics, just like Weyl’s original gauge symmetry with gauge group \mathbb{R}^+, the multiplicative group of positive real numbers. Arbitrage opportunities exist when the connection has nonzero curvature; inflation can be swept under the rug by a gauge transformation.

      • Rod Carvalho says:

        Here’s one more paper:

        • K. Young, Foreign exchange market as a lattice gauge theory [pdf]

        Whose abstract is as follows:

        A simple model of the foreign exchange market is exactly a lattice gauge theory. Exchange rates are the exponentials of gauge potentials defined on spatial links while interest rates are related to gauge potentials on temporal links. Arbitrage opportunities are given by nonzero values of the gauge-invariant field tensor or curvature defined on closed loops. Arbitrage opportunities involving cross-rates at one time are “magnetic fields”, while arbitrage opportunities involving future contracts are “electric fields”.

        As a bonus, here’s another funny paper: On Partita Doppia. Category theory meets Accounting, essentially.

        • John Baez says:

          Thanks for the extra references, Rod!

          The remark

          Arbitrage opportunities involving cross-rates at one time are “magnetic fields”, while arbitrage opportunities involving future contracts are “electric fields”.

          is another way of saying something that’s emphasized in those papers I mentioned by Farinelli: arbitrage opportunities arise when the connection describing the parallel transport of money has curvature.

          And this, in turn, is just an overeducated way of saying that if you can carry money around a loop and wind up with more than you started with, you’re in luck!

          This could be a loop in space (“magnetic fields”), or a loop in spacetime (“electric fields”).

          It’s easy to imagine how to carry money around a loop in space, say from England to Germany to Spain to England. It’s a bit harder to imagine how to carry money from England to Germany today, then carry it a year forwards into the future, then carry it to England a year from now, and then carry it back in time to where you started in England today. But financiers have figured out how, and they can make money this way.
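          Here’s a minimal numerical sketch of the spatial case, with invented exchange rates; say pounds to euros to dollars and back. Taking logs turns products of rates into sums, so the log of the product of rates around a loop plays the role of the curvature through that loop, and a nonzero value signals an arbitrage opportunity:

          import numpy as np

          # Invented exchange rates: rate[(a, b)] = units of currency b per unit of a.
          rate = {
              ('GBP', 'EUR'): 1.17,
              ('EUR', 'USD'): 1.09,
              ('USD', 'GBP'): 0.80,
          }

          # Carry 1 pound around the loop GBP -> EUR -> USD -> GBP.
          loop = rate[('GBP', 'EUR')] * rate[('EUR', 'USD')] * rate[('USD', 'GBP')]
          print(loop)           # 1.0202...: about 2% profit per circuit, before costs
          print(np.log(loop))   # the 'curvature' (holonomy) through this triangle

          With consistent, arbitrage-free rates the product around every loop would be exactly 1: a flat connection.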

          Speaking of overeducated ways of talking: “partita doppia” is another name for double-entry bookkeeping. I’d actually run across this paper once before:

          • Piergiulio Katis, N. Sabadini, and R.F.C. Walters, On partita doppia.

          because it claims to give some examples of compact closed symmetric monoidal bicategories — certain mathematical gadgets that my student Mike Stay just happens to be writing his thesis about!

          However, this paper doesn’t define the concept of ‘compact closed symmetric monoidal bicategory’, and it says the proof that the claimed examples really are examples will appear in a later paper.

          Mike Stay and I are interested in these gadgets because of their applications to topological quantum field theory and computer science, not double-entry bookkeeping.

        • Todd Trimble says:

          I think a compact symmetric monoidal bicategory is easy to guess the definition of (at least for people who know what these words mean apart from each other): a symmetric monoidal bicategory such that for each object X there is an object X^*, a unit u: I \to X^* \otimes X, a counit e: X \otimes X^* \to I, triangulators, etc., so that X^* \otimes - becomes right biadjoint to X \otimes -. The bicategory of small enriched profunctors provides an example.

        • John Baez says:

          Todd wrote:

          I think a compact symmetric monoidal bicategory is easy to guess the definition of (at least for people who know what these words mean apart from each other)…

          Hi, Todd! I really don’t want to make a federal case out of this, but Mike Stay had a bit of work sorting out the full story, so it’s been on my mind…

          My main problem is that the fully general definition of “symmetric monoidal bicategory” can’t be instantly guessed from the definitions of “symmetric”, “monoidal” and “bicategory”. There’s a less general definition that can easily be guessed, where the pentagon, hexagon, etc. identities hold on the nose. But if that’s what’s meant, it would be nice to say so.

          The paper we’re talking about was apparently written in 1998, but the first published source I know for the fully general definition of ‘symmetric monoidal bicategory’ is the appendix of Paddy McCrudden’s 2000 paper Balanced coalgebroids. I think the same material appears in his 1999 thesis — but anyway, nothing by McCrudden is cited by the paper we’re talking about.

          We could also define a braided monoidal bicategory to be a tetracategory — as defined by you in 1995 — with one object and one morphism. With further work, one can get to the symmetric case. But again, none of this is mentioned in the paper we’re talking about.

          So, maybe the authors meant that the pentagon, hexagon, etc. identities should hold on the nose. And that’s probably true in their examples (though I haven’t checked all of them).

          I think Mike also turned up some other papers that mention examples of symmetric monoidal bicategories without explaining what they mean. So Mike’s been trying to straighten things out a bit — and then he’s going on to more interesting things, namely applying symmetric monoidal bicategories to problems in computer science.

        • Todd Trimble says:

          You guys probably also know the paper by Day and Street, Monoidal bicategories and Hopf algebroids, where there’s some discussion of symmetric monoidal bicategories. But okay, I won’t make a federal case out of it, except to say that it should be a degenerate type of hexacategory. :-)

        • John Baez says:

          Yes, we know that paper by Day and Street. It’s great, but it doesn’t give a fully general definition of symmetric monoidal bicategory.

          We’ll be waiting for you to write up the definition of hexacategory.

        • John Baez says:

          John wrote:

          It’s a bit harder to imagine how to carry money from England to Germany today, then carry it a year forwards into the future, then carry it to England a year from now, and then carry it back in time to where you started in England today. But financiers have figured out how, and they can make money this way.

          Miguel sent me an email explaining how this works:

          The answer here is that you can imagine loops in time in physics, and the way that happens is by turning a particle moving backwards in time into an antiparticle moving forwards, or in the case of, say, an Aharonov-Bohm type effect, taking two possible trajectories from A at t to B at t’ and realising that the phase difference is the same as a path integral around a loop going forwards from A to B along one path and backwards from B to A along the other.

          So, in economics, when we exchange two assets today against a back-exchange tomorrow, we can interpret it as a loop where we take my asset, give it to you, you take it to tomorrow, we exchange it tomorrow and I take it back to today.

          When calculating arbitrage conditions, this (path integrals around timelike loops) is precisely what happens, except it is unusual for people to think explicitly of a loop where part of the process goes back in time rather than the equality of two investment strategies both going forwards. However, there is a notion of going backwards in time. Assets “capitalize” forwards and get “discounted” backwards, most often “discounted to present value”.
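          Here’s a minimal numerical sketch of such a timelike loop, with invented numbers; it is just the textbook covered-interest-parity argument. Exchange pounds for euros at today’s spot rate, capitalize the euros forward one year, exchange back at the one-year forward rate agreed today, then discount to present value at the pound interest rate. If the product of the four ‘transport’ factors is not 1, there is an arbitrage:

          S = 1.17       # spot rate today: euros per pound (invented)
          F = 1.15       # one-year forward rate agreed today: euros per pound
          r_eur = 0.02   # one-year euro interest rate
          r_gbp = 0.04   # one-year pound interest rate

          # Loop: GBP today -> EUR today -> EUR in a year -> GBP in a year -> GBP today.
          loop = S * (1 + r_eur) * (1 / F) * (1 / (1 + r_gbp))
          print(loop)    # any value != 1 is free money one way around the loop
          # No-arbitrage (covered interest parity) forces F = S*(1 + r_eur)/(1 + r_gbp).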

        • Bruce Bartlett says:

          In Chris Schommer-Pries’s thesis, he writes down quite a general definition of symmetric monoidal bicategory. Not sure if this is what you’re looking for?

        • John Baez says:

          Hi, Bruce! I haven’t heard from you for a long time. Yes, Schommer-Pries has a completely general definition of ‘symmetric monoidal bicategory’, and Mike has found his paper very helpful. But Paddy McCrudden, a student of Street, was the first to write down this definition. He did it in his thesis. You can see it in the appendix here:

          • Paddy McCrudden, Balanced coalgebroids, Theory and Applications of Categories 7 (2000), 71-147.

      • Gen Zhang says:

        That’s wonderful! Alas, I’m not versed in the sociology — is this kind of stuff actually of interest to practicing economists (the kind that make money/policy)? Or is this still considered too abstract to be useful?

        (I’m pretty happy though that this idea actually works out!)

    • DavidTweed says:

      One of the speculations I’ve got about large scale economics is that there might be a lot less symmetry than people expect. (Not directly related to Gen Zhang’s idea, just this seems the place to mention symmetry.) The reason for thinking that comes from behavioural economics, which is basically a big set of observed behaviours/valuations which don’t accord with classical economics foundational assumptions about “homo economicus”. For instance, there’s the loss aversion effect that predicts “that one who loses $100 will lose more satisfaction than another person will gain satisfaction from a $100 windfall”, or the denomination effect where “people are less likely to spend larger bills than their equivalent value in smaller bills” (see cognitive biases link). One presumption behind classical economics that’s more general than an assumption of equilibrium is that all the agent level “axioms” give a coherent system at the macro level, and it wouldn’t surprise me if that were true for classical economics partly because the axioms are chosen in order to be coherent. In contrast, it would be very interesting to see what range of things happens if you empirically simulate very large networks of agents behaving according to behavioural economics: it wouldn’t surprise me to find that it displayed lots of incoherent macro-level behaviour, and certainly I’d expect lots of “intuitive” symmetries to actually be absent. I haven’t actually got any kind of model using this, but here’s some wikipedia links: behavioural economics and cognitive biases.

      The final irony is that these days this kind of criticism may not apply to financial markets like the bond market or stock exchanges because so much trading is done based on algorithms, or at least by humans using explicit mathematical models, and the people who write down the maths aren’t going to put in things like loss aversion, preferring simpler things like symmetric loss/gain functions. So there may actually be much more symmetry in these “artificial” areas than in general.

      • DavidTweed says:

        Incidentally, I’m not precisely sure what I mean by “incoherent” above :-) . I’m thinking of something along the lines of coherence/incoherence in term rewriting systems (eg, lambda calculus).

      • DavidTweed says:

        D’oh: my memory was faulty and I mean “confluence in term rewriting systems” rather than coherence immediately above.

      • Gen Zhang says:

        Now this I also find interesting. I believe the criticism of the rational-actors assumption has had a pretty good go now, and I don’t think (hope?) anyone really believes it. However, in macroeconomics, I think it’s worth taking a step back and comparing with known macro-physics.

        Specifically, the idea of mean-field theories and phases. No-one thinks that a mean-field theory captures the microscopic details of a situation correctly — but it does (hopefully) get the macroscopic relationship correct. We now understand this to be a consequence of renormalisation and universality, and stability of phases.

        With the recent financial market shifts it’s easy to forget that actually, for a long time, the established theories did a pretty good job!

        My point is that actually, in the end, it might not matter if psychology comes into it at the level of individuals. In fact, one could even say that the only trustworthy macroscopic results would be the ones which were independent of those kinds of assumptions. My view is that this kind of universality is the only saving grace that regulatory oversight has — at the macroscopic level, there are sufficiently stable phases as to make things predictable. If we needed to depend on massive, detailed simulation to predict overall trends, then it would never be a feasible option (see our efforts in molecular dynamics…)

        • DavidTweed says:

          Gen Zhang wrote

          With the recent financial market shifts it’s easy to forget that actually, for a long time, the established theories did a pretty good job!

          Actually there are two separate things:

          1. prior to the market shifts there were good economic/financial conditions of various sorts in various areas.

          2. whether the economic models were correctly predicting, understanding and shaping (1).

          1 is pretty much agreed, but I have nowhere near enough knowledge of economics predictions to know whether 2 is an accurate statement. (As is the common situation with machine learning, if a “severe downward movement” has a frequency of 1 in 25, then the constant “there is no severe downward movement” prediction is 96 percent correct.) That question is, to my mind, genuinely still open.

          at the macroscopic level, there are sufficiently stable phases as to make things predictable. If we needed to depend on massive, detailed simulation to predict overall trends, then actually it will never be a feasible option (see our efforts in molecular dynamics…)

          I think it depends on slightly different things: you only need to look on this blog to see how difficult and computationally intensive weather forecasting is, yet (even without its contribution to climate modeling) people agree it’s worthwhile. Likewise, I can imagine that if you could use it to show, for example, that a given tax change was unlikely to have the desired results, that seems worthwhile. The key differences with weather forecasting are:

          1. We think we know the relevant physics for a correct model.

          2. We can measure (to some grid discretisation) the numerical inputs and outputs of the physical system.

          3. We can apply useful high-level labels to what the model says (“it’s raining in Manchester”).

          If we had similar knowledge about “behavioural economics networks”, then I can imagine a good case for trying simulation modeling. You’ve got a strong point that if we need incredibly detailed models of human psychology, let alone individual human psychology, to make any kind of prediction, then it looks completely intractable.

        • John Baez says:

          David wrote:

          As is the common situation with machine learning, if a “severe downward movement” has a frequency of 1 in 25, then the constant “there is no severe downward movement” prediction is 96 percent correct.

          A friend of mine in the risk management business told me an interesting story about this.

          For certain kinds of investments they would assess the risk this way: see how the investment would do given the last decade’s economic data, and declare the risk to be the maximum amount of money you could lose in all but the very worst cases, say the worst 1% of all months.

          (I’m making up the numbers here, but that’s not important.)

          Then, people who worked at this firm would try to create investments that would maximize expected return while minimizing the risk — with risk defined this way.

          So, they would naturally dream up investments that had wonderful returns except in the worst 1% of all months. But in those exceptionally bad months, the investments would lose almost everything.

          And then they went out and sold these financial products, and people bought them… and in the financial meltdown, that’s exactly what happened: they lost almost everything.
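
          A minimal sketch of the incentive, with made-up numbers as in the story: a strategy that quietly sells catastrophe insurance earns a steady premium, and a short historical window may contain no crash at all, so a percentile-based risk measure can read “almost no risk” right up until the blow-up.

```python
# Historical 99% value-at-risk on a strategy with a hidden tail.
# All numbers (premium, crash size, crash odds) are made up, as in the story.
import numpy as np

rng = np.random.default_rng(42)
months = 120                             # "the last decade" of monthly data
crash = rng.random(months) < 0.005       # rare catastrophic month
returns = np.where(crash, -0.90, 0.02)   # steady premium, occasional wipeout

var_99 = -np.percentile(returns, 1)      # loss exceeded only 1% of the time
print("mean monthly return:", returns.mean())
print("historical 99% VaR:", var_99)     # a tail estimate from 120 points barely
                                         # sees the crash, if it sees it at all
```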

      • WebHubTel says:

        because so much trading is done based on algorithms

        There is something called implied correlation that describes how much the financial markets move in unison. High correlations imply that individual equities show little independent variation, as the market is played as a whole. In the past year the correlation has neared 80%. I believe this is due to more and more game-theory machines running the show.

        My argument is here:
        http://mobjectivist.blogspot.com/2010/10/stock-market-as-econophysics-toy.html

  5. Giampiero Campa says:

    Miguel said: “One of the problems with economics that Steve Keen complains about is that economists generally don’t know much about dynamical systems”.

    Yes, I couldn’t agree more! That is why the “out of equilibrium” framework is lacking. For example, I think that something simple like tau * dp/dt = demand(p) – supply(p) (where p is price) could be introduced right away in the basic courses, and would go a long way towards demystifying what happens out of equilibrium.
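
    As a minimal sketch of what I mean (the linear demand and supply curves and all numbers are assumed purely for illustration), forward-Euler integration of that equation already shows the out-of-equilibrium relaxation:

```python
# tau * dp/dt = demand(p) - supply(p), integrated by forward Euler.
# The linear curves and all parameter values are illustrative assumptions.
tau = 2.0                          # price-adjustment timescale
demand = lambda p: 10.0 - 1.5 * p  # assumed downward-sloping demand
supply = lambda p: 1.0 + 0.5 * p   # assumed upward-sloping supply

p, dt = 8.0, 0.01                  # start far from equilibrium
for _ in range(int(20 / dt)):      # integrate for 20 time units
    p += dt * (demand(p) - supply(p)) / tau

print(p)  # relaxes toward p* = 4.5, where demand(p*) == supply(p*)
```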

    In general I have also seen a lot of qualitative talk (e.g. in blogs) about the pros and cons of different kinds of models (Neoclassical, Keynesian, New Keynesian and everything in between), without anyone referring to a specific mathematical model (you know, a model with some numbers in it, that you can actually simulate), so that one can actually test and compare the various models against one another.

    But quite possibly I am still new to economics and looking in the wrong places.

    Anyway, Doyne Farmer at Santa Fe has a research program in finance based on dynamical systems (which I am sure most people here already know), and Ray Fair at Yale has a non-DSGE US (and world) model in Fortran:

    http://fairmodel.econ.yale.edu/main3.htm

    which I am going to have a deep look at sometime.
    Anyway, it’s really great to collect all these interesting links over here!

    • WebHubTel says:

      I studied the paper Doyne Farmer did with Schwarzkopf and was intrigued enough to put my own spin on it a couple of months ago:
      http://mobjectivist.blogspot.com/2010/10/bird-surveys.html

      The recurring theme in a lot of this work is the use of cross-disciplinary analyses, in this case of fluctuations in corporate and financial fund returns and in bird survey numbers. They do this because it’s really all about fundamental statistical phenomena and analyzing the ergodic state space.

    • Ivan Sutoris says:

      The Fair model contains hundreds of equations, and has roots in the large-scale Keynesian macro models of the 1960s (which are not that popular these days). What you study is of course your choice, but I could certainly imagine easier things to start with if you’re interested in economics :) For an easy introduction to modern DSGE models, a good book is “The ABCs of RBCs” by McCandless. Lots of links to lecture notes and a list of commonly used textbooks in graduate-level economics can be found here.

      • Giampiero Campa says:

        Thanks again for the suggestions, Ivan. I actually have that book, but I was in two minds about what to read next between McCandless and this one, which seems a little more advanced but perhaps more self-contained. I’ll see.

        I’d also be interested in your opinion on why large-scale Keynesian macro models are “not that popular these days”, beyond what is known as the Lucas critique. Have they been proven in some sense worse than DSGE models? Were they not able to predict accurately? I’d love to know more about that!

        Thanks again, hope you are going to stick around here :)

        • Ivan Sutoris says:

          Glad to be helpful, although take into account that I’m only a student myself, and discount what I say accordingly :). Regarding books, I can recommend only some which I’m familiar with (mostly those used in first-year grad courses, so maybe you know about them). Standard texts for macro are “Recursive Macroeconomic Theory” by Ljungqvist & Sargent (contains a lot of different applications, so maybe you’ll want to skip some parts), and “Recursive Methods in Economic Dynamics” by Stokey, Lucas & Prescott (which is more about mathematics, and more rigorous). And if you are interested in general equilibrium theory, there is a good introduction to it in “Microeconomic Theory” by Mas-Colell, Whinston & Green. These books are relatively technical, so for intuition one may also want to read some undergraduate textbooks.

          Regarding Keynesian models, both reasons you mention were probably the cause (in addition to the Lucas critique, the models failed to predict the stagflation of the 1970s, and their predictions could be outperformed by atheoretical time series models, like vector autoregressions). Most of what I know about this comes from reading this paper (actually a review of Fair’s book), which you may find interesting.

  6. John Baez says:

    In our email correspondence, Miguel added:

    Oh, by the way, I said mainstream economics hasn’t caught up to the fact that the crisis is an empirical refutation of their pet theory, but in fact economic commentators in the Financial Times (of all places) have been making noises about this for a long time. Just today, in Eurointelligence (a site run by a group of German economists around the FT’s Wolfgang Münchau), they write the following:

    (my bold)

    Münchau on liberalism

    In his Financial Times Deutschland column, Wolfgang Münchau offers some reflections on the future of economic liberalism after the increasingly likely dethroning of Guido Westerwelle as leader of Germany’s FDP. The main problem for the liberals, however, is not its leader, but the failure to adjust market-liberal positions to a post-crisis environment. German liberals in particular are fundamentally microeconomic liberals, happy to subject macro policies to a set of rules on inflation, fiscal deficit, etc. The financial crisis is proof that macroeconomic neglect leads to unstable markets. If economic liberalism survives the 21st century, it will need to accept the notion that markets fail, and that international coordination is needed to repair them.

    Martin Wolf, the other leading economic commentator at the FT and also German, has evolved to similar positions, whereas 3 years ago he was more uncritical of the system.

    It’s the policy makers and their advisors that seem to be hopeless…

    • Tim van Beek says:

      Some background:

      It should be noted that the German economy is booming again, so that the crisis is truly over for Germans.

      German liberals in particular are fundamentally microeconomic liberals, happy to subject macro policies to a set of rules on inflation, fiscal deficit, etc.

      The Liberals are part of the current government, with Guido Westerwelle as the minister of foreign affairs. The big problem of the Liberals is that they did not succeed in reducing the influence of the government, their philosophy being that “individuals make better use of their money than governments” (and as far as I understand it this philosophy applies to both micro- and macroeconomics). The party of Chancellor Merkel of course cites the economic crisis as the main reason why there cannot be any tax cuts, and why social security contributions have increased, while the FDP of course promised to lower them during the last campaign.

      I don’t see any connection to the crisis, nor to the difficulty the FDP has in clearly stating its philosophy; it’s rather simple: no politician has succeeded in substantially cutting government spending in the last decades, because the pressure from lobbyists is too big. And all governments have raised taxes where people cannot avoid them, namely income tax and social security contributions (both are paid directly by the employers before the employees get their salaries).

      In addition economic theory has always played a very minor role in German politics, so I’m very sure that we won’t learn anything interesting from German politics about economic theories :-)

  7. Frederik De Roo says:

    Interesting post! FYI, on his website Keen has a short article where he and his coauthors express some concerns about applying the theoretical approach of statistical physics:

    • Mauro Gallegati, Steve Keen, Thomas Lux and Paul Ormerod, Worrying trends in econophysics, Physica A 370 (2006), 1–6.

    • WebHubTel says:

      I would suggest not treating economics as ways of making money but as understanding how scores of various people make a living.

      Then it amounts to solving the problem of how the population fills up the state space with ways to make a living.

      When I start thinking about state spaces I immediately go to probability arguments, and try to get by using as little information as possible. This in turn leads me to considering the use of maximum entropy arguments.

      So the new field describing this approach is called econophysics, which is definitely about understanding concepts with as little effort as possible.

      I recall reading that paper “Worrying trends in econophysics” a while ago, and it’s mainly sour grapes.

      I dabble in econophysics and have made enough progress to find it immensely aids in understanding, especially in areas such as labor productivity.

  8. Curtis Faith says:

    I’ve spent a lot of time in and around the world that caused the latest crisis starting back in the mid 1980s. I’ve made a lot of money by understanding the weaknesses in the typical perspectives.

    In 2002 and 2003 I was working in a hedge fund and I tried to get institutional investors to be a little smarter about risk. I even tried to get some of the guys I knew who were consultants to the state pension funds (generally smarter than the typical institutional manager) to look at a new risk measure we developed: a variant of the idea behind the Sharpe ratio, except that it took into account differences in asset-class volatility, macro-level liquidity and leverage.

    The mindset of the institutional investors (these were guys controlling tens to hundreds of billions) was that volatility in returns equaled risk, when it was clear to anyone who was a trader that this was idiocy: you could get low-volatility returns by doing things that in the long run were stupid. At the time, Long-Term Capital Management was the poster child for this idea.

    The institutions also tended, and still do, to chase the prior three years’ returns, so they switched into new strategies invariably at the wrong times. They picked what had been working, which tended not to work anymore once all the new money came in.

    They also didn’t really get the idea of leverage. I tried to convince many of them that a hedge fund that didn’t tell you what asset classes it invested in and that wasn’t transparent about its leverage was a time bomb waiting to go off, but they didn’t really care. The decisions were made, like in all bureaucracies, not on the basis of logic for the organization but on the basis of logic for the individual. How do I keep from getting fired? How do I make sure my butt is covered in the event of a problem? If CalPERS invested in something, then it was safe for other states, no matter what the underlying logic or actual inherent risks.

    I don’t think the macro problem lends itself to mathematical modeling. There are too many unknowns, too many hidden instabilities, and too much human emotion driving the risks. And the minute you get the models right, some smart group of traders will change the assumptions so they can make money and the model will no longer fit reality.

    I do think that modeling is good for exposing the inherent chaos and vulnerability of certain types of behavior, so there is value there, just not value that can be used to reliably set policy. So I think you can model to expose the character of the markets but not to predict (except on the very short term with small predictive accuracy).

    I would say however that, in general, mathematical economists need to get out and do some actual trading and see how the markets really behave. They need to do this over decades. Then they will understand how difficult the problem really is. I believe, for example, that the fact that John Maynard Keynes was also an excellent trader/investor is related to his insights as an economist. See:

    http://www.maynardkeynes.org/keynes-the-speculator.html

    • Web Hub Tel says:

      So I think you can model to expose the character of the markets but not to predict (except on the very short term with small predictive accuracy).

      I think that is the important point. There is some law of efficiency that says that if every investor used the same model to predict the direction of the market, then no one would make any money, because it would all average out in a zero-sum game. That’s why you see all the machine trading: the only way to get profit is to do the computations incrementally faster than the other guy. So the game theory proponents have suggested real prediction is futile. Daskalakis did a thesis on this:
      http://www.physorg.com/news176978473.html

      • Miguel says:

        Isn’t qualitative description, even statistical characterisation, without predictive power the hallmark of deterministic chaos?

        Why should it not be the case that

        you can model to expose the character of the markets but not to predict (except on the very short term with small predictive accuracy)

    • Tim van Beek says:

      It is really great that an insider joins the discussion!

      Curtis Faith said:

      I don’t think the macro problem lends itself to mathematical modeling.

      I have been told that the success of the Black-Scholes formula led to a boost in derivative trading, so this formula had at least a big effect on the market.

      By the way:

      The Black-Scholes formula is a formula for a fair price of an option and is derived from a linear stochastic differential equation. To me, all of financial mathematics is about stochastic differential equations; is there more?
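
      For concreteness, here is the standard textbook closed-form price of a European call under those assumptions (nothing beyond the usual formula):

```python
# Black-Scholes price of a European call (standard textbook formula).
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    # S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

print(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # ~10.45
```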

      There are too many unknowns, too many hidden instabilities, and too much human emotion driving the risks. And the minute you get the models right, some smart group of traders will change the assumptions so they can make money and the model will no longer fit reality.

      That’s why I don’t understand the heavy use of stochastic differential equations in financial mathematics; and yet it would seem that the models are not complete nonsense. Of course the use of SDEs is based on the assumption that one can model the irregular behaviour of humans with noise, which is possible only if no single player has a significant influence on the market, and if the actions of the players are independent of each other.

      My current understanding of SDE models in mathematical finance is thus that these assumptions have to hold, and do hold, in a lot of situations, and that the biggest problem is to model “phase transitions”, that is, rapid changes in the market, where the behaviour of the people involved tends to be very coherent. The last time I looked into the subject (about 10 years ago), people tried to model these phase transitions with highly correlated noise processes.

      • Curtis Faith says:

        That’s why I don’t understand the heavy use of stochastic differential equations in financial mathematics, and it would seem that the models are not completely nonsense.

        The models are not exactly nonsense, but they don’t work all the time either. The problems come when they stop working, because they tend to stop working in spectacular ways, precisely because everyone assumes they are flawless.

        Of course the use of SDE is based on the assumption that one can model the irregular behaviour of humans with noise, which is possible only if no single player has asignificant influence on the market, and if the actions of the players are independent from each other.

        Noise is a decent model of irregular human market behavior except when it is not, then it is a really bad model. That’s how these models tend to work. They work well during normal times and then they stop working altogether because of something the model did not account for.

        One thing they don’t account for is psychological contagion. For many instruments, probably most, price is an almost entirely psychological phenomenon. If everyone wakes up tomorrow thinking that gold is worth only $800, then that is what the price will be. That is why the models don’t handle risk well.

        I like to say that the road to hell is paved with correlation. When things go wrong in trading, everything tends to correlate at once in ways that don’t show up in normal measurement. This also causes models to underestimate systemic risk because diversification no longer helps. When panic sets in this is especially true.

        To me, it seems like the modelers don’t have any common sense, but perhaps they understand what they are doing and it is just their customers who don’t have any common sense.

        Take the real-estate markets and the securitized debt that was sold related to them. There was an underlying assumption that evaluated the risk based on relatively recent history and on the fact that there had not been a prolonged decline in U.S. real estate since the Great Depression, so of course nothing like that would happen again. But if you look at the macro-level series of ups and downs, you realize that there is an extremely small sample size. If you just look at the major trends in the U.S. real-estate market since the Depression, there are a few long sustained rising markets and a few short periods of a few years here and there where the market went down slightly.

        You can’t say anything with any confidence with a sample size of 5 or 6. You certainly can’t say that real estate won’t go down by X because in our sample of 5 declines it only went down 0.4X, and therefore that there is no risk involved in a model that assumes X is impossible. Yet this is exactly what the models assumed.
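
        A minimal sketch of the sample-size point, with an assumed heavy-tailed (Pareto) population of declines in place of real data: the worst decline in a sample of 5 badly understates the worst declines the process can actually produce.

```python
# Small samples from a heavy tail: observed worst vs. population tail.
# The Pareto-distributed "declines" are an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
declines = rng.pareto(2.0, size=1_000_000) * 5.0   # population of declines, in percent
tail = np.percentile(declines, 99.9)

for n in (5, 50, 5000):
    sample = rng.choice(declines, size=n, replace=False)
    print(f"n={n}: observed worst {sample.max():.1f}% "
          f"vs population 99.9th percentile {tail:.1f}%")
```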

        • Tim van Beek says:

          10 years ago I was finishing my diploma thesis and looked around for what to do next. One option was to do a PhD with a professor of statistics, who had already acquired the necessary funding from the DFG (the German equivalent of the NSF). He said that there would be a fabulous career waiting in mathematical finance, e.g. in London, but my impression of the field pretty much coincided with what you said here, except that your assessment seems to be based on experience and true insight, while mine was just a first impression from a distance.

          The problems are:

          a) the people who can handle the models (the quants) don’t think much about economics; to them everything seems to be all right as long as their models work, and they are not used to sudden failures,

          b) the people who think about economics in the long term have no chance of understanding the models, due to their lack of mathematical proficiency,

          c) the models work very well in equilibrium situations but don’t even try to address sudden changes and crises, so that, when they fail, they fail spectacularly.

          And then we see CIOs of big-shot Wall Street banks in a congressional hearing saying things like “we did not know that house prices would not always go up”.

          One important lesson of the last crisis is that there is a strong groupthink process in a crisis, where everybody joins the panic. The free market as we know it today has no corrective force in this situation, contrary to what most economic theories claim, especially those of the neocons (the fans of the invisible hand of the market). Or, to be more precise: in the long term the market corrects itself; in the short term we are all homeless and broke :-)

        • Florifulgurator says:

          “Fat tails”: That’s what I gathered was the technical problem with mathematical finance. But I haven’t looked much.

          -> Excursion on my background: 15 years ago I was playing with stochastic analysis in geometry – but the Fundamental Theorem of Asset Pricing (and stuff) I found just too boring (plus, suspicious). Also, I’m a redneck hippie and wouldn’t feel comfortable wearing suit and cravat in Frankfurt… Anyhow I had a glance at the book by Thalmaier & Malliavin. <-

          So, would it help to extend mathematical finance by using processes with jumps? I guess those “fat tails” are Poissonian stuff.

      • Miguel says:

        The Black-Scholes formula is a formula for a fair price of an option and is derived from a linear stochastic differential equation. To me, all of financial mathematics is about stochastic differential equations, is there more?

        To me, the stochastic differential equations of mathematical finance arise from representation theorems of functional analysis. Under rather general conditions and reasonable approximations you can prove that no-arbitrage asset prices are expectation values under “market probabilities” different from the “physical probabilities”, and you can construct Monte Carlo integrals for these which are interpretable in terms of SDEs under the “market measure”.

        These SDEs not only give you a no-arbitrage price. Most importantly, they also give you a constructive method for building hedging portfolios for derivatives. And once you have the hedging strategy you can show that the price is enforced by arbitrage, without any reference to the price being obtainable from an SDE or expectation value.
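
        A minimal Monte Carlo sketch of the measure distinction (geometric Brownian motion and all parameter values are assumed for illustration): the arbitrage-enforced price is the discounted expectation under the risk-neutral drift r, while plugging in the “physical” drift mu gives a different number that is not a price.

```python
# No-arbitrage pricing: expectation under the "market" (risk-neutral) measure.
# GBM dynamics and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
S0, K, T, r, mu, sigma, n = 100.0, 100.0, 1.0, 0.05, 0.12, 0.2, 1_000_000

z = rng.standard_normal(n)
def terminal(drift):  # stock value at time T under the given drift
    return S0 * np.exp((drift - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

payoff = lambda ST: np.maximum(ST - K, 0.0)          # European call
print(np.exp(-r * T) * payoff(terminal(r)).mean())   # ~10.45: the enforced price
print(np.exp(-r * T) * payoff(terminal(mu)).mean())  # a bigger number, but not a price
```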

        So the prices are “not debatable”. However, the probability measures are explicitly not the “physical” probability measures, and therefore may be misleading for risk management purposes.

        Of course, historical estimates of “physical” probabilities are also potentially very misleading for risk management purposes since the market is not a stationary process in the long run.

        Which is to say, the “hedging” and “pricing” problems have been solved by mathematical finance, but the “risk management” problem hasn’t, and probably can’t be within mathematical finance. What the paradigm informed by mathematical finance does is attempt to turn any risk into market risk. Like when Robert Shiller suggested that one way to prevent the subprime bubble would have been to attach to each mortgage a derivative on the Case-Shiller index. Not that Shiller has a conflict of interest in suggesting that his index should be used to underpin a potentially huge derivatives market, perish the thought.

        • streamfortyseven says:

          The trouble with a lot of the modelling is that it doesn’t account for cheating: it assumes that there are well-defined rules and that most of the participants will play by them, and we’ve seen from the sub-prime mortgage debacle that this is not the case in general. In the face of the total regulatory breakdown in the last 20 years, cheating is the rule, not the exception, and the efficient market theory entirely breaks down.

          With such a situation, the only way you can make any money and minimize your risk is to somehow get outside of having to depend on the usual sorts of information about stocks and their fluctuations, and to engage strictly in momentum plays, where you hold stocks for the least amount of time possible, and trade in the largest amounts possible. If you hold a million shares of stock for a second, and there is a price rise of a penny and you sell, then you’ve made $10,000. Most price moves are a lot greater than that, so you find the upward movers, get in and get out – and do that 100 times a day and walk away with a million dollars of someone else’s money – and that someone else is invariably the small-time or institutional investor, who doesn’t have the micro-real-time information you have.

          What I wonder about is the transactional costs for these micro-trading deals – not to mention the impact of short-term capital gains taxes. Potentially, you’re looking at a Schedule D with millions of entries, which may be unauditable, and may be a way to evade taxation. And, if the micro-traders have seats on the exchanges, their transactional costs are effectively zero.

          Right now, with this sort of thing going on, perhaps the only way for a small investor not to get utterly fleeced is to trade in thinly-traded issues…

  9. Miguel says:

    in what sense do economists model the economy using the Clausius-Clapeyron equation?

    This is my one-liner explanation of the “groundbreaking” [sic] work of Paul Samuelson, whose contribution to the Neoclassical Synthesis is the so-called comparative statics:

    In economics, comparative statics is the comparison of two different economic outcomes, before and after a change in some underlying exogenous parameter.

    As a study of statics it compares two different equilibrium states, after the process of adjustment (if any). It does not study the motion towards equilibrium, nor the process of the change itself.

    Wikipedia also says

    Samuelson’s 1947 magnum opus Foundations of Economic Analysis, from his doctoral dissertation, is based on the classical thermodynamic methods of American thermodynamicist Willard Gibbs, specifically Gibbs’ 1876 paper On the Equilibrium of Heterogeneous Substances.

    Basically, it’s playing around with crossed partial derivatives. When I think of “thermodynamics and economics” I don’t think of borrowing the technology of partial derivatives as being a particularly thermodynamical insight…

    The moment you realise the economy is a non-equilibrium collective phenomenon, you know comparative statics is “not even wrong”. I mean, in the statistical physics analogy it’s akin to the Clausius-Clapeyron equation; it’s not even close to (say) a Landau mean-field theory.

    • John Baez says:

      Okay, I get it. Thanks, Miguel!

      The analogy between Samuelson’s work and the Clausius-Clapeyron equation is nice, because we know lots of ways to take the Clausius-Clapeyron insight in thermodynamics and make it progressively more sophisticated, so we could try to apply some of these to economics.

      (By the way, I didn’t know what the “Clausius-Clapeyron equation” was until I looked it up, so in case anyone else here shares my ignorance and is too lazy to click the link, let me just say that it’s something pathetically simple, almost too simple to deserve a name. It says that at higher pressures, the boiling point of water goes up, ’cause it’s harder for the water to expand into steam. It actually says exactly how much the boiling point goes up. But it’s nothing esoteric.)
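
      (For reference, the relation itself, with L the latent heat of vaporization and \Delta v the volume change on boiling, is just

      \frac{d P}{d T} = \frac{L}{T \, \Delta v}

      and since \Delta v > 0 for boiling, raising the pressure raises the boiling point.)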

      Could you sketch how far “mainstream economists” have gone down this road? You mentioned dynamic stochastic general equilibrium, a buzzphrase I had not heard of. Is this as far as the “mainstream” has gone (whoever they are)?

      By the way, I like the irony in the phrase “dynamic equilibrium”. It sounds like economists are so fond of equilibrium that they call nonequilibrium “dynamic equilibrium”. And it reminds me of how Lagrange seems to have invented the variational principle for dynamics by starting from the variational principle with statics and somehow trying to “reduce dynamics to statics”, in a manner I’ve always found a bit mysterious. The idea seems to be that we reinterpret a particle “moving as it should” under the influence of forces as a particle “in equilibrium”.

      Of course in relativity we can say a particle without any forces applied to it is static in some rest frame, but that’s not all that’s happening here! Somehow Lagrange’s idea was to focus attention on

      F - m a

      When this is zero, we could say the particle is in “dynamic equilibrium”. Lagrange didn’t say that, but that’s what you’re making me think.
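
      (Indeed, one standard way to make Lagrange’s move precise, if I’m reading him right, is d’Alembert’s principle: treat the “lost force” F - ma as a static force doing no virtual work,

      \sum_i (F_i - m_i a_i) \cdot \delta x_i = 0,

      so that dynamics literally becomes an equilibrium condition, one holding for every admissible virtual displacement \delta x_i.)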

      • Nathan Urban says:

        The Clausius-Clapeyron equation also provides an approximation to the climate water vapor feedback (or rather, to the amount of water vapor that goes into the atmosphere in a warmer climate, from which the feedback can be derived).

        See O’Gorman and Muller (2010), and references therein. (The paper itself doesn’t discuss the theoretical basis or empirical tests, but rather the extent to which climate model predictions conform to this approximation.)

      • Miguel says:

        The problem I have with this is that the economy is not even close to equilibrium. It’s like pretending that Bernoulli’s law, which holds microscopically, is applicable to turbulent flow in terms of the average flow velocity.

        I don’t think state functions exist in economics, and therefore statements about the equality of crossed second partial derivatives (which is what the Clausius-Clapeyron equation and Samuelson’s “comparative statics” are) are likely to be wrong, or right only accidentally.

        There is no “local equilibrium” in economics; the “relaxation timescales” are all wrong for pretending that you can model dynamics as parametric changes to an equilibrium point, etc., etc.

        At least in my not so humble opinion.

      • DavidTweed says:

        Uwe wrote (concerning what an economics textbook says):

        Down to earth this is a gauge symmetry and states that demand does not change by introducing a new currency.

        I can’t tell if you’re saying this is a well-supported element of a model, or just that this is what economists currently do and it’s different to one physics approach to economics.

        What interests me is the first viewpoint: I’m sure it’s what people “ought to” do, but is it how actual “evolved from animals” humans actually act? I know it’s not the same thing as the same price in different currencies, but on a ten-dollar item a difference of 1 cent should be pretty much negligible. Yet retailers know there is a small but significant difference in the quantity bought when pricing an item at 10 dollars or at 9.99 dollars, even when customers consciously know by now that 99-cent prices are deliberately being used. (You could look at things like Americans’ behaviour buying items abroad in Italian lira, say, but there you’ve got currency differences mixed in with “on holiday” issues.)

        That kind of thing makes me wonder whether any approach which postulates symmetries, rather than identifying them empirically from observed data, whether in mainstream economics or an alternative physics-based approach, isn’t taking a very risky step at the very foundations of the theory-building process.

        • Tim van Beek says:

          Dave wrote:

          You could look at things like American’s behaviour buying items abroad in Italian lira…

          Although that may change in the near future, Italy is still part of the eurozone :-)

        • John F says:

          From what my sin nombre financial friends tell me, it’s basically fraud, although none dare call it that. The money to be made from the equations and models is *not* from their performing so well in predicting the market that the money comes directly from buying and selling based on the predictions. The money comes from investors who think your equations and models could do better.

        • Miguel says:

          Uwe wrote (concerning what an economics textbook says):

          Down to earth this is a gauge symmetry and states that demand does not change by introducing a new currency.

          It is very unfortunate that this is not true in practice. Keynes’ General Theory is only the best-known elucidation of the fact that nominal (as opposed to “real” or “inflation-adjusted”) money amounts matter. A lot.

          And the basic reason why they do is that credit is denominated in cash values.

        • What I am saying is that this symmetry (demand invariance under price-scaling) can be observed. For example, when Germany introduced the euro, all prices, debts and so on were divided by 1.95583. That did not increase the demand for fridges, cars and so on. Why should it in the first place? Either you need a fridge or not. Whether you pay in Deutsche Mark or euros is not relevant.

          From this symmetry it is elementary to show that market equilibria do not exist. To do this, however, one needs some functional analysis, which makes it inaccessible to mainstream economics.

          What Miguel wrote about nominal values is true and does apply to my second point. The utility function to be maximized is time dependent. Individuals are known to be bad at correctly estimating a present value. However, that does not spoil the argument, since the exact form of the time dependence is not important. The result is a non-autonomous evolution equation (the coefficients depend on time).

          Of course, one can always doubt that agents in a market maximize utility and choose another approach to describe them.

      • John Baez says:

        John wrote:

        The analogy between Samuelson’s work and the Clausius-Clapeyron equation is nice, because we know lots of ways to take the Clausius-Clapeyron insight in thermodynamics and make it progressively more sophisticated, so we could try to apply some of these to economics.

        Miguel wrote:

        The problem I have with this is that the economy is not even close to equilibrium. It’s like pretending that Bernoulli’s law, which holds microscopically, is applicable to turbulent flow in terms of the average flow velocity.

        I understand. I don’t think you need to make that argument here. I don’t think anyone here is silly enough to claim that the concept of ‘equilibrium’ is relevant to our economy, except in certain local situations where outside circumstances are (temporarily) unchanging and things happen to settle down for a while.

        But you didn’t answer my question: could you sketch how far “mainstream economists” have gone down the road of applying more sophisticated concepts from physics that are applicable to more general situations? Nonequilibrium thermodynamics, catastrophe theory, chaos, etc?

        • Eric says:

          Wish I could say more, but for now I’ll simply point you to E T Jaynes. By no means “mainstream”, but he did work on nonequilibrium economic modeling.

        • Miguel says:

          could you sketch how far “mainstream economists” have gone down the road of applying more sophisticated concepts from physics that are applicable to more general situations? Nonequilibrium thermodynamics, catastrophe theory, chaos, etc?

          The short answer is that I don’t think they have. That’s why Steve Keen complains that economists need to know more about dynamical systems.

          To be fair, when I was finishing my first degree in Physics in the late 90’s, my Classical Mechanics textbook was Goldstein’s, which went as far as claiming that bifurcations in dynamical systems were “fortunately rare”. There was just one professor at my school talking about chaos in the advanced classical mechanics course, which he taught on alternate years if I remember correctly.

          So I’m sure lots of physicists of my generation and older know about the classical theory of autonomous dynamical systems and classifications of fixed points in 2D phase spaces by linearization, but haven’t gone any further into the wild unknown of systems with positive Lyapunov exponents.

        • Tim van Beek says:

          The common wisdom of my generation of physicists was that as soon as you had understood the basics of symplectic geometry you should advance to quantum mechanics as fast as possible. I learned about dynamical systems and chaos as a kind of hobby from math classes.

        • phorgyphynance says:

          ET Jaynes is, in my opinion, one of the greatest scientists of all time. Hopefully history takes note some day.

          Here is a paper (“in need of criticism”) he wrote:

          entropy.in.economics.pdf

          Abstract. Classical economics was built largely on the analogy to mechanics, as it was known in the time of Adam Smith; particularly the idea of mechanical equilibrium. But a macroeconomic system is in some ways more like a thermodynamic system than a mechanical one, so we develop that analogy. Since the time of J. Willard Gibbs it has been known that prediction of chemical processes – reversible or irreversible – could not possibly have succeeded until the entropy of a macrostate was recognized and taken into account. We conjecture that the same may be true in economics; the direction of economic change may have as much to do with the entropies of neighboring macrostates as with any of the other `dynamical’ factors now recognized.

        • Phil Henshaw says:

          Well, you can take the idea of equilibrium in reverse, look for what predicts instability, and that can be quite useful.

          You can be entirely certain that diverging processes will NOT continue as stable parts of the same system, for example. That turns indicators of regular proportional change into solid predictions that some of your assumed boundary conditions will fail. Very handy indeed.

          Then all you do is look for the ones that might, and devise indicators (such as the failure of market allocation processes seen in our economy presently) to tell you that you had better plan on a systemic change of form approaching, and to let people know you’ll need to change your model for it to continue being an approximation of reality.

        • Phil Henshaw says:

          Miguel quoted John asking:

          could you sketch how far “mainstream economists” have gone down the road of applying more sophisticated concepts from physics that are applicable to more general situations? Nonequilibrium thermodynamics, catastrophe theory, chaos, etc?

          Miguel, you responded in brief:

          The short answer is that I don’t think they have. That’s why Steve Keen complains that economists need to know more about dynamical systems.

          I’ve studied the ability of economists to adapt complex systems theory for 30 years. They basically are stopped dead in their tracks. The problem is that the dynamics of economies comes from the animated learning of the parts, with both people and their cultural systems acting opportunistically in exploring and inventing new stuff to do with their environments. Mathematical system theory is also stopped dead by that really obvious necessity of any scientific study of the problem. That reality is highly “equation unfriendly”. So you need to use equations for the very limited things that equations can be used for, with the benefit of getting limited real knowledge out rather than just bad hyperbole.

          As a result there is no “science of economics”: the field still relies on highly unreliable regressions of past regularities that never hold up, used to illustrate near-religious beliefs in abstract ideologies that are held onto just as firmly when they contradict the quite clear behavior of the world economy. People keep writing papers that *seem* impressively insightful, but mostly they’re not looking for what assumptions might be in error, and they keep letting their belief in things like limitless multiplying machines, built into their core principles, completely derail their thinking.

          There are certainly better options, but switching to a paradigm of studying learning systems is a real problem for them. It does not seem they’d even want to find a science based on reliable principles and observation. I have a fairly solid and versatile one, but it requires studying uncontrolled rather than controlled systems, like economies…, organized by their creative development processes.

          So, they draw a blank whenever someone points out the value of starting with finding some very simple things you can be quite sure of in an environment where it’s so clear you don’t know much.

        • John F says:

          Miguel,
          somewhat older generation here; we also did Goldstein. The funny thing is, for the generic dynamical system with both damping and driving, chaotic behavior is typical, so it is far from being “fortunately rare”. Chaos is good math and good physics when treated as generic and inevitable, but educationally it is instead treated mostly as a collection of special cases.

        • Ivan Sutoris says:

          John wrote:

          I don’t think anyone here is silly enough to claim that the concept of ‘equilibrium’ is relevant to our economy, except in certain local situations where outside circumstances are (temporarily) unchanging and things happen to settle down for a while.

          As I mentioned below, the concept of equilibrium in economics is perhaps different from its usual meaning and does not imply a static, non-changing state. For example, in a simple exchange economy (people go to the market, sell their stuff and buy other stuff, with prices being the same for everyone), the equilibrium is simply a set of allocations of goods among consumers and a vector of prices such that: 1) each consumer gets the most preferred allocation he could afford, given the prices and the income from his initial allocation, 2) the market clears (supply = demand). This can be extended to include production, time and uncertainty, and you can obtain an economy which will fluctuate as the uncertainty is revealed over time.

          On the other hand, there is a problem (not yet satisfactorily resolved, AFAIK) of nonequilibrium dynamics, although in a different form: how do these market-clearing prices materialize? A typical story from Econ 101 is that if demand is greater than supply, prices will go up, and vice versa. So people were trying to formalize this idea, modeling the evolution of prices by differential or difference equations and investigating whether the equilibrium is stable (this is called tatonnement dynamics, and goes back to the 1960s or 70s, old stuff). The problem is that theory poses very few restrictions on the aggregate excess demand function (which drives the tatonnement process), and so stability cannot be proven in general.
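
          A minimal sketch of tatonnement in a case where it happens to converge (a toy two-consumer, two-good Cobb-Douglas exchange economy; all numbers are assumptions for illustration, and Cobb-Douglas gives “gross substitutes”, one of the few known sufficient conditions for stability):

```python
# Tatonnement: raise the price of a good in excess demand, lower it otherwise.
# The Cobb-Douglas economy and all numbers are illustrative assumptions.
import numpy as np

alpha = np.array([[0.7, 0.3],   # consumers' expenditure shares on goods 0, 1
                  [0.2, 0.8]])
w = np.array([[1.0, 0.0],       # endowments: consumer 0 owns good 0, etc.
              [0.0, 1.0]])

def excess_demand(p):
    income = w @ p                        # wealth of each consumer at prices p
    demand = alpha * income[:, None] / p  # Cobb-Douglas demand: share * income / price
    return demand.sum(axis=0) - w.sum(axis=0)

p = np.array([1.0, 1.0])
for _ in range(1000):
    p = p + 0.1 * excess_demand(p)  # the Econ 101 adjustment story
    p = p / p[0]                    # normalize: only relative prices matter

print("prices:", p, "excess demand:", excess_demand(p))  # settles at p1/p0 = 1.5
```

          In general nothing forces this to settle down, of course: Scarf gave a famous three-good example where these dynamics orbit forever instead of converging.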

          And regarding chaos, catastrophe theory etc., it is true they are not used much in the mainstream (they certainly are not taught in graduate courses). However, it is not true that economists have ignored them entirely – for example, this paper reviews some applications of chaos theory and nonlinear dynamics, and this one by Barkley Rosser (who also has other papers on these topics on his website) is about catastrophe theory in economics.

          Why these ideas haven’t caught on is a different question, but it’s certainly not because economists are too dumb to learn the math.

        • John F says:

          Ivan,
          “the market” is a kind of Sumo ring, in which competitive action occurs both slowly and quickly. Economic theories seem to me to only try to explain those long intervals in which the Sumo wrestlers are pushing and sweating but not actually moving. But the real action comes in big moves such as falls, which in the market are most often effected by cheating (pulling hair, etc.).

          I don’t think there can be, but suppose there were good theories of cheating, i.e. precisely how to beat the market, e.g. prior knowledge of when and how these big moves will take place. It seems to me the only way to get a proper theory of cheating is to cheat – to rig the market so that your predictions are correct.

          Anyway, dynamic attractors may be of some interest in economics. Chaos and attractors are ubiquitous in dynamical systems. And in most systems the attractor moves.

      Maybe a good place to start to get an idea of what economists use and how they argue is ‘Microeconomic Theory’ by Mas-Colell and Whinston. Let me just sketch two arguments why most economists’ intuition that physics is not the ‘right’ way to do economics might be true.

        On page 23 (which happens to be the start of the chapter on ‘demand functions and comparative statics’) they define:

        The Walrasian demand correspondence x(p,w) is homogeneous of degree zero if x(a p, a w) = x(p,w) for any p, w and a > 0.

        Down to earth this is a gauge symmetry and states that demand does not change by introducing a new currency. The price-scaling group is generated by the demand. Representing this group as linear operators on a Hilbert space gives commutation relations for price p and demand d of a good: [p,d] = i h p for some real number h (and not equal to i h, as in the momentum/displacement case in quantum mechanics).

        On page 732 they start to discuss ‘Equilibrium and Time’ and develop fairly convincing arguments why utility is increasing, concave and time-dependent. Maximizing utility leads to an Euler-Lagrange equation (with constraints). In such systems, energy (the Legendre transform of the Lagrangian) is not conserved. In other words, markets do not have an obvious invariant under time translation.

        The equations are too hard to solve, and thus phenomenological theories, with all their (to non-economists) obvious deficiencies, are being used. I do not know enough about physics to see how this can be mended. Having no ‘energy’ seems to be an even bigger problem when game theory enters the arguments (p. 217).

        • John F says:

          Speaking of having a lack of energy, it’s probably due to Clausius–Clapeyron that my heat pump works so poorly when outside temperatures are below about 5 C. The air-cooled unit that works so well at 35 C freezes solid whenever the winter is humid.

        • Giampiero Campa says:

          There are plenty of systems where you don’t worry about the energy flow (because you want to model other things). However, those systems can still be thoroughly understood within a dynamical-systems framework.

          In other words, if you can simulate the system and/or you want to make it “behave”, you (only) need to know the evolution equations. Not having an obvious invariant under time translation is not necessarily a showstopper.

  10. Robert Smart says:

    A house is a productive asset producing accommodation. A car produces transport. In this way of thinking, everything that is produced is consumed almost immediately. Money is a “right to consume” token.

    The other part of the economy is legal rights to productive assets, which people acquire by building houses or factories, or buying (or improving) farms, etc. Then there are financial assets which can be considered securitization of these real assets, with attendant risks of malfeasance. The fact that we use “right to consume” tokens to exchange this stuff may be making it hard to think clearly about it. Note that these legal rights cover some time frame in which “now” is either absent or not a significant component.

    • streamfortyseven says:

      If you’ve ever owned a house, you’d know that it’s not a “productive asset”. They require constant upkeep and repair, and are subject to ever-increasing property taxation, which is why, even though the assessed valuation of my house has fallen slightly over the past 3 years, the mill levy has increased so as to cause a slight increase in property tax. A true productive asset is an oil well, or a gas well, or a farm, or a business where you can make money off of employees’ labor and creativity. Houses are a consumer good, not an asset. Cars are even worse, because in 95% of cases their value depreciates markedly over time, in addition to upkeep and insurance – they’re not assets, they’re consumer goods with a very finite lifetime.

      • John Baez says:

        I understood most of what you’re saying, streamfortyseven — perhaps because I own a house — but I’d never heard the term “mill levy” before, so perhaps others would like to know the definition. It is not a tax that you pay when you own a mill.

        From What is a Mill Levy?:

        A mill levy, also known as a millage rate, is an alternate term for a property tax rate. Mill levies, when multiplied by the value of the property being taxed, provide a property’s annual tax liability. Some governments call this number a mill levy or millage rate, but in simplest terms, it is your property tax rate.

        Mill levies allow for the calculation of property tax liabilities by factoring in the value of the property being taxed. The property tax liability is computed by multiplying the value of a property times the mill rate, and dividing by 1,000.

    So, I guess the term ‘mill’ refers to the factor of 1000 here. I only recently learned that in addition to the familiar ‘percent’ symbol, there’s also a ‘per mil’ symbol: ‰

  11. Blake Stacey says:

    In week309 I plan to give an explanation of the Lotka-Volterra equation,

    Lotka-Volterra dynamics become quite interesting when extended to ecosystems which are distributed across space: lots of non-equilibrium phase transition stuff arises. It’s one of a couple of areas in which field theory might plausibly be relevant to ecology.

    • Miguel says:

      You’re of course talking about a reaction-diffusion system in which the reaction part is the Lotka-Volterra dynamics.
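
      A minimal sketch of such a system in one dimension, with all rates, diffusion constants and initial data assumed purely for illustration (forward Euler on a periodic grid):

```python
# 1-D reaction-diffusion with Lotka-Volterra reaction terms.
# All rates, diffusion constants and initial data are illustrative assumptions.
import numpy as np

nx, dt, steps = 200, 0.01, 5000
a, b, c, d = 1.0, 0.5, 0.5, 1.0   # prey growth, predation, conversion, predator death
Du, Dv = 0.1, 0.05                # diffusion constants for prey u, predator v

u = np.ones(nx)
v = 0.5 * np.ones(nx)
u[90:110] = 2.0                   # a local excess of prey seeds spatial structure

def laplacian(f):                 # periodic 1-D Laplacian, grid spacing 1
    return np.roll(f, 1) + np.roll(f, -1) - 2.0 * f

for _ in range(steps):
    du = a * u - b * u * v + Du * laplacian(u)
    dv = c * u * v - d * v + Dv * laplacian(v)
    u, v = u + dt * du, v + dt * dv

print(u.mean(), v.mean())         # spatially averaged densities after the run
```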

    • Tim van Beek says:

      That is interesting! The topic of TWF 308 made me ask myself what is known about the qualitative analysis of stochastic differential equations; the Mobilia/Georgiev/Täuber paper does a little bit of what I had in mind.

      Maybe I should explain what “qualitative analysis” means: It’s about describing properties of families of solutions without actually solving the equation. The qualitative analysis of ordinary differential equations has a history of more than 100 years. The topic usually starts with an introduction to linear systems,

      \frac{d y}{d t} = A y,

      where y(t) is an n-dimensional vector of real functions and A is an n \times n matrix. The solution of the scalar equation

      \frac{d y}{d t} = \lambda y

      with a constant \lambda is of course y(t) = \exp(\lambda t), up to a constant factor. If we assume that we can diagonalize the matrix A, then we can solve the linear equation, the solution being a vector of exponential functions as above, with the eigenvalues of A as the constants. Therefore, we can divide space into the part where every solution converges to zero, spanned by the eigenvectors with negative eigenvalues, and the part where every solution escapes exponentially to infinity, spanned by the eigenvectors with positive eigenvalues.
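
      In code, that classification is just a few lines (the matrix is an arbitrary example, nothing special about the numbers):

```python
# Split directions into stable/unstable by the real parts of eigenvalues of A.
import numpy as np

A = np.array([[-1.0, 0.0, 0.0],
              [ 0.0, 2.0, 0.0],
              [ 0.0, 1.0, 0.5]])  # an arbitrary example matrix

eigvals, eigvecs = np.linalg.eig(A)
for lam, vec in zip(eigvals, eigvecs.T):  # columns of eigvecs are the eigenvectors
    kind = "decays to zero" if lam.real < 0 else "escapes to infinity"
    print(f"eigenvalue {lam.real:+.2f}: solutions along {np.round(vec, 2)} {kind}")
```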

      For more general equations we can apply this kind of analysis to the linear approximation; this is where the significance of the Lyapunov exponents comes from. For ODEs this is just the beginning of a long story, but I have never seen a similar qualitative analysis for SDEs.

    • Blake Stacey says:

      Maybe I should explain what “qualitative analysis” means: It’s about describing properties of families of solutions without actually solving the equation.

      Much of what I’ve seen in that vein for stochastic systems has been classifying behaviour in the vicinity of phase transitions. For example, many models of interest turn out to have a phase transition which falls into the directed percolation (DP) universality class, which is characterised by particular values of the exponents which describe how various quantities depend on the distance from the transition point and on the time since perturbation. Hinrichsen (arXiv:cond-mat/0001070) is a pretty good introduction to the subject and how it relates to field theory. Takeuchi et al. (arXiv:0907.4297) demonstrate an experimental realisation of a DP phase transition. The textbook Non-Equilibrium Phase Transitions by Henkel, Hinrichsen and Lübeck (Springer, 2008) is a fairly comprehensive and readable survey, which does the important job of addressing why DP transitions are so easy to see in models and so hard to get in the laboratory.
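      For anyone who wants to poke at this numerically, here is a rough toy simulation of 1+1-dimensional directed bond percolation (a sketch under stated assumptions; the critical probability p_c ≈ 0.6447 and the decay exponent δ ≈ 0.159 are the standard values quoted for this lattice):

      import numpy as np

      rng = np.random.default_rng(1)
      n_sites, n_steps, p = 10_000, 2_000, 0.6447   # p set (approximately) at p_c

      active = np.ones(n_sites, dtype=bool)          # start fully occupied
      density = []
      for t in range(n_steps):
          # each active site tries to activate itself and its right neighbour
          to_self = active & (rng.random(n_sites) < p)
          to_right = active & (rng.random(n_sites) < p)
          active = to_self | np.roll(to_right, 1)    # periodic boundary
          density.append(active.mean())
      # at p_c the density decays roughly like t**(-0.159); away from p_c it
      # saturates or dies out exponentially -- the DP exponents mentioned above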

    • John Baez says:

      Thanks for all the info, Blake! It’s really fascinating.

      But you’re talking about field theories, while Tim seemed to be asking about something simpler: stochastic ordinary differential equations. For these, instead of ‘phase transitions’, it seems interesting to study the effect of stochasticity on ‘bifurcations’.

      For example, in “week308” we saw how the Hopf bifurcation was affected by the presence of white noise — and the paper I was discussing goes into a bit more detail on page 2547. (See the picture.)

      But one could imagine a much more detailed and mathematical theory of this stuff, and I think Tim was wondering if this theory already exists.

      It would be odd, but not unimaginable, if the qualitative study of stochastic field theories (stochastic PDE) had progressed faster than the qualitative study of stochastic ODE.

    • Blake Stacey says:

      But you’re talking about field theories, while Tim seemed to be asking about something simpler: stochastic ordinary differential equations.

      Well, I’m naturally going to spend more time jabbering about the material against which I’ve been slamming my head to the greater extent. (-:

      More seriously, though: squeezing a physical, ecological or economic system down to a few variables in a handful of coupled ODEs is a nontrivial simplification, and I wouldn’t be surprised if it fails horribly in cases we care about! It’s like looking at the Ising model below the critical dimension: mean-field approximations gang aft agley.

      (Ecologists have tools for facing situations where ecosystems aren’t fully mixed and some amount of spatial heterogeneity occurs; their “moment closures” are closely analogous to the ways statistical physicists have of cutting off the BBGKY hierarchy in the kinetic theory of gases. A typical application would be to find an expression for the ability of an invasive species to prosper in an ecosystem. One writes differential equations for the density of native individuals, p_N, the density of invasive mutants, p_M, the pairwise correlations p_{N|M} (giving the probability that a neighbour of a mutant is a native)… Then one linearizes this system of equations around a fixed point and finds the eigenvectors and eigenvalues. If the largest eigenvalue is positive, then the mutant population can invade. Moment closures don’t always work very well, though, which given all the cut-offs and assumptions going into the calculations isn’t terribly surprising.)
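      A bare-bones numerical version of that last step (linearize around the mutant-free fixed point, then check the sign of the leading eigenvalue) might look like this — the two-species model here is a made-up toy, not an actual moment closure:

      import numpy as np

      def rhs(state, b_native=1.0, b_mutant=1.2, death=0.5):
          # toy competition model (placeholder dynamics, not a real pair approximation)
          p_n, p_m = state
          crowding = 1.0 - p_n - p_m
          return np.array([b_native * p_n * crowding - death * p_n,
                           b_mutant * p_m * crowding - death * p_m])

      def jacobian(f, x, eps=1e-6):
          # finite-difference Jacobian of f at the point x
          J = np.zeros((len(x), len(x)))
          for j in range(len(x)):
              dx = np.zeros(len(x)); dx[j] = eps
              J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
          return J

      x_star = np.array([0.5, 0.0])   # mutant-free equilibrium of this toy model
      leading = max(np.linalg.eigvals(jacobian(rhs, x_star)).real)
      print("mutant can invade" if leading > 0 else "mutant dies out")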

      For these, instead of ‘phase transitions’, it seems interesting to study the effect of stochasticity on ‘bifurcations’.

      Which reminds me of Yet Another thing I need to learn more about: Arnold’s ADE classification of catastrophes.

      • John Baez says:

        Blake wrote:

        Which reminds me of Yet Another thing I need to learn more about: Arnold’s ADE classification of catastrophes.

        I think Arnold called them “singularities” instead of “catastrophes” — he was always a bit sniffy about the phrase “catastrophe theory” since he and other people had been working on similar things for quite a while. I wrote about Arnold’s classification of singularities in “week230”, which was a kind of tour of things having ADE classifications, and how they’re all related. And here’s what I said:

        The ADE Dynkin diagrams also classify the simple critical points of holomorphic functions

        f: C^3 → C

        A "critical point" is just a place where the gradient of f vanishes.

        We can try to classify critical points up to a holomorphic change of variables. It’s better to classify their "germs", meaning we only look at what’s going on right near the critical point. But, even this is hopelessly complicated unless we somehow limit our quest.

        To do this, we can restrict attention to "stable" critical points, which are those that don’t change type under small perturbations. But we can go a bit further: we can even classify "simple" critical points, namely those that change into only finitely many other types under small perturbations.

        These correspond to the ADE Dynkin diagrams!

        First I’ll say which diagram corresponds to which type of critical point. To do this, I’ll give a polynomial f(x,y,z) that has a certain type of critical point at x = y = z = 0. Then I’ll explain how the correspondence works:

        • The diagram A_n corresponds to the critical point of x^{n+1} + y^2 + z^2.

        • The diagram D_n corresponds to the critical point of x^{n-1} + x y^2 + z^2.

        • The diagram E_6 corresponds to the critical point of x^4 + y^3 + z^2.

        • The diagram E_7 corresponds to the critical point of x^3 y + y^3 + z^2.

        • The diagram E_8 corresponds to the critical point of x^5 + y^3 + z^2.

        Here’s how the correspondence works. For each of our Dynkin diagrams we have a finite subgroup of SU(2), thanks to the McKay correspondence [which was the previous item on the list of ADE classifications in “week230”]. This subgroup acts on the ring of polynomials on C^2, so we can form the subring of invariant polynomials. This turns out to be generated by three polynomials that we will arbitrarily call x, y, and z. But, they satisfy one relation, given by the polynomial above!
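        To make this concrete in the simplest case (a standard worked example, not part of the original passage): for A_n the subgroup is the cyclic group Z_{n+1}, acting on C^2 by (a, b) \mapsto (\zeta a, \zeta^{-1} b) with \zeta = e^{2 \pi i/(n+1)}. The invariant polynomials are generated by

        u = a^{n+1}, \qquad v = b^{n+1}, \qquad w = a b,

        and the single relation between them is u v = w^{n+1}. Setting u = x + iy and v = x - iy turns this into x^2 + y^2 = w^{n+1}, which is the zero set of the A_n polynomial above after relabelling and rescaling the variables.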

        Conversely, we can start with the polynomial

        f: C^3 → C

        The zero set

        {f = 0}

        has an isolated singularity at the origin. But, we can “resolve this singularity” which means we find a smooth complex manifold N with a holomorphic map

        q: N → {f = 0}

        that has a holomorphic inverse on a dense open set. There may be lots of ways to do this, but in the present case there’s just one "minimal" resolution, meaning one that every other resolution factors through.

        Then – and here’s the magic part! – the inverse image of 0 in N turns out to be the union of a bunch of Riemann spheres. And if we draw a dot for each sphere, and an edge between these dots whenever their spheres intersect, we get back our Dynkin diagram!!!

        It’s pretty freaky, getting Dynkin diagrams to show up as the pattern of intersection of Riemann spheres… but this is actually quite typical in the ADE game, which is apparently the portion of mathematics where god was trying to pack in as much craziness as possible without actually making it inconsistent.

    • Blake Stacey says:

      Also: the people who write about “stochastic bifurcation theory” don’t seem to be very good about putting their papers on the arXiv. Sigh.

      • Tim van Beek says:

        John said:

        But you’re talking about field theories, while Tim seemed to be asking about something simpler: stochastic ordinary differential equations. For these, instead of ‘phase transitions’, it seems interesting to study the effect of stochasticity on ‘bifurcations’.

        Right, but I’m interested in both and Blake provided information about both!

        I became interested in white noise analysis and Malliavin calculus a couple of years ago as a means to define Feynman integrals, but did not manage to connect this stuff to SPDEs and what physicists usually do when they are doing “field theory”.

        • John Baez says:

          Personally I think the easiest relation between white noise and quantum fields is this. If you take the free Klein-Gordon field of mass m on Minkowski spacetime, and consider it on the spacelike slice t = 0, you can think of it as a random field of the classical sort, because its values at spacelike separated points commute. And, this random field is just the operator

          (-\nabla^2 + m^2)^{-1/2}

          applied to white noise.

          (Technical note: this random field is a bit ‘distributional’, except when space has dimension \le 1. In other words, you need to smear it with test functions to get random variables. This is why quantum field theory is so much harder when space has dimension > 1.)
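          Here is a toy 1-dimensional lattice version of the construction above — apply the operator in Fourier space, where it is just multiplication by (k^2 + m^2)^{-1/2} (the grid size and mass are arbitrary choices for illustration):

          import numpy as np

          n, length, m = 1024, 100.0, 1.0
          dx = length / n
          k = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # lattice wavenumbers

          # discretized white noise: variance 1/dx per site approximates
          # delta-correlated noise on the lattice
          w = np.random.default_rng(0).standard_normal(n) / np.sqrt(dx)

          # (-d^2/dx^2 + m^2)^(-1/2) acts as multiplication by (k^2 + m^2)^(-1/2)
          phi = np.fft.ifft(np.fft.fft(w) / np.sqrt(k**2 + m**2)).real
          # phi is one sample of the random field on the spatial grid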

      • Blake Stacey says:

        I’ve just started reading a review article by Schenk-Hoppé (1998) on “Random attractors – general properties, existence and applications to stochastic bifurcation theory”, which kicks off in the following fashion:

        Attractors play an important role in the study of the asymptotic behavior of dynamical systems. There is an extensive literature dealing with attractors of deterministic dynamical systems. For stochastic dynamical systems, however, comparably little progress has been made by now.

        The first salient point is that the notion of an “attractor” has to be changed. In a deterministic, unperturbed dynamical system, we can talk about a region A in phase space which evolves into itself over any length of time, \varphi(t)A = A\ \forall t, and which has a basin of attraction U within which the distance between a point and the attractor A tends to zero. In the stochastic case, the attractor becomes a family of phase-space regions A(W) dependent on the perturbation W. Convergence of points within the basin to the attractor itself is then something which happens “almost surely”.

        Two different approaches to stochastic bifurcation have been pursued so far, the “physicist’s” (P) and the “dynamical” (D) approach, see [L. Arnold et al. (1995)]. Suppose a random dynamical system which depends on a parameter is given. Then the approach called (P)-bifurcation deals with qualitative changes of stationary measures when a parameter is varied. It is particularly useful for stochastic differential equations, since densities of stationary measures are delivered by the Fokker–Planck equation. (P)-bifurcations have been studied intensively by engineers […]

        The approach called (D)-bifurcation studies the loss of stability of invariant measures and the occurrence of new invariant measures when a parameter is varied. It is tied to sample stability through the use of Lyapunov exponents (given by the multiplicative ergodic theorem) to characterize stability of invariant measures. This approach has proven to be very successful.

        This field of inquiry seems to’ve gotten started in the early 1990s, which is a little later than the field-theoretic study of percolation transitions, which goes back to the 1970s (depending on where you start counting — the early papers came out of Reggeon field theory for high-energy particle scattering phenomenology).

        • John Baez says:

          Great, Blake! This is the sort of stuff Tim and I were dreaming about…

          I don’t really understand the difference between P-bifurcation and D-bifurcation: the descriptions sound very similar. The first studies “qualitative changes in invariant measures as a parameter is varied”, while the second studies “loss of stability of invariant measures and the occurrence of new invariant measures when a parameter is varied.” The second sounds like a special case of the first.

          But never mind; we can just read the article.

        • Nathan Urban says:

          I don’t have access at the moment, but I see the abstract mentions the stochastic Duffing-van der Pol equation. We’ve seen this before in the Azimuth Project, as a model of the ice age cycles. Perhaps this paper describes some of the theoretical properties of such a physical system.

        • Nathan Urban says:

          Actually, I haven’t verified that these particular equations can be put into a van der Pol form, although vdP oscillators are a prototype of generic “Saltzman-type” equations.

        • John Baez says:

          I don’t know much about this stuff, but the ‘Duffing oscillator’ and van der Pol oscillators seem to be second-order differential equations. The Saltzman model of ice ages, and the simplified stochastic resonance model on the Azimuth Project, are first-order differential equations. So aren’t they pretty different?

          There does however seem to be a strong connection. The Duffing oscillator is this equation:

          \ddot x = - \alpha x - \beta \dot x - \delta x^3 + \gamma \cos(\omega t)

          describing a damped anharmonic oscillator with a sinusoidal driving force. It does things like this:

          The stochastic resonance model described on the Azimuth Project is the result of adding white noise to this equation:

          \dot x = x - x^3 + A \sin(t)

          So, apart from changing the cosine to a sine, which is unimportant, it’s a special case of what you get by studying the Duffing oscillator in a high damping limit, where the \ddot x term can be neglected.

          Maybe this is what you meant…
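          If anyone wants to play with this numerically, here is a minimal Euler–Maruyama sketch of the first-order model with white noise added (the forcing amplitude, noise strength and step size are toy choices, not taken from any of the papers mentioned):

          import numpy as np

          rng = np.random.default_rng(0)
          A, sigma, dt, T = 0.3, 0.4, 1e-3, 200.0        # made-up parameters
          n = int(T / dt)
          t = np.linspace(0.0, T, n)
          x = np.empty(n); x[0] = 1.0
          for i in range(n - 1):
              drift = x[i] - x[i]**3 + A * np.sin(t[i])
              x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
          # with the noise strength and forcing period tuned right, x hops between
          # the wells near -1 and +1 in sympathy with the forcing -- that is the
          # stochastic resonance effect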

        • Nathan Urban says:

          I am probably mixing up my memories of the literature. I know there are a bunch of papers discussing the Saltzman equations in the context of van der Pol oscillators. The problem is, there is no such thing as “the” Saltzman equations. He developed a variety of related models. Probably some of them reduce to the vdP equation through some approximation procedure, perhaps as you suggest.

  12. Phil Henshaw says:

    John, in your first comment at the top I noticed your intent to go back to teaching math differently next time, as you said: “but now I’m more keen on real-world examples that illustrate the big problems facing our civilization.”

    What could be challenging and intriguing is to try to model the elemental problem of “thinking systems”. The problem is how a network of independently “observing, interpreting & responding” agents interacts. Part of the interest is that they are not following rules, but sharing their own original learning and responses from different viewpoints of their dynamically changing environment.

    Those “thinking nodes” are key working parts in all kinds of natural system economies, responsible for swarm behaviors in nature generally, and not just in human economies. Whether their connections are direct through open markets or through specialized groups (like this blog, say), the interesting ways nature uses that structure to get distributed systems to work as rapidly reorganizing whole systems is a big part of the fascination. To avoid some of the indeterminacy of it all one starting point is to focus on the natural limitations of the actors or their environment, i.e. response time limits.

    It would be interesting to model conditions in which smooth flowing learning and communication becomes turbulent, for example. That happens easily when the delay or difficulty in observation leads to interpretation uncertainty, which in turn leads to response errors like indecision or panic. See my idea? Maybe if people learn to play with idealized examples of this kind of thing, it would give them a leg up for learning how to observe, in real time, active learning system behaviors they’d otherwise not think twice about.

    There was a great example of an emerging systemic distortion developing in the collective “thinking system” of the financial markets in 2007, displayed in the S&P 500 index that kind of points to the subject. http://www.synapse9.com/issues/S&PmovementsAug07-L.jpg This is just one of a wide variety of other kinds and scales of emergent learning system behaviors, growth itself being both the simplest, most common and most difficult to understand.

    The screen shot shows the six month period from March to August, displaying increasing “fishtailing” leading to turbulence and the S&P whole-market plunge that triggered global intervention. I found lots of people who indeed agreed they’d never seen anything systemic like this, and that a functioning market should never produce it. I think what was driving it was that at that time people in the financial markets were racing all over looking for where to hide their money… they were panicking.

    Another part of what’s interesting about it is how a highly exceptional systemic behavior did not stimulate the curiosity of the economists or traders I could find. All I could get was an opinion that it didn’t look right. That “failure of curiosity” hints at the deeper problem, of course.

    That time the intervention by the market system regulators was able to save the day. That was not the case the next time “all hell broke loose”, though. That people “in the know” didn’t take interest in the problem, of course, is another problem when relying on the “observing, interpreting & responding” of independent agents. They can look at something unfamiliar and “draw a blank”.

    Anyway, that might help suggest the general lines of research into such emergent phenomena I’m trying to suggest. Various pieces of it could be experimented with mathematically, and give people an idea of what to be curious about, and “if you see something say something” if they happen to notice them taking place.

  13. […] The limits of learning machines… (drawing a blank!) One of the constant threads of my work from the start has been the curious gaps between the world our mind presents as whole, and the one that nature works with, a considerably more complete deck of cards, you might say… Here’s a good note to a mathematician and physicist, John Baez, prompted by his interest in studying the math of real world problems in his comments on mathematical economics. […]

  14. alpheccar says:

    John,

    You may be interested in the research of Didier Sornette, summarized in this book:
    http://www.er.ethz.ch/books/stock_markets_us

    But more research articles are available on his web site: http://www.er.ethz.ch/publications/complex_systems/soc

  15. Miguel says:

    Evidently the software here is limiting the depth of comments to 4 (is that configurable?), so I can’t reply to Phil Henshaw’s comment in place…

    I’ve studied the ability of economists to adapt complex systems theory for 30 years. They basically are stopped dead in their tracks. The problem is that the dynamics of economies comes from the animated learning of the parts, with both people and their cultural systems acting opportunistically in exploring and inventing new stuff to do with their environments. Mathematical system theory is also stopped dead by that really obvious necessity of any scientific study of the problem. That reality is highly “equation unfriendly”. So you need to use equations for the very limited things that equations can be used for, with the benefit of getting limited real knowledge out rather than just bad hyperbole.

    There is no “science of economics” as a result.

    I couldn’t agree more. The thing is, there are entire schools of economics which have been all but expelled from mainstream economics into “sociology”, which do address these things, and tend to be more useful for understanding macroeconomics than the orthodox, mathematical, Neoclassical Economics and its Dynamic Stochastic General Equilibrium.

    I’m referring, chiefly, to American Institutional Economics. Thorstein Veblen is a prime example. I would add John K. Galbraith to the same category.

    Economics cannot be separated from the legal and social aspects of it. Once you have set the institutional framework, microeconomics can play out. But, as we know, solutions of PDEs are as much determined by the boundary conditions as by the actual equation. In the linear case the solution reduces to propagating the boundary conditions by the same boring Green’s function. Why should economics be different? The institutional boundary conditions matter. A lot. And removing constraints or refusing to touch the boundary conditions is as much of a policy intervention as anything else. Maybe economic policy should be considered a problem in stochastic control of dynamical systems.
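    To spell out the linear case (a gloss on the analogy, not Miguel’s own words): for Laplace’s equation \nabla^2 u = 0 on a domain \Omega with boundary data g, the solution is literally the boundary data propagated inward by a fixed kernel,

    u(x) = \int_{\partial \Omega} P(x, y) \, g(y) \, dS(y),

    where P is the Poisson kernel built from the normal derivative of the Green’s function. Everything interesting about the solution comes from the boundary; the equation itself only supplies the kernel.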

    So I don’t think there is a “grand unified economic theory” waiting to be discovered. I don’t think that is possible. In addition, behavioural economics has shown that the Neoclassical Economics model of human behaviour is wrong. Utility functions cannot exist, and even if they did, empirical psychology tells us that people behave “uneconomically” (heh) all the time. At the level of the firm, as long as profit maximization dominates the culture, it is possible to model “profit-maximizing firms” under certain institutional “market conditions” and obtain some useful conclusions. But the premise is cultural. If firms are led by people who believe they should be doing something other than profit maximization and viability as a going concern (for instance, looting and pillaging the capital base in the name of “executive compensation” and “shareholder value”) then the microeconomic behaviour of firms will be different.

    In Stabilizing an Unstable Economy Hyman Minsky says

    [NCE] means that, for those subsystems of the economy where conditions are apt, the market can be relied upon, particularly if the market is not relied upon for
    1) the overall stability of the economy
    2) the determination of the pace and even the direction of investment
    3) income distribution
    and 4) the determination of prices and outputs in those sectors that use large amounts of capital assets per unit of input or per worker

    • John Baez says:

      Miguel wrote:

      Evidently the software here is limiting the depth of comments to 4 (is that configurable?)

      I can increase that number, but the width of the column gets smaller with each successive layer of comments (I haven’t been able to adjust that, despite help from David Tweed), and it gets ridiculously skinny after about four.

      So, people should do what you just did, or else go back up the comment tree until you get to a comment that has that little blue “Reply” thing under it, and click on that.

      By quoting the comment you’re replying to, or using the permalink feature to link to it, it’s pretty easy to make it clear what comment one is replying to.

    • Phil Henshaw says:

      I’ve heard Galbraith say a lot of very sensible things, but the angle I find most interesting is how the many “branches” of economics and ecology and physics, which all have their own versions of reality, demonstrate so nicely that science is a social construct, a set of conventions determined by agreement, not necessity.

      You say: “So I don’t think there is a “grand unified economic theory” waiting to be discovered.” That’s actually a fairly good description of what I developed some time ago, and I find other people don’t like my need to base principles on necessity. I use the old-fashioned method of physics, where the main effort is to find answerable questions, not to find the answers someone is looking for. It’s sort of “plodding” in that way, but as we all know, once you have exceptionally solid foundations for a scientific construct, some rather high degrees of leverage are possible if done right too.

      I posted something on it this morning. We don’t yet have any theory for why energy conservation works as an explanatory principle. If you accept it, and ask the question, how do processes of energy use begin and end given that constraint, an enormous bucket load of brand new and very useful questions about complex systems as organizational processes spills out.

    • Giampiero Campa says:

      Maybe economic policy should be considered a problem in stochastic control of dynamical systems.

      I agree, and I think that in some sense this happens already.

      In addition, behavioural economics has shown that the Neoclassical Economics model of human behaviour is wrong.

      True. However, that does not necessarily mean that we have to throw everything away. There are also many findings that tell us that people do behave almost optimally when operating within their confidence bounds (that is, when they sort of know what they are doing). And if you see utility functions just as a way of saying that people have preferences … then it might not be an unreasonable assumption, provided that you keep your expectations about your model in check.

      I am still relatively new to economics, so my opinion doesn’t “count” (for the record, I am a control engineer; I have studied micro and then macro economics over the last couple of years, and I am just starting to build some simple models on my own), but I think that perhaps improvements could be made simply by:

      1) considering the market dynamics explicitly
      2) avoiding unrealistic classical assumptions, like having a vertical aggregate supply curve (by the way, why can’t anyone find out the actual slope of that curve?).
      3) using common sets of agreed-upon data to refine, validate and compare models.

      Then again, perhaps this stuff is out there somewhere and I haven’t found the right book yet …

  16. Phil Henshaw says:

    The starting question John offered was, essentially, this: can we use phase transition equations to describe economies?

    But here’s a basic question: in what sense do economists model the economy using the Clausius-Clapeyron equation? Is the idea that we can take this equation and use it to model economic equilibrium, somehow? How, exactly?

    Sometimes you need to step back from a question to look at the context before recognizing what the real problem is. The problem is that fully a century of ever more sophisticated mathematics has completely failed to provide people a way to keep the complex systems of the economy out of trouble. It could be a case of the typical sort that scientists confront frequently: the failure of a wonderful language for one subject when trying to apply it to a different subject. Will depicting systems as controlled numeric relationships between categories work for economies?

    Even raising that possibility, that a fine old tool may not fit a new kind of problem, is problematic. Scientists become emotionally attached to their tools and the great success they achieved in other areas. Ptolemaic science was great for describing points of light swirling in the sky, but didn’t work so well for treating them as planetary and stellar objects. The limitations of classical physics required particles to lose their materiality, and be described as purely mathematical “objects”, though that failure of the old paradigm also achieved much better descriptions of natural materials.

    We’re possibly at another one of those “phase changes” in knowledge and in the languages of explanation needed by science. The numerous attempts and stubborn inadequacy of using equations for complex systems could be like the efforts of Ptolemaic scientists creating more and more “epicycles” to describe planetary dots of light moving in the sky. We might be just balking at the prospect of giving up our “perfect circles” as the anchoring idea of science. Nature’s complex systems are just not well described as controlled relationships between categories. It was a fabulous idea for planetary motion, and then lots of other things. To solve other kinds of explanatory problems, though, you first need to find an explanatory language that fits them.

    Clinging to the very successful explanatory paradigm for nature’s fixed relationships, to the point of relying on it for nature’s rapidly changing organizations and relationships, has left a long trail of evidence of causing misleading conclusions. It’s actually been stalling the most important discoveries about how our world really works for over a century. It was Jevons (1886) who both wrote brilliantly on the scientific method and observed that making machines more efficient resulted in the complex systems using machines consuming more resources. His work went on to be used as the basis for modern economics, in fact, but filtered to remove that and other of his very best simple observations. Those who followed seemed to drop the insights that didn’t fit their equations.

    The point is not to criticize. The point is about how we can recover our balance if we just use the oldest of all scientific tools, direct observation. Having important direct observations culturally discarded and suppressed for over a century, because they “don’t fit”, steers whole societies off track. Recognizing in simple direct observations that “don’t fit” an opportunity and a need for a new form of explanation is the better response. The most obvious thing about economies is that they have learning parts. They are simply not deterministic systems and will never be usefully represented by a tool for describing deterministic systems.

    If you “do what works”, there certainly is a range of useful application for the “wrong explanation” for the “right question” sometimes. That is never going to turn learning systems into deterministic ones… You might just as well “do what the ancients always ended up doing”, learn to use tools for the tasks that fit, and don’t end up getting trapped in describing a butterfly as an ink blot. Things of nature are what they are, and it’s OUR job to form our tools to nature’s forms, not the reverse.

    Why humans keep making valiant efforts to alter nature to conform to our tools and images is indeed a bit of a mystery. In history we have done that over and over, though, making it one of our main humorous/tragic learning methods. We sometimes do it with heart. We should at least be good enough observers of ourselves and our history to notice the hazards of it, though. It propagates grave errors, and that’s a true vulnerability for both ourselves and for our natural world, which we seem not to be at all respectful of.

    Neither disturbing insights nor disturbing misunderstandings are easily swept away, because they have to do with mismatches between the mental tools we bring to our life problems. That’s another place where observation comes in, a way of “casting about” for elementally simple and completely neutral and private opinion, to help us recognize the latest “screwball ideas” we come up with.

    The simple observation that improving efficiency accompanies accelerating resource depletion for our economy could not be more clear in modern data. Socially and scientifically the broad consensus worldwide is to rely on the opposite for continuing our prosperity. You’d think scientists would just say, “Oops,… let me look for the mistake”. That simple response to a simple observation does turn out to get more involved, mainly in opening a whole series of new doors with interesting things to explore behind them.

  17. Ivan Sutoris says:

    The idea that the current crisis has in some sense “refuted” mainstream/neoclassical economics seems quite common in the discussion here. If I may add my two cents and perhaps a different perspective, I think this is a pretty strong, yet rather vague, claim (for the record, I am a grad student in economics with no physics background, though I majored in applied math).

    The first problem is that neoclassical economics is not some single unified mathematical theory, like theories in physics often are. Instead, it is more a set of assumptions and tools, which are used to construct models tailored to specific questions of interest (of which many have nothing to do with macroeconomics). The crisis can refute a particular macroeconomic model, but to claim that it invalidates the whole paradigm is of course something much stronger – you need to show that neoclassical economics is not able to explain the crisis in principle. And/or you may propose a different paradigm, which explains the crisis and subsumes other results achieved by the old approach. Neither of these has been convincingly demonstrated, in my opinion. Some people have been pushing for models based on complexity theory, agent-based simulations, econophysics, etc., but they haven’t caught on so far. From my humble understanding (I haven’t really studied these topics in detail), some of the reasons for that include: authors sometimes ignore or misrepresent mainstream economics, the models are driven by rather arbitrary assumptions, and they are often even more unrealistic than the models they criticize.

    The second problem is that issues in economics are often more subtle than outsiders think. For example, most of the models in modern macroeconomics are indeed dynamic stochastic general equilibrium (DSGE) models. But the concept of equilibrium here is not a static one: the economy in a DSGE model will fluctuate over time. It’s true that the fluctuations are caused by exogenous shocks, but given that our models are only (crude) approximations of the real economy, this is natural. “Equilibrium” simply means that agents in the model behave optimally, given the outcome, and the outcome is consistent with how agents behave. This is a pretty wide definition, and in my opinion there is nothing that prevents us from analyzing financial crises, such as the current one, in this framework, even if we don’t have a good theory yet – many economists are currently working on the topic. This class of models also does not imply that there is no room for government policy – many models explicitly include market imperfections, and are used to analyze proper monetary and (more recently) fiscal policy, or government regulations. While many people seem to think that economics is all about ideology, in my experience this is simply not true.

    Anyway, my point is not that current economic theory is flawless, but that criticism should be informed, specific and, if possible, constructive. Blanket statements are not helping anyone.

    • Miguel says:

      There is a very comprehensive (and constructive) critique in Hyman Minsky’s Stabilizing an Unstable Economy, just to name one. There’s also Steve Keen’s Debunking Economics which

      details the many critiques which have been made of economic theory by economists

      and his own dynamical models of the economy.

      Mainstream economics hides most of its glaring flaws and policy failures under the carpet of “external shocks”, “exogenous variables” and “nobody could have foreseen”. It is basically a theory of capitalism without capital markets, with all of the financial sector bundled into a notional single bank (which obviates the macroeconomic effects of capital markets and interbank credit) and so on.

      • Ivan Sutoris says:

        Well, I’m not very familiar with Minsky’s work, but I believe that, for example, a recent paper by Krugman and Eggertsson tries to capture some of those ideas in a DSGE model. And while it’s true that macroeconomists haven’t paid much attention to the financial sector in the past, that is certainly changing (there is a nice interview with Gary Gorton about the crisis in the financial sector, going much into institutional detail – published at the Minneapolis Fed, one of the bastions of freshwater (DSGE) macro).

        Regarding Keen, I have trouble taking seriously someone who insists that economists make glaringly obvious mistakes, yet doesn’t understand the concept of Nash equilibrium.

    • Giampiero Campa says:

      Ivan wrote:

      … the concept of equilibrium here is not a static one – economy in DSGE model will fluctuate over time. It’s true that the fluctuations are caused by exogenous shocks …

      In other words, it is assumed that markets converge to equilibrium on a timescale that is much faster than that of the external shocks.

      I haven’t found any discussion of this assumption, e.g. why it is reasonable. I have mentioned this to a few economists a couple of times, but I have walked away with the feeling that the main reason behind it is that “this is just the way things are done” …

      Does anyone have any more insight about this?

      • Ivan Sutoris says:

        Giampiero Campa wrote:

        In other words, it is assumed that markets converge to equilibrium on a timescale that is much faster than that of the external shocks.

        I haven’t found any discussion of this assumption, e.g. why it is reasonable. I have mentioned this to a few economists a couple of times, but I have walked away with the feeling that the main reason behind it is that “this is just the way things are done” …

        Does anyone have any more insight about this?

        I guess you’re right – as I understand it (which may not be correct), it is related to the problem of tatonnement dynamics I mentioned above – there are many ways to be out of equilibrium, and it’s hard to say anything specific. So yes, economists simply assume that the system is in equilibrium, because the formal concept of equilibrium nevertheless captures the idea that agents try to behave optimally under given constraints, which seems to be a reasonable concept.

        In intertemporal models (like DSGE models), the equilibrium is often defined in terms of policy functions: how people behave given the current state of the economy. So the idea is that people choose their policy function optimally in advance (all the uncertainty is quantifiable with probability distributions, so you can use stochastic dynamic programming to compute those optimal policies), and then they stick to it in all time periods. There is some research on learning, i.e. what happens if agents do not know some parameters of the model but learn about them over time, and how that can converge to the standard rational expectations equilibrium, although I’m not sure if that is what you’re looking for.
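        To illustrate the “policy function” idea numerically, here is a toy value iteration for a tiny consumption-savings problem (the grid, utility function and shock values are made-up choices for the sake of the sketch, not anything from the comment above):

        import numpy as np

        beta, R = 0.95, 1.02                       # discount factor, gross return
        grid = np.linspace(0.1, 10.0, 120)         # cash-on-hand grid
        shocks = np.array([0.5, 1.5])              # i.i.d. income, each with prob 1/2

        V = np.zeros_like(grid)
        policy = np.zeros_like(grid)
        for _ in range(400):                       # iterate the Bellman operator
            V_new = np.empty_like(V)
            for i, w in enumerate(grid):
                s = np.concatenate(([0.0], grid[grid < w]))    # feasible savings
                c = w - s                                      # implied consumption
                # expected continuation value (off-grid points are clamped by interp)
                ev = 0.5 * (np.interp(R * s + shocks[0], grid, V)
                            + np.interp(R * s + shocks[1], grid, V))
                vals = np.log(c) + beta * ev
                j = np.argmax(vals)
                V_new[i], policy[i] = vals[j], s[j]
            V = V_new
        # policy[i] is the optimal savings rule: the agent's "policy function"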

        • Giampiero Campa says:

          Thanks a lot for the link. That paper on learning looks interesting (although the only dynamics is probably the one describing the evolution of expectations). But I am going to read it.

          By the way, I wasn’t aware that “tatonnement” was the magic word here; indeed, googling for tatonnement dynamics brings up interesting things.

        • DavidTweed says:

          Ivan wrote:

          So yes, economists simply assume that the system is in equilibrium, because the formal concept of equilibrium nevertheless captures the idea that agents try to behave optimally under given constraints, which seems to be a reasonable concept.

          It’s statements like this that give me great pause. You don’t hear “agents try to behave optimally under given constraints, which seems like a good approximation to what agents observed in the wild actually do”. I often wonder how much the conceptions in economics are influenced by a typical economist’s view of “that’s what I’d do”, neglecting the fact that they’ve had x years of training shaping their viewpoint, particularly with regard to mathematizing things.

          Incidentally, although I’m slightly left-wing in my views, I also think “most people behave badly most of the time, deal with it”. So this isn’t a leftism-vs-capitalism critique, it’s a “lack of empirical input” critique.

    • Phil Henshaw says:

      Ivan, you’re right to say that to challenge neoclassical economics someone would:

      need to show that neoclassical economics is not able to explain the crisis in principle

      That’s hard to do, though, when the logic of economics is purely a matter of agreement, and has no basis in nature that natural science might apply to. The usual scientific tests, then, are not possible, since the theory offers no means of being tested.

      The theory describes the economy doing numerous things that no physical process in nature can possibly perform, for example. Nobody cares, though, since it appears to be an ideology quite unconnected to physical science. It doesn’t need such a connection to serve the communities that use it, so why would they care? Take the rather simple matter that money saved in the past is thought to be owed ever-multiplying real-valued earnings over time in the future. There are dozens of other little problems like that too.

      It’s really hard to get anyone to think it matters, though, if the language to be tested does not recognize nature as a physical process. That keeps you from connecting it to natural science. I could provide a basis for connecting them quite easily. You simply use the average energy consumed per $1 of GDP as a ratio to calibrate their connection.

      It turns out to work remarkably well, in fact, for lots of very useful things. It also opens up a wide variety of fascinating new questions. I can’t get any economists to respond in any way except to say, essentially, “Oh, we don’t do it that way”….

  18. William Stein held a summer course in 2008, for finance quants, which also tried to show that their efforts are essentially useless. He used Mandelbrot’s (pop sci) book about the (mis)behaviour of markets and also added to it. In the book he mostly bashes the econometrics community. I read the book then and loosely read through the course notes while trying to learn about time series support in Sage.

  19. John Baez says:

    Ivan Sutoris wrote:

    John Baez wrote:

    I realized that economics is inevitably warped by a powerful force field: its role in enhancing the wealth and power of the already wealthy and powerful.

    To the extent that economics is about controversial issues in the society, there will always be some ideology, especially when discussing policy issues (just like there is ideology in discussing/denying climate change).

    Alas, I’m not talking about situations where economists recommend certain policies for ideological reasons. I think the whole subject, from the foundations on up, is warped by the crucial role of economic theory as a tool for gaining and maintaining power and wealth.

    For example, I think the idea of people as rational agents trying to maximize something is an oversimplification with quite dangerous effects. It would take quite a while to argue this convincingly against a determined opponent, so I won’t even try. But to give a taste of what I’m getting at, here’s something I wrote in my economics diary back in 2003:

    • Amartya Sen, Rationality and Freedom, The Belknap Press, Cambridge, Massachusetts, 2002.

    This is a wonderful book that would take a long time to summarize. Amartya Sen won the so-called Nobel Prize in economics, and it’s easy to see why. I’ll just mention one thing here: his criticism of “rational choice theory”, for example his attack on such underlying assumptions as these:

    Self-centered welfare: A person’s welfare depends only on her own consumption and other features of the richness of her life (without any sympathy or antipathy towards others, and without any procedural concern).

    Self-welfare goal: A person’s only goal is to maximize her own welfare.

    Self-good choice: A person’s choices must be based entirely on the pursuit of her own goals.

    A reflective person need only state these assumptions to realize that they are either false or, by clever definition of terms, true but only vacuously. However, many economic theories are based on these assumptions, treating them as both true and non-vacuous. As Sen points out, this has the effect of treating people as “rational fools” who are unable to sympathize with others, unable to deliberately choose not to maximize their welfare, and unable to cooperate in pursuing someone else’s goals. Policies and ideologies based on these assumptions have a debasing effect on our society: they tend to actually make people into rational fools. People are always looking for a framework to justify their actions – a religion, one might say – and rational choice theory based on the above assumptions is one of the particularly pernicious religions of our time.

    Ivan wrote:

    However, I think that among academic economists, you would find much less ideology than you seem to think.

    I don’t really think most economists are ‘ideological’. And I don’t think they’re naughty people. I admire a lot of them. I think it’s a near-inevitable quality of economics that it become warped by oversimplifications. First, it’s a very complicated subject that is somewhat mathematical in nature, and simplifications are extremely attractive to people who like mathematics. Second, wealthy and powerful people are very adept at promoting simplified models as a way of further enhancing their wealth and power.

    And on the other hand, it’s not like the critics are ideology-free either – many simply dislike mainstream economics because of their leftist and anti-free-market convictions.

    Indeed, that’s just the flip side of the same problem! You’ve got the people who work within the existing assumptions — and opposing them, not necessarily people who value the truth more, but often people who want to change the power structure. This makes me even less optimistic.

    Aside from some of these guys being on quite different sides of the political spectrum (like Friedman / Stiglitz), they all seem to say that there is too much math and formal modeling in economics (I don’t think they’re right, but that’s beside the point). But most of the discussion here is about applying even more sophisticated mathematics (nonlinear dynamics, complexity, chaos theory etc.) – so which one is it then? You can’t have it both ways :)

    I don’t think these guys are mainly saying there’s too much math in economics; I think they’re saying that there’s too much math devoted to carefully analyzing the consequences of certain assumptions that happen, alas, to be severely oversimplified.

    Since I’m a mathematician, I personally enjoy the idea of using fancier math to understand economics better. But if you want to get the most bang for your buck, it’s probably more important to start from better assumptions. A combination of better assumptions and fancier math would make me happiest of all! But I’m not really expecting that. I think what’s most likely in the short term is that we’ll see fancier math applied to economics, without a drastic rethink of the basic assumptions.

    I should add that as a mathematical physicist, I think it’s really useful and fun to study oversimplified models — “toy models”, as we call them. The problem starts only when we use these oversimplified models to make decisions that affect the real world.

    • streamfortyseven says:

      Well, when you come right down to it, economies depend on the production of food. Not enough food, prices go up. You can store food for a while, but eventually it goes bad, and you have to produce more. People have less to spend for other goods if food prices go up, and starvation imposes costs as well.

      Food production is dependent in large part on the weather, and it’s hard to predict the weather ten days in advance, much less over a 90-day growing season. There’s the famous “butterfly effect” in chaos theory which comes in here…

      So it seems to me that in order to really do any mathematical modelling of economics, other than very limited systems which have constraints applied so that they could not possibly exist in nature, you have to be able to predict the weather over a growing season, and it’s just not something that can be modelled, at all.

      • John Baez says:

        There could imaginably be a theory of economics that claimed to predict what people will do given the weather. While this theory could not predict the future with certainty (since we can’t predict the weather), it would still be experimentally testable, and it would count as a good scientific theory if it passed these tests.

        So, I think it makes sense to separate the question of ‘is there a good theory of economics’ from the question of ‘can we predict the weather’.

      • Phil Henshaw says:

        Streamfortyseven, my approach to modeling economies is entirely different from what most people assume: more of a natural science approach to a system that I can clearly see, but that in most ways I know too little about to model.

        What I do is look for regular proportional change. That offers a simple way to identify nature doing something exceedingly complex in a predictable way, and it also flags an irreversible process of changing scale and complexity for the system producing it. Then I go step by step in locating the growth or decay process, and from there try to identify which of the feedbacks will get pushed beyond the limit of its responsiveness first.

        Growth and decay processes generally have momentum, but as they change scale, the behavior of some environmental or internal part is pushed beyond its limits of response, and the process cannot continue unchanged after that. It’s not inventing an equation so I can ignore the details of the system; it’s identifying equations to help me probe the complex organization of the system and how it is and will be changing.

        • WebHubTel says:

          I use approximately the same approach. I first look for fundamental growth laws, such as a learning curve for productivity, and then add the secret sauce, the idea that the growth laws will stochastically disperse to the maximum amount subject to the constraints. You have to do that last step because the data amounts to a scatter plot when you look closely.

      • Roger Witte says:

        I more or less dismissed this on first read but this article on BBC news reminded me that in the long term, streamfortyseven is correct:

        http://www.bbc.co.uk/news/science-environment-12186245

    • Ivan Sutoris says:

      John Baez wrote:

      Alas, I’m not talking about situations where economists recommend certain policies for ideological reasons. I think the whole subject, from the foundations on up, is warped by the crucial role of economic theory as a tool for gaining and maintaining power and wealth.

      For example, I think the idea of people as rational agents trying to maximize something is an oversimplification with quite dangerous effects. It would take quite a while to argue this convincingly against a determined opponent, so I won’t even try. But to give a taste of what I’m getting at, here’s something I wrote in my economics diary back in 2003:

      Ah, now I see your point. That is indeed a common criticism. The idea of a rational and selfish agent maximizing utility is of course a simplification. But you have to make some simplifications, otherwise you wouldn’t get anywhere, and the question is whether this assumption is a good approximation for the problems studied in economics. It’s not the goal of economics to explain all human behavior, but only behavior associated with allocating scarce resources (things like production, markets, etc.), and in such situations, self-interest is undeniably an important element in making decisions.

      And even if people do care about other things as well, a model with rational self-interested agents can serve as a useful benchmark, which helps us to evaluate in what specific ways people deviate from it. That is basically what the new field of behavioral economics is about, with which I’m not much familiar, but for example Ernst Fehr has done work on social preferences which incorporate fairness, reciprocity, etc. (and that stuff has been published in top journals, so it’s not like it’s some fringe subject).

      Sen’s point about institutions influencing human behavior, like some sort of feedback effect, is interesting, although I’m not sure if I understand exactly what he has in mind (I guess I should read the book). But anyway, designing institutions is even harder than understanding behavior, since it necessarily involves value judgments, and those must be argued on philosophical, not scientific, grounds.

      Since I’m a mathematician, I personally enjoy the idea of using fancier math to understand economics better. But if you want to get the most bang for your buck, it’s probably more important to start from better assumptions. A combination of better assumptions and fancier math would make me happiest of all! But I’m not really expecting that. I think what’s most likely in the short term is that we’ll see fancier math applied to economics, without a drastic rethink of the basic assumptions.

      In a way, I agree – I don’t think fancy math just for the sake of math will help much – after all, there is a lot of sophisticated mathematics used in economics already. Maybe mathematicians, physicists, etc. can bring new ideas and tools into economics that will enhance our understanding, but to be successful, they need to acknowledge the work done so far and engage in a discussion with mainstream economists. In reality, the opposite is sometimes true – for example, most of the work in econophysics is published in physics journals, uses physics terminology, and ignores the existing economics literature, or is outright hostile to it. Not surprisingly, then, the effect of econophysics on mainstream economics is pretty close to zero.

      • WebHubTel says:

        I also observe the tension between econophysicists and mainstream economists. The only good thing to come out of that situation is that the field becomes wide open. If you find something interesting in econophysics, don’t be afraid to dive in — the economists themselves aren’t flocking in that direction.

      • John Baez says:

        Ivan wrote:

        The idea of rational and selfish agent maximizing utility is of course a simplification. But you have to make some simplifications, otherwise you wouldn’t get anywhere…

        I’m familiar with this argument from physics: “you have to make simplifications, otherwise you wouldn’t get anywhere”.

        If we were just starting to study nuclear physics, it would make sense to consider a crude liquid-drop model of the nucleus for this reason. We have to start somewhere. But by the time we’re spending millions of dollars on a Manhattan Project trying to build a working atomic bomb, we can’t afford oversimplifications: at least, not when we reach the point of trying to design a bomb that actually works!

        Similarly, if we’re in some preliminary phase of purely theoretical studies of some economic issue, I can see the point in working with models based on oversimplified notions about how humans make decisions. But when we start using ideas about economics or finance to make business or policy decisions that involve billions of dollars and affect hundreds of millions of people, we can’t afford oversimplifications.

        So yes: we need a lot more ‘behavioral economics’, which could be called ‘economics based on observation of actual humans’.

        Sen’s point about institutions influencing human behavior, like some sort of feedback effect, is interesting, although I’m not sure if I understand exactly what he has in mind (I guess I should read the book).

        It’s a great book — part of why he won the Nobel prize.

        Speaking of feedback: people have studied whether economists are less altruistic than other people, and whether this difference (if it exists) arises because they study theories of rational decision-making that downplay altruism, or because less altruistic people are more drawn to current ideas in economics. Here’s one recent study:

        • Lauren Gross, Altruism, Fairness and Social Intelligence: Are Economists Different?

        She writes:

        Economics distinguishes itself from other social sciences in generally assuming that individuals possess somewhat stable, well-defined preferences from which they base rational choices. In addition, many economic models are built on the belief that individuals are solely motivated by self-interest. Indeed, the relatively small role of fairness considerations in standard economic theory remains one of the most striking contrasts between economics and other social science disciplines.

        Why do such fundamentally different views of human nature exist between disciplines? One potential explanation is that academic economists themselves are different in the sense that they behave differently in situations involving social cooperation. If this can be shown to be the case, a natural follow-up question is whether this distinction is due to self-selection or to training.

        This paper employs a novel experimental design to study these two questions, testing differences between economists and non-economists in the Ultimatum and Dictator Games. Having subjects play both games is the key innovation of this design, allowing one to separate altruism and fairness from strategy.

        This is not the first paper to study these questions. Studies by Marwell and Ames (1981) found economists more prone to free-ride, less inclined to donate to charities and general public funds, and more likely to defect in prisoner’s dilemma experiments. Their first experiment called for private contributions to public goods. Subjects were given equal initial endowments of money, which they then allocated into “private” and “public” accounts. Money deposited in the private account was returned to the subject dollar-for-dollar, while money deposited in the public account was pooled, multiplied by a factor greater than one, and distributed equally among all subjects. In this design, the socially optimal allocation is for all subjects to put their entire endowments in the public account, while the individually optimal allocation is to put everything into one’s private account. On average, economics students contributed 20% to the public account, while all other subjects contributed substantially more at 49%. The second experiment of Marwell and Ames (1981) consisted of a one-shot prisoner’s dilemma game and similarly revealed economists to be more self-interested. Out of a total of 267 games (534 choices between cooperation and defection), the defection rate was 60.4% for economics majors versus 38.8% for non-economics majors.

        While the majority of experiments find economists to be more self-interested, not all do. Results from the Ultimatum Game run at Hendrix College by Stanley and Tran (1998) indicated that economics majors are actually less motivated by self-interest than are other students. In their lost-envelope experiment, Yezer, Goldfarb and Poppen (1996) found cash-filled envelopes marginally more likely to be returned when left in economics (v. non-economics) classrooms.

        Aside from the question of whether economists are different, few studies have investigated why, differentiating between selection and learning. One exception is Carter and Irons (1991). In their study of behavior in the Ultimatum Game, the authors first examined differences between economists and non-economists and then discriminated between selection and learning hypotheses. The authors recruited students from four general groups: 1) freshman non-economists, 2) freshman economists, 3) senior non-economics majors, and 4) senior economics majors. Overall, economists offered on average $3.85, versus $4.66 for non-economists. Economists also demonstrated a lesser concern for fairness than non-economics students: on average accepting $1.70, versus $2.44 for non-economists. Carter and Irons then employed regression analysis first to confirm the found difference between economists and non-economists was significant and second, to distinguish between selection and learning hypotheses. The authors used the coefficient on the economist dummy variable to reflect the effect of self-selection and the coefficient on the senior economist dummy variable to reflect the effect of learning. Carter and Irons found their data failed to support the learning hypothesis, summarizing that economists are different, but are already so when they begin their area of study and that economic study does not augment this initial difference.

        The conclusions of her study:

        In conclusion, this study found students with economics training to offer less in both the Dictator and Ultimatum Games and to hold lower rejection rates in the Ultimatum Game. Thus, it may be argued that in offering and accepting less, individuals with economics training hold a lesser concern for fairness (or a lower notion of what is fair). Additionally, economics training seems to lower both offers and acceptance thresholds and thus conceptions of fairness overall. Lastly, as subjects were informed that they were randomly matched to another student, individuals should have assumed that they were most likely not paired with an economics major. It follows that non-economics majors exhibited a greater degree of “social intelligence” or “rationality” in being far more likely to offer half and thus maximize expected value.

        • Phil Henshaw says:

          John, I’m surprised you didn’t reach the conclusion that because people make decisions inventively, in response to largely unpredictable environmental choices. That makes the first step to any reliable kind of modeling to be throwing out the assumption that you could predict people’s choices.

          Economies are not deterministic systems, so you really need another form of modeling to be able to say anything about them with true confidence. I think lots of people have spent their lives on this subject trying to approximately predict inherently unpredictable behaviors, and that’s been one source of the mistakes.

          My way of approaching it is the traditional starting point of natural science generally. When what’s most obvious is that you don’t know much and you’re looking for where to start, you look around for something nature is doing simply enough to let you say something with high confidence. Do you see any of those in how economies work?

        • John Baez says:

          Phil Henshaw:

          John, I’m surprised you didn’t reach the conclusion that because people make decisions inventively, in response to largely unpredictable environmental choices.

          First, I would never conclude that, because it’s not a sentence. Second, please don’t assume that just because someone doesn’t say something, it means they haven’t thought about it.

          There are a lot of limitations on our ability to model human decision-making in terms of ‘utility maximization’.

          First of all, any course of action whatsoever maximizes some function — in fact, infinitely many — so one can always retroactively concoct a function, call it ‘utility’, and correctly state that the course of action people took maximized that function.
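          For concreteness, one trivial such construction: write a^* for the observed course of action, and define U(a^*) = 1 and U(a) = 0 for every other a. Then a^* maximizes U, and adding to U any function that vanishes at a^* and is nonpositive elsewhere gives infinitely many more such ‘utilities’.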

          So, it’s irrelevant to ask whether people behave in a way that maximizes something. They always do, but that’s not really interesting.

          It’s better to ask whether we can find a utility maximization model that we can use to make accurate predictions often enough to be a statistically significant improvement over wild guessing.

          There are certainly situations where this is the case. But they are limited in scope.

          There are lots of famous limitations of utility maximization models due to situations where people are bad at maximizing something they want to maximize, or where people’s decision-making strategy isn’t easily described in terms of maximizing something.

          Relevant buzzwords here include bounded rationality, satisficing, and behavioral economics.

          But it sounds like you’re talking about another sort of limitation, namely:

          In situations where someone is doing a good job of maximizing something, but the maximization requires a lot of inventiveness, a maximization model is only useful if the modeller is better than the person being modelled at maximizing this quantity!

          For example: I can predict what a good tic-tac-toe player will do in any situation (or at least, the set of optimal choices). But I can’t predict what Kasparov will do when playing chess.
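          Just to make the tic-tac-toe half concrete, here’s a minimal brute-force minimax sketch in Python (purely illustrative):

          # Brute-force minimax for tic-tac-toe: a board is a 9-character string
          # of 'X', 'O' and ' ', with cells numbered 0-8.
          LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                   (0, 3, 6), (1, 4, 7), (2, 5, 8),
                   (0, 4, 8), (2, 4, 6)]

          def winner(board):
              for a, b, c in LINES:
                  if board[a] != ' ' and board[a] == board[b] == board[c]:
                      return board[a]
              return None

          def minimax(board, player):
              """Return (score, best move) with `player` to move:
              +1 if X wins, -1 if O wins, 0 for a draw, under perfect play."""
              w = winner(board)
              if w is not None:
                  return (1 if w == 'X' else -1), None
              moves = [i for i, cell in enumerate(board) if cell == ' ']
              if not moves:
                  return 0, None                     # full board, no winner: a draw
              other = 'O' if player == 'X' else 'X'
              best = None
              for m in moves:
                  score, _ = minimax(board[:m] + player + board[m + 1:], other)
                  if (best is None
                          or (player == 'X' and score > best[0])
                          or (player == 'O' and score < best[0])):
                      best = (score, m)
              return best

          print(minimax(' ' * 9, 'X'))               # (0, 0): perfect play is a draw

          Chess admits no such exhaustive search, which is exactly the asymmetry between the tic-tac-toe player and Kasparov.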

          These are simple examples, chosen so that the concept of ‘winning strategy’ is agreed on by both the modeller and the person being modelled.

          In real life, it’s rarely so simple. We’re not just dealing with inventive strategies for winning a predetermined game. We’re dealing with people who invent new games with new rules.

        • Phil Henshaw says:

          John, well, starting with an incomplete sentence, with the following sentence being the missing clause… does expose my intent, if somewhat awkwardly… ;-)

          I wrote:

          That makes the first step to any reliable kind of modeling to be throwing out the assumption that you could predict people’s choices.

          The core problem is directly with the idea of ‘utility maximization’. I grant one can assume that “people must follow a function” and “so one can always retroactively concoct a function, call it ‘utility’”, but it would be rather overstating it to say “that the course of action people took maximized that function”. Making up a function without an identified process to associate with it really says rather little, doesn’t it?

          The bigger problem is that physical systems like economies don’t work by any process like cognitive function seeking… (what you’re assuming). If that’s what nature did everywhere, perhaps that’s what people would do. But nature does not generally operate by function-seeking processes; it operates by accumulative development processes instead. They’re an entirely different creature, for which there are models only in quite special cases.

          Those special cases are where accumulative development has settled into a regular pattern that has become integrated into the local environment. In that virtual “steady state” pattern, an economy of choices *will appear to* seek a utility function. That is not an unimportant set of special cases, but it leaves out the main interesting cases.

          How people actually operate and make choices is by exploratory search and accumulative learning, following paths of discovery in how to use their complex environment. They also compare notes with each other. As successful discoveries uncover new ways to connect complementary differences (that wheels work better with axles, or that there’s a great flower shop near work), expanding on further related opportunities becomes swarm behavior, and new systems form as people learn from each other. To get a handle on how those are happening in a particular economy you need to consider people, and the environment itself, as engaged in learning, *not following*. I think, except where there’s nothing for the system to learn… a model that treats them as following will always give you wrong answers! ;-)

          So that’s the reason for needing a new paradigm.

        • John Baez says:

          Phil wrote:

          The core problem is directly with the idea of ‘utility maximization’.

          Yes, I agree completely. This is what I wrote a while back:

          I think the idea of people as rational agents trying to maximize something is an oversimplification with quite dangerous effects.

          You wrote:

          The bigger problem is that physical systems like economies don’t work by any process like cognitive function seeking… (what you’re assuming).

          What’s ‘cognitive function seeking’?

          By the way, I’ll instantly become a lot less grumpy as soon as you stop telling me what I’m assuming, what conclusions I’ve reached, etc. Your guesses are not very accurate, and I’d much rather hear what you think than be told what I think.

          I was deliberately trying to avoid making any general claims about how humans operate. For example, when I wrote:

          In situations where someone is doing a good job of maximizing something, but the maximization requires a lot of inventiveness, a maximization model is only useful if the modeller is better than the person being modelled at maximizing this quantity!

          I didn’t make any claim about how often people are trying to maximize something. I do believe it occurs sometimes. But I never made any claim that this sort of situation was common in life, or a good guide to understanding human behavior in general!

          Nor did I make a claim to the contrary. I simply didn’t want to propound a theory of human behavior — for reasons which I’d rather you wouldn’t try to guess (at least, not out loud).

          I grant one can assume that “people must follow a function” and “so one can always retroactively concoct a function, call it ‘utility’”, but it would be rather overstating it to say “that the course of action people took maximized that function”. Making up a function without an identified process to associate with it really says rather little, doesn’t it?

          Right, that was exactly my point. I said:

          … that’s not really interesting.

          As for the rest of your comment, I have no quarrel with it!

        • Ivan Sutoris says:

          (seems to me that the debate is starting to go in circles, but still I have one comment on utility)

          Maximizing a utility function is equivalent to choosing the most preferred outcome. Formally, a utility function U : X \to \mathbb{R} is just a representation of a binary preference relation \succeq on a set of outcomes X, such that U(x) \geq U(y) \Leftrightarrow x \succeq y. By rationality we usually mean that the preference relation is complete (defined for all pairs) and transitive, and that people choose the most preferred available outcome.
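          On a finite outcome set this representation can even be computed directly; here’s a minimal sketch in Python (the names and the toy ranking are mine, purely illustrative):

          from itertools import product

          def is_rational(outcomes, weakly_prefers):
              """Check that a preference relation is complete and transitive."""
              complete = all(weakly_prefers(x, y) or weakly_prefers(y, x)
                             for x, y in product(outcomes, repeat=2))
              transitive = all(weakly_prefers(x, z)
                               for x, y, z in product(outcomes, repeat=3)
                               if weakly_prefers(x, y) and weakly_prefers(y, z))
              return complete and transitive

          def utility(outcomes, weakly_prefers):
              """U(x) = number of outcomes weakly below x; then U(x) >= U(y) iff x >= y."""
              return {x: sum(weakly_prefers(x, y) for y in outcomes) for x in outcomes}

          # A toy ranking: tea and coffee tie, both beat water.
          outcomes = ['tea', 'coffee', 'water']
          rank = {'tea': 2, 'coffee': 2, 'water': 1}
          prefers = lambda x, y: rank[x] >= rank[y]
          assert is_rational(outcomes, prefers)
          print(utility(outcomes, prefers))   # {'tea': 3, 'coffee': 3, 'water': 1}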

          Is it reasonable to assume that people have well-defined preferences and always try to choose the most preferred outcome, given the constraints? It is certainly a simplification and may be unrealistic in some situations. And obviously, in the context of specific models, it is always possible to criticize the specific form of preferences that the modeler assumes. As I have tried to argue elsewhere, economists do not ignore such situations, and investigating deviations from utility maximization is a current research topic.

          However, the concept of preferences / utility maximization tries to capture the basic idea that people are not dumb automatons following arbitrary decision rules – no, they act purposefully and respond to incentives. This is a fundamental concept in economics that distinguishes it from the natural sciences, and rejecting it entirely can hardly lead to a good theory of human behavior. And neither will mechanistic application of methods from the natural sciences – just because a particular methodology is successful in explaining natural laws doesn’t mean it will be successful in the social sciences as well.

        • Phil Henshaw says:

          Sorry for some misinterpretation. I don’t mean to imply that you’re saying anything but what you seem to state. To me, if you say a system follows an equation, that’s shorthand for either a) referring to a physical process that the equation emulates, or b) claiming that the system is somehow “doing” the equation as its process. You steer away from discussing the instrumental processes involved, and so seem to avoid choice ‘a’. That causes me to hypothesize that you might really mean ‘b’.

          What does it mean when you, like theoretical scientists generally say, “the subject follows the equation”?

          Nothing in the universe but a cognitive process physically follows an equation. It might seem like a small point, but it exposes one of the most productive ways of distinguishing between the information world, with its consistent properties, and the natural-process world, with its distinctly different set of properties.

          Another way of saying that: physical processes invariably require mechanisms not present in information models.

        • Giampiero Campa says:

          Hi John (and Phil),

          The core problem is directly with the idea of ‘utility maximization’

          If you are trying to accurately model the behavior of very few agents, then I would agree. But if you are trying to build a model of how the economy works, then I am not sure that this is “the core problem”.

          There are 4 things I’d like to say, of which the last is by far the most important.

          First, the first chapter of this very well written and fun-to-read book presents convincing arguments about why people, while they most definitely do have altruistic and non-financial motivations, actually do behave rationally very often, albeit almost never at a conscious level.

          Second, the “utility maximization” framework does not need to be as inflexible as many people seem to imply: your utility function might very well depend (nonlinearly) on your state of mind, the day of the week, the amount of commercials you see on TV, and what not. Assuming that you are at least trying to maximize this function is just a way of saying that your behavior is not completely random.

          Third, even if we get rid of this framework, it’s unclear to me whether you are proposing something better to use instead of it. Are you just advocating the use of more complex, or carefully selected, utility functions? Or using agent-based modeling from the ground up?

          Fourth, at the macro level, the utility maximization framework does not really matter that much anyway. In standard macroeconomics, the fact that consumers try to maximize their utility affects only how the aggregate consumption C (and the aggregate demand for money L) depends on aggregate income (and the interest rate). Only mild assumptions are made about the slopes of those curves, and nobody really tries to derive them from first principles anyway; one always has to identify them from actual data (real economists might want to weigh in on this point).

          It is true that these models might be too simple to offer detailed forecasts, but still I have found them to be really instructive in terms of qualitatively explaining how things work at a general level. They answer questions like “what happens to the interest rate if the government spends more money”, or “what happens to unemployment if the Fed starts printing money”. (By the way, the fact that most people who vote don’t have a clue about these things is, I think, a huge problem.)
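          For instance, a toy linear IS-LM model answers the first question qualitatively. Here’s a sketch in Python, with every coefficient invented for illustration (so only the signs of the answers mean anything):

          import numpy as np

          # Toy linear IS-LM model (all coefficients invented):
          #   IS (goods market):  Y = C0 + c*(Y - T) + I0 - b*r + G
          #   LM (money market):  M/P = k*Y - h*r
          C0, c, T, I0, b = 50.0, 0.6, 100.0, 100.0, 20.0
          k, h, M_over_P = 0.2, 10.0, 80.0

          def equilibrium(G):
              """Solve the two linear equations for income Y and interest rate r."""
              A = np.array([[1 - c, b],
                            [k, -h]])
              rhs = np.array([C0 - c * T + I0 + G, M_over_P])
              Y, r = np.linalg.solve(A, rhs)
              return Y, r

          Y0, r0 = equilibrium(G=100.0)
          Y1, r1 = equilibrium(G=120.0)   # the government spends more
          print(f"dY = {Y1 - Y0:+.1f}, dr = {r1 - r0:+.2f}")   # both rise: partial crowding out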

          My point is that there are many other assumptions commonly made in these models (e.g. the extent of wage flexibility, totally clearing markets, no explicit dynamics, and many more) that have far-reaching implications and look more wrong than “utility maximization”, so I am not at all sure that the latter is the culprit.

          John also said:

          But when we start using ideas about economics or finance to make business or policy decisions that involve billions of dollars and affect hundreds of millions of people, we can’t afford oversimplifications.

          Now, finance is a totally different matter (discipline) altogether, but I am not sure whether “policy decisions” are actually made on the basis of mathematical models. Unfortunately, I fear that too many other mundane issues come into play when selecting a policy. This is aggravated by the fact that, as I said before, voters don’t have a clue.

          Perhaps we need a millennium prize to devise a mathematical model that can predict how the world economy will evolve in the next 10 years …

        • Robert Smart says:

          How about we start with a millennium prize for understanding the past? Conflicting claims are made about the causes of the financial crisis. Yet even though there are mountains of relevant data, no one knows how to look at the data to understand it.

        • Phil Henshaw says:

          Well, Robert, I’d love to see the real cause of the collapse stand up! I had a great meeting today with a researcher at the NY Fed, Zoltan Pozsar, the one who wrote their study on the emergent unregulated “shadow banking” system.

          One of the curious possibilities we talked about as a real systemic cause is the underlying “peak everything” signal evident in the six-year global escalation of commodity prices preceding the collapse. The curious part is the systemic behavior of all those different commodities displaying ~25%/yr price increases together. It’s as if the whole system of commodity resources was displaying rigidity to expansion at the same time. That could happen if the ability of different resources to substitute for each other was exhausted. That new behavior was a real departure from the past, when commodity supplies and prices had stayed in line with costs for decades. http://www.synapse9.com/issues/92-08Commodities08.jpg
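          To fix the scale of that escalation: compounding 25%/yr over those six years multiplies prices by 1.25^6 \approx 3.8, nearly a quadrupling before the collapse.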

          A mysteriously appearing “law of limitless price” for commodities until the collapse is a quite serious problem, if it meant the system had found conflicting needs for increasing resource use that it could not meet together. It’s not so much the 25%/yr price increase, but that the supply system looks to have become rigid and inflexible, so that only a shedding of demand could lower prices again, as it then did. When a system that needs to be elastic becomes rigid, it’s like the way the surface of a balloon becomes rigidly taut and inelastic just before continued inflating pressure causes it to shatter to bits.

        • John F says:

          Phil,
          I have assumed that the real cause of the collapse is the ubiquitous rent-seeking of the rich. Almost all of them have no idea what to do productively with all their money, so they just buy stuff to own it, and that drives up the price. That includes investments in harder stuff like commodities and, of course, real estate.

        • Phil Henshaw says:

          John, that’s a succinct way to say it, but for one major omission. It wouldn’t change the outcome a bit to make “the rich” each walk the plank and disappear into the sea as their greed reached a certain level; the non-rich would just assume ownership of their assets and do precisely the same thing with them. The simple problem is that ALL our financial plans and institutions are built on a model of investing in things we can all trust will give us a good return. It’s not just everyone’s search for things to invest in that they can be confident will produce a good return. What then seals our and their fates is the real hidden intent of doing that: to have reliable compounding of returns.

          That’s what everyone tries to stabilize, but which for natural causes cannot be stabilized, pointing to the great rift between the information world in our heads and the physical world of our bodies. We look for investments to believe in and trust, for the purpose of adding our winnings to our bets on them. Since they are physical things and not formulas, that assures they will become untrustworthy and collapse, taking the environment that created them down too!

          The catch in explaining it seems to be that if the listener doesn’t have some suspicion that our information reality and our physical reality might be different, then they just hear it as someone talking about something else they don’t recognize… and ignore it.

  20. Curtis Faith says:

    Ivan Sutoris wrote:

    So you are saying that richer people can get richer by investing? But that’s always been the case.

    I think it is an inescapable conclusion based on just the simple math that exponential growth increases faster than linear growth. Capitalism concentrates wealth. There is no other conclusion possible.
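    Here’s that simple math as a toy Python sketch (every number invented; only the exponential-versus-linear contrast matters):

    # Compounding capital vs. a linearly growing wage (illustrative numbers only).
    capital, wage = 100.0, 100.0
    r, annual_raise = 0.05, 2.0     # 5%/yr return vs. a flat raise of 2 per year
    for year in range(50):
        capital *= 1 + r            # exponential growth
        wage += annual_raise        # linear growth
    print(f"after 50 years: capital = {capital:.0f}, wage = {wage:.0f}")
    # after 50 years: capital = 1147, wage = 200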

    Individuals—trust-fund kids and the like, for example—can act to squander wealth but the wealth they give up doesn’t flow to someone who is not participating as a capitalist. It doesn’t flow towards labor. It flows towards other capitalists that provide the goods and services acquired during the squandering.

    Also, wages increase as well (at least in nominal terms). If you were right, we would observe that over time, increasingly larger share of total income is attributable to capital, with labor share of income steadily dropping. But labor share is relatively constant over time.

    What do you mean exactly by “capital”? Do you mean capital in the old sense of plant and equipment, or do you mean capital in the sense of ownership of the enterprise?

    What do you mean by the “labor share” being relatively constant over time? Are you referring to the percentage of GDP attributable to labor? Are you referring to the percentage of goods and service expenses for an average country attributable to labor? Something else? What countries or regions? Any hard data so we can look further and reply with a more informed answer?

    My experience and research show that there has been a very substantial drop in incomes over time on an inflation-adjusted basis in the U.S. and the U.K. at least over the last 30 years or so.

    Does the data you are considering include the effects of wealth created outside the typical corporation? For example, when private-equity hedge funds buy distressed companies, fire thousands, restructure the debt leaving the company viable but barely so, and pocket hundreds of millions or billions in the process? An ex-girlfriend of mine was chief counsel at one of the better-known hedge funds that did just this sort of thing with a staff of about 25 people, so I know a lot about what really goes on in these deals. I also know a lot of traders who make tens or hundreds of millions each year with a staff of 20 to 40. So a great deal of the wealth created by the richest individuals has minimal associated labor costs.

    • John F says:

      It is important to keep in mind that jobs that just move money around – investors, bankers, insurance agents – cannot be counted as labor. $100M hush money aka compensation to a Wall Street executive is capital income.

      I don’t know what Ivan had in mind, but “labor share” by any measure has been dropping in all developed countries for decades. An easy measure is the ratio of (reported!) total incomes below a threshold to total incomes above it. Regardless of the threshold, that ratio has dropped like a rock, especially in the last decade.
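      That measure is simple to compute from any table of (reported) incomes; a minimal sketch with made-up numbers:

      import numpy as np

      def share_ratio(incomes, threshold):
          """Ratio of total income below the threshold to total income above it."""
          incomes = np.asarray(incomes, dtype=float)
          return incomes[incomes <= threshold].sum() / incomes[incomes > threshold].sum()

      # Made-up sample: the ratio falls as top incomes pull away.
      print(share_ratio([30, 40, 50, 60, 500], threshold=100))    # 0.36
      print(share_ratio([30, 40, 50, 60, 1000], threshold=100))   # 0.18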

    • Ivan Sutoris says:

      Curtis Faith wrote:

      Individuals—trust-fund kids and the like, for example—can act to squander wealth but the wealth they give up doesn’t flow to someone who is not participating as a capitalist. It doesn’t flow towards labor. It flows towards other capitalists that provide the goods and services acquired during the squandering.

      If a rich capitalist buys a private jet, part of what he pays goes to workers who built the jet, so I don’t see your point. Besides, “capitalists” is not some isolated group of people. Everyone can participate as a capitalist, e.g. by investing in a mutual fund, or a pension fund. I really don’t want to start some grand debate on the merits of capitalism, but I simply don’t find your rhetoric persuasive.

      What do you mean by the “labor share” being relatively constant over time? Are you referring to the percentage of GDP attributable to labor? Are you referring to the percentage of goods and service expenses for an average country attributable to labor? Something else? What countries or regions? Any hard data so we can look further and reply with a more informed answer?

      Yes, I mean the part of GDP attributable to labor. This paper from 2004 describes how it can be computed and shows a chart with post-WW2 data for the US (I’m from Europe myself, but US data are often the most easily available). Although it may have decreased slightly in the recent period, there is certainly no sharp downward trend.

      My experience and research show that there has been a very substantial drop in incomes over time on an inflation-adjusted basis in the U.S. and the U.K. at least over the last 30 years or so.

      I would be surprised if this were true. A quick search turns up, for example, this blog post, which paints a different picture (wages are not the only part of workers’ compensation). If real incomes had really dropped significantly, such a drop should also translate into a decrease in living standards. Are living standards of people in the US or UK significantly lower than 30 years ago? I don’t think so.

      • John F says:

        The living standards of the poor and middle class *relative* to the rich are indeed worse, and worsening, and have been for a long time.
        http://www.prospect.org/cs/articles?article=the_rich_the_right_and_the_facts

        Let’s suppose a poor family (in the US) is living in rented squalid substandard housing, for which the owner is compensated $800 per month via Section 8.
        http://en.wikipedia.org/wiki/Section_8_(housing)
        Not to get all uncivil, but should the *owner’s* compensation really count as compensation for the poor family? Should their insurance payments, which go directly to insurance companies (which then invest …), count for the family? Should payments to a private prison to keep the family’s son in prison count as income to the prisoner?

        • Ivan Sutoris says:

          John F wrote:

          The living standards of the poor and middle class *relative* to the rich are indeed worse, and worsening, and have been for a long time.

          Saying that incomes are rising, but incomes of the rich are rising faster, is clearly different from saying that incomes are falling. As I understand you, you seem to think that we should care only about the income of the poor relative to the rich, and that inequality is bad. OK, that’s your opinion and your value judgement, but it’s certainly not something obvious.

          I personally don’t see inequality as something automatically bad – some amount of inequality is natural (unless you want to impose some drastic form of socialism). The question then becomes a matter of how much inequality there should be, and you cannot answer that without actually discussing the causes of inequality and their implications.

        • streamfortyseven says:

          Ivan, incomes (wages and salaries) of the middle and working classes have been falling since 1980:

          Figuring in 1980 dollars, it’s well documented (http://hdr.undp.org/en/reports/global/hdr2010/papers/HDRP_2010_36.pdf) that the labor share of income has actually dropped from 1980 onwards to the present.

        • Phil Henshaw says:

          But Ivan, what about the share of resources? I’d agree it’s not the relative earnings that matter. With demand exceeding supply globally for basic resources, though, it’s those hanging onto the lower rungs of the economic ladder who get squeezed out of their former levels of prosperity, even large sectors of formerly middle-class Americans.

          In an economy where income for all sectors once grew at relatively stable if different rates, that pattern becomes a thing of the past. Rapid increases in productivity are then needed to keep up with escalating prices if there is inadequate supply to meet total demand. So one way to measure the approach of natural limits to growth is whether that reversal of fortunes has occurred, producing a society of winners and losers rather than one where all are relative winners.

          I think that is strongly indicated by how commodity prices have persistently outpaced growth in the past ten years. It has given large sectors of the world population materially declining prosperity, including both European and American communities, as China and India take growing shares of soon-to-be-declining oil production, for example. As limits approach, the rich and productive just leave decreasing amounts of resources available for others.

      • DavidTweed says:

        Ivan wrote

        If real incomes really dropped significantly, such a drop should translate also to decrease in living standards. Are living standards of people in US or UK significantly lower than 30 years ago? I don’t think so.

        At least in the UK (and AFAIK in the US) things are complicated by the fact that there have also been drops in the prices of mass-produced consumer goods due to outsourcing to India, China, etc., along with an influx of cheap labour from newly joined EU countries (I don’t know about the US here). So some things make up a significantly smaller proportion of income than 30 years ago. On the other hand, there have been increases in the costs of other things, e.g. housing, education, medicine/medical care, professional services, etc. These make up much greater proportions of income than 30 years ago. All this is made even more difficult to analyse by an increase in the debt load carried, and a decline in the savings rate, for average individuals over that time (passing over the bizarre econometric definition whereby paying off debt counts as “saving”).

        So I don’t think you can easily point to “no apparent decline in the average standard of living” as evidence against a decline in real-terms wages without a much, much more comprehensive analysis.

        • DavidTweed says:

          Looking around for some stats reminded me of a final confounding factor: there’s been a significant rise in the number of two-income households. So the average number of “person hours” worked to bring in the household income has increased as well.

          Again, I think to come to a conclusion about wage rates you’ve got to look at more specific data over time than just “apparent standard of living”.

      • Curtis Faith says:

        If a rich capitalist buys a private jet, part of what he pays goes to workers who built the jet, so I don’t see your point. Besides, “capitalists” is not some isolated group of people. Everyone can participate as a capitalist, e.g. by investing in a mutual fund, or a pension fund. I really don’t want to start some grand debate on the merits of capitalism, but I simply don’t find your rhetoric persuasive.

        One must distinguish between wealth and income.

        Certainly the money spent or even squandered by the rich contributes to the larger economy, and indeed some of that money goes to pay the salaries of employees. It contributes to their income but not their wealth, because they don’t have the same level of wealth to start with and likely don’t have the investments in their company that someone richer would be able to afford. I live in Savannah, GA, where Gulfstream Aviation is headquartered, and I can tell you that while they are good employers they are far, far from being employee-owned. Most of their employees have average middle-class incomes and no participation in the profits or capital gains from stock appreciation.

        The owners of the companies benefit from the purchases and incomes disproportionately.

        The link from the Cleveland Fed was interesting but misses the mark for two separate reasons:

        1) It measures income versus profits. Many large U.S. companies have been outsourcing their work to other countries and diverting their profits to other regions to avoid U.S. taxes. This means that increases in wealth come not through profits and dividends but through appreciation of the underlying stocks. The trend away from dividends has been increasing. Check out this chart at Wikipedia:

        http://en.wikipedia.org/wiki/Dividend_payout_ratio#Historic_Data

        So the Cleveland Fed does not consider capital gains as “income” attributable to capital. The rich have arranged it so that most of their income takes the form of capital gains, and so that the taxes paid on those gains are substantially less than on normal income for U.S. citizens or residents.

        I also recommend the data here:

        http://www.equalitytrust.org.uk/why/evidence/methods

        2) The increase in money spent on benefits does not improve the lifestyle of the employees, but it does increase the wealth of those who are doctors or investors in healthcare companies. If you are going to count the benefits as income, then you need to count the capital gains attributable to the healthcare industry as income to capital too. You can’t have it both ways.

        Most employees only see that their incomes are flat. So counting this increase as income for the employee gives, at best, a partial picture of what is actually going on.

  21. John Sidles says:

    While visiting my son in the Outer Islands of Micronesia, I had the very interesting experience of living in a culture whose economic toolset did not include “money”. I came away with considerable sympathy for a point that essayist David Brin has made:

    The real conundrum in modern markets is the continued reliance of investors and policymakers on two false mantras: the first is that markets are efficient; and the second is that investors are rational.

    In particular, the modern advent of computerized trading on microsecond timescales has made a mockery of the once-widespread moral understanding that a core purpose of economic activity is to sustain Jeffersonian virtues in ordinary citizens.

    In the Outer Islands, economic activities were restrained by a network of social customs whose explicit purpose was to foster and sustain social virtues … these economic activities were accomplished slowly and in full public view … once you got used to it, it was terrific!

  22. Giampiero Campa says:

    This article on the systemic risk of banking ecosystems just came in; I haven’t read it yet, but it looks like a nice bridge towards “quantitative ecology”.

  23. trurl17 says:

    Well, I didn’t see much discussion here about how maybe physicists caused the problem in the first place—too heavy a reliance on mathematical models for hedging and investing—so perhaps we should beware of physicists bearing gifts!

    • John F says:

      trurl,
      we have had a couple of discussions here of the effects of quants, e.g. in

      This Week’s Finds (Week 304)

      But it could be interesting to delve into a meta-analysis of the effects of making overly complicated models.

      • DavidTweed says:

        To be fair, it’s unclear whether the “quants” have caused their meltdown yet, although they may very, very well do so in the near future (with HFT). The areas of the financial system that caused huge problems weren’t, to my understanding, those involving the physics-based mathematical models. They involved incredibly arcane and complicated financial instruments, but that arose more from the formalisation of what economists and clients wanted.

  24. Todd Trimble says:

    John, in case you or some of your students (current and prior) are interested, Bob Walters recently posted something on his blog, here, about the paper “On Partita Doppia”, which touches upon comments made in this thread; it has some references that could prove useful.
