Compositional Game Theory and Climate Microeconomics

guest post by Jules Hedges

Hi all

This is a post I’ve been putting off for a long time until I was sure I was ready. I am the “lead developer” of a thing called compositional game theory (CGT). It’s an approach to game theory based on category theory, but we are now at the point where you don’t need to know that anymore: it’s an approach to game theory that has certain specific benefits over the traditional approach.

I would like to start a conversation about “using my powers for good”. I am hoping particularly that it is possible to model microeconomic aspects of climate science. This seems to be a very small field and I’m not really hopeful that anyone on Azimuth will have the right background, but it’s worth a shot. The kind of thing I’m imagining (possibly completely wrongly) is to create models that will suggest when a technically-feasible solution is not socially feasible. Social dilemmas and tragedies of the commons are at the heart of the climate crisis, and modelling instances of them is in scope.

I have a software tool (https://github.com/jules-hedges/open-games-hs) that is designed to be an assistant for game-theoretic modelling. This I can’t emphasise enough: A human with expertise in game-theoretic modelling is the most important thing, CGT is merely an assistant. (Right now the tool also probably can’t be used without me being in the loop, but that’s not an inherent thing.)

To give an idea of what sort of things CGT can do, my two current research collaborations are: (1) a social science project modelling examples of institutional governance, and (2) a cryptoeconomics project modelling a bribery attack against a protocol. On a technical level the best fit is Bayesian games: finite-horizon games with common-knowledge priors and private information, in which agents do Bayesian updating.
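For readers less familiar with the term: in a Bayesian game each player has a private "type" drawn from a prior that is common knowledge, and players revise their beliefs by Bayes' rule as play unfolds. A minimal sketch of that updating step (all numbers, type names, and strategies below are invented for illustration):

```python
# Bayesian updating over an opponent's private type.
# Illustrative numbers only: a two-type opponent with a common-knowledge prior.

# Common-knowledge prior over the opponent's type.
prior = {"tough": 0.3, "weak": 0.7}

# Type-conditional behavioural strategies: P(action | type).
strategy = {
    "tough": {"fight": 0.8, "yield": 0.2},
    "weak":  {"fight": 0.1, "yield": 0.9},
}

def posterior(observed_action):
    """Bayes' rule: P(type | action) is proportional to P(action | type) * P(type)."""
    unnormalised = {t: strategy[t][observed_action] * p for t, p in prior.items()}
    total = sum(unnormalised.values())
    return {t: u / total for t, u in unnormalised.items()}

print(posterior("fight"))  # observing "fight" shifts belief toward "tough"
```

Here observing "fight" raises the probability of the "tough" type from the prior 0.3 to roughly 0.77; iterating this update is the engine behind equilibrium notions for Bayesian games.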

A lot of the (believed) practical benefits of CGT come from the fact that the model is code (in a high-level language designed specifically for expressing games), and thus the model can be structured according to existing wisdom for structuring code. Really stress-testing this claim is an ongoing research project. My tool does equilibrium checking for all games (the technical term is "model checker"), and we've had some success doing other things by looping an equilibrium check over a parameter space. It makes no attempt to be an equilibrium solver; that is left to the human.
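To make the checker-plus-parameter-loop workflow concrete, here is a toy illustration in Python (not the open-games-hs tool, whose interface is different): an equilibrium checker merely verifies a proposed strategy profile against all unilateral deviations, and looping it over a parameter space shows where an outcome stops being an equilibrium.

```python
# Toy equilibrium *checker* (not solver), looped over a parameter space.
# The game and payoff numbers are invented for this sketch.

def is_nash(payoffs, profile):
    """Check whether a pure strategy profile is a Nash equilibrium of a
    two-player game given as payoffs[(a1, a2)] = (u1, u2)."""
    a1, a2 = profile
    u1, u2 = payoffs[(a1, a2)]
    # No unilateral deviation may strictly improve either player's payoff.
    if any(payoffs[(d, a2)][0] > u1 for d in ("C", "D")):
        return False
    if any(payoffs[(a1, d)][1] > u2 for d in ("C", "D")):
        return False
    return True

def parametrised_game(temptation):
    """Prisoner's-dilemma-shaped payoffs; a genuine dilemma only when
    the temptation payoff exceeds the mutual-cooperation payoff of 3."""
    return {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, temptation),
        ("D", "C"): (temptation, 0),
        ("D", "D"): (1, 1),
    }

# Loop the check over a parameter space to see where (C, C) survives.
for t in [2.0, 2.5, 3.0, 3.5, 4.0]:
    print(t, is_nash(parametrised_game(t), ("C", "C")))
```

Checking is cheap (a few comparisons per profile) even when solving is hard, which is why leaving the solving to the human and the checking to the machine is a sensible division of labour.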

This is not me trying to push my pet project (I do that elsewhere) but me trying to find a niche where I can do some genuine good, even if small. If you are a microeconomist (or a social scientist who uses applied game theory) and share the goals of Azimuth, I would like to hear from you, even if it’s just for some discussion.

10 Responses to Compositional Game Theory and Climate Microeconomics

  1. juleshedges says:

    This is a repost of my forum post on the Azimuth forum, which can be found at https://forum.azimuthproject.org/discussion/2540/compositional-game-theory-and-climate-microeconomics

    I’ll monitor the comments both here and there, and you can also email me by going to my website (linked from my name at the top of this post) and pressing “contact”.

    • nad says:

      Apart from my problems with category theory per se, I think the means of automated social modelling are, at least up to now, rather limited, even for small game-theory applications. You probably know that in experiments on the public goods game the participants show "irrational" behaviour (which I, by the way, would rather call "rational" behaviour). That is, Wikipedia writes:

      ..if the experiment were a purely analytical exercise in game theory it would resolve to zero contributions because any rational agent does best contributing zero, regardless of whatever anyone else does. This only holds if the multiplication factor is less than the number of players; otherwise the Nash equilibrium is for all players to contribute all of their tokens to the public pool.[1] In fact, the Nash equilibrium is rarely seen in experiments; people do tend to add something into the pot. The actual levels of contribution found vary widely (anywhere from 0% to 100% of the initial endowment can be chipped in).[2] The average contribution typically depends on the multiplication factor.[3]
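The threshold in the quoted passage can be checked directly with a few lines (all numbers below are invented for illustration): each token contributed to a pot that is multiplied by m and split among n players returns m/n of a token to the contributor, so contributing is individually profitable exactly when m exceeds n.

```python
# Checking the quoted claim about the public goods game: contributing
# nothing is the rational best response iff the multiplication factor m
# is less than the number of players n. Numbers are illustrative only.

def payoff(my_contribution, others_total, endowment, m, n):
    """What I keep plus my equal share of the multiplied pot."""
    pot = my_contribution + others_total
    return (endowment - my_contribution) + m * pot / n

n, endowment, others = 4, 10, 15  # arbitrary illustrative values

for m in (2, 6):  # one case with m < n, one with m > n
    keep_all = payoff(0, others, endowment, m, n)
    give_all = payoff(endowment, others, endowment, m, n)
    best = "contribute everything" if give_all > keep_all else "contribute nothing"
    print(f"m={m}, n={n}: best response is to {best}")
```

With m=2 and n=4 each contributed token returns only half a token, so zero contribution dominates; with m=6 the return is one and a half tokens, so full contribution dominates, matching the quoted Nash-equilibrium claim.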

      So I wonder a bit how you model "social feasibility" and what's really meant by that. Is it defined by whether things can be pushed through without riots? There were no riots at the Tesla factory (see comment below), only more or less loud protests at the hearing and no very big demonstrations. So my question: does the "small protest" make the Tesla factory socially feasible for you?

  2. It’s not clear to me whether this is a “new approach to game theory” or a new tool for analyzing standard game theory.

    Can your tool be used to analyze games formulated (as all games should be) as posterior distributions over joint strategies of the players rather than set-valued solution concepts? (E.g., see https://www.nowpublishers.com/article/Details/RBE-0015)

    • juleshedges says:

      The answer to the first question is both. It’s a new mathematical approach (the first paper is https://arxiv.org/abs/1603.04641), and also separately a software tool implementing the theory (which matters because the theory turns out in practice to be very impractical to use without software support).

      I’m just seeing distribution-valued solution concepts for the first time, so the answer is no “out of the box”, but I’ll make a wild guess that it’s possible after doing a bit of theory. I’m going to show this paper to some people.

      • davidwlocke says:

        Under the technology adoption lifecycle, “out of the box” happens only after a lot of task sublimation has happened. We are talking about the late Main Street phase, which is on the other side of the mean, and we are talking about a discontinuous innovation here. Much has to happen before “out of the box” will be the reality. Rushing the process leaves money on the table.

        Right now, AI people want “out of the box.” But, that ignores the money involved.

        We will get there. Trust the process.

  3. John Baez says:

    I would like to start a conversation about “using my powers for good”.

    Great! By the way, I hope you point out this discussion to people on Twitter. If I were still on Twitter I’d advertise it there. It’ll take some work finding the right people for this project, but they are out there somewhere.

    The kind of thing I’m imagining (possibly completely wrongly) is to create models that will suggest when a technically-feasible solution is not socially feasible.

    How about suggesting when it is socially feasible?

    Could you please daydream a bit about what you’re imagining?

    I’ll try it just to get the ball rolling. Maybe there are economists working on models of new regulations or carbon taxes or tax credits for new technologies or … something like that… and they’re modeling them using “Bayesian games, which are finite-horizon, have common knowledge priors, and private knowledge with agents who do Bayesian updating”. The simulations they’re doing are getting awkward, and they’d be helped by your software (along with training from you). Something like that?

    • juleshedges says:

      Yes, you have the right idea. The opposite of showing that something is socially infeasible is designing mechanisms that make it feasible.

      The kind of thing I could daydream about is building models that convince people that climate action is not possible without certain regulation, for example.

  4. davidwlocke says:

    The petro companies have already gamed the system. They have been working on this for 40+ years. Having us pay them to decarbonise the atmosphere means that they have an income for eternity and don’t have to die. They are already implementing this.

    • nad says:

      By the way, it is also H2O that gets “poisoned”. And this happens not only in the case of an accident, as for example when one builds a heavily chemical factory in a water preservation area, but also by spilling its waste water into a public waste-water network whose treatment plant lacks the means to filter out all the wicked chemicals spilled into it, so the stuff ends up in nature. And Tesla of course knows about this while babbling about “sustainability”.

  5. Hi Jules,

    I think of a game in terms of an ‘object-oriented Petri net’: a ‘place’ being a possibility, a ‘token’ put inside a place being a fact, and a ‘transition’ being an event. I implemented that years ago using classes in Smalltalk. (I wrote this up in a paper for Summer Simulation and can send you that paper if you’re interested; just let me know.) I put instances of each of those — possibility, event, fact — on a multi-threaded queue organized by time, with semaphores and other gadgets as supplied by the parallel-processing classes (used in the Model-View-Controller interface) in the Smalltalk ‘operating system.’

    To do that, I bought a bunch of classes for drawing lines, arrows, rectangles, etc., and then ‘inherited’ that code into my own methods, so that when I drew any of the three and connected them with arrows, the drawing would automatically write appropriate methods. When the ‘time’ came, those methods would place whatever I had drawn (e.g. a possibility or an event) onto the time queue. As the simulation proceeded, these methods would look at the incoming and outgoing arrows I had drawn to pull tokens and other objects, like Petri net transitions, onto the time queue.

    But when Petri net transitions were pulled onto the queue, I immediately found that they would pull tokens into an incoming place and onto the time queue as fast as the system could process them, and the simulation crashed. So for this kind of transition I had to draw in a ‘self loop’: just an arrow from a transition (an event) to a place (a possibility), and an arrow from that place back into the transition. It was like something I think Einstein once said: ‘Time exists so everything doesn’t happen at once.’ As a result, there could be only one method at a time for that event on the time queue; no copy of that method could be placed on the queue until the ‘fact’ existed that the transition (event) had completed. So only one copy of the method could be running at a time, not the unlimited number that were originally pulled onto the queue, which had caused the system to crash.
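The self-loop fix described above can be sketched in a few lines (a minimal Python toy with invented names, standing in for the original Smalltalk): a one-token "lock" place on a self-loop means the transition cannot be scheduled again until its previous firing has completed and returned the token.

```python
# A toy Petri net transition with a self-loop 'lock' place, illustrating
# the fix described above: the single token in the lock place stops the
# transition from being queued again before its last firing completes.
# (Class names and structure are invented for this sketch.)

class Place:
    def __init__(self, tokens=0):
        self.tokens = tokens

class Transition:
    def __init__(self, inputs, outputs):
        self.inputs = inputs    # places consumed from (the lock is one of them)
        self.outputs = outputs  # places produced to (the lock is returned here)

    def enabled(self):
        return all(p.tokens >= 1 for p in self.inputs)

    def begin_fire(self):       # the event is pulled onto the time queue
        for p in self.inputs:
            p.tokens -= 1

    def end_fire(self):         # the 'fact' that the event completed
        for p in self.outputs:
            p.tokens += 1

work = Place(tokens=5)
done = Place(tokens=0)
lock = Place(tokens=1)          # the self-loop place

t = Transition(inputs=[work, lock], outputs=[done, lock])

t.begin_fire()
print(t.enabled())   # False: no second copy can be queued while one is in flight
t.end_fire()
print(t.enabled())   # True: the lock token is back, so the next firing may be queued
```

This is exactly a mutual-exclusion token: between `begin_fire` and `end_fire` the transition is disabled, so the unbounded re-queuing that crashed the simulation cannot occur.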

    The purpose of this simulation was to model a multi-million dollar automated manufacturing system. We wanted to see it simulated before we spent the money.

    The context for this industrial research is here:

    https://www.tandfonline.com/doi/abs/10.1080/08956308.1996.11671042

    (You can see how it relates to the equation for the probability learning game that I posted in a comment on John’s post about the Fisher theorem.)
