Agent-Based Models (Part 8)

16 April, 2024

Last time I presented a class of agent-based models where agents hop around a graph in a stochastic way. Each vertex of the graph is some ‘state’ agents can be in, and each edge is called a ‘transition’. In these models, the probability per time of an agent making a transition and leaving some state can depend on when it arrived at that state. It can also depend on which agents are in other states that are ‘linked’ to that edge—and when those agents arrived.

I’ve been trying to generalize this framework to handle processes where agents are born or die—or perhaps more generally, processes where some number of agents turn into some other number of agents. There’s already a framework that does something sort of like this. It’s called ‘stochastic Petri nets’, and we explained this framework here:

• John Baez and Jacob Biamonte, Quantum Techniques for Stochastic Mechanics, World Scientific Press, Singapore, 2018. (See also blog articles here.)

However, in their simplest form, stochastic Petri nets are designed for agents whose only distinguishing information is which state they’re in. They don’t have ‘names’—that is, individual identities. Thus, even calling them ‘agents’ is a bit of a stretch: usually they’re called ‘tokens’, since they’re drawn as black dots.

We could try to enhance the Petri net framework to give tokens names and other identifying features. There are various imaginable ways to do this, such as ‘colored Petri nets’. But so far this approach seems rather ill-adapted for processes where agents have identities—perhaps because I’m not thinking about the problem the right way.

So, at some point I decided to try something less ambitious. It turns out that in applications to epidemiology, general processes where n agents come in and m go out are not often required. So I’ve been trying to minimally enhance the framework from last time to include ‘birth’ and ‘death’ processes as well as transitions from state to state.

As I thought about this, some questions kept plaguing me:

When an agent gets created, or ‘born’, which one actually gets born? In other words, what is its name? Its precise name may not matter, but if we want to keep track of it after it’s born, we need to give it a name. And this name had better be ‘fresh’: not already the name of some other agent.

There’s also the question of what happens when an agent gets destroyed, or ‘dies’. This feels less difficult: there just stops being an agent with the given name. But probably we want to prevent a new agent from having the same name as that dead agent.

Both these questions seem fairly simple, but so far they’re making it hard for me to invent a truly elegant framework. At first I tried to separately describe transitions between states, births, and deaths. But this seemed to triplicate the amount of work I needed to do.

Then I tried models that have

• a finite set S of states,

• a finite set T of transitions,

• maps u, d \colon T \to S + \{\textrm{undefined}\} mapping each transition to its upstream and downstream states.

Here S + \{\textrm{undefined}\} is the disjoint union of S and a singleton whose one element is called undefined. Maps from T to S + \{\textrm{undefined}\} are a standard way to talk about partially defined maps from T to S. We get four cases:

1) If the downstream of a transition is defined (i.e. in S) but its upstream is undefined, we call this transition a birth transition.

2) If the upstream of a transition is defined but its downstream is undefined, we call this transition a death transition.

3) If the upstream and downstream of a transition are both defined, we call this transition a transformation. In practice most transitions will be of this sort.

4) We never need transitions whose upstream and downstream are undefined: these would describe agents that pop into existence and instantly disappear.
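
To make this concrete, here is a minimal sketch in Python of the data just listed, with None playing the role of undefined. All the names here are my own choices, just for illustration:

from dataclasses import dataclass
from typing import Optional

@dataclass
class BirthDeathModel:
    states: set[str]                        # the finite set S
    transitions: set[str]                   # the finite set T
    upstream: dict[str, Optional[str]]      # u: T -> S + {undefined}
    downstream: dict[str, Optional[str]]    # d: T -> S + {undefined}

def kind(m: BirthDeathModel, t: str) -> str:
    """Classify a transition according to cases 1)-4) above."""
    up, down = m.upstream[t], m.downstream[t]
    if up is None and down is not None:
        return "birth"
    if up is not None and down is None:
        return "death"
    if up is not None and down is not None:
        return "transformation"
    raise ValueError("upstream and downstream both undefined: not allowed")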

This is sort of nice, except for the fourth case. Unfortunately when I go ahead and try to actually describe a model based on this paradigm, I seem still to wind up needing to handle births, deaths and transformations quite differently.

For example, last time my models had a fixed set A of agents. To handle births and deaths, I wanted to make this set time-dependent. But I need to separately say how this works for transformations, birth transitions and death transitions. For transformations we don’t change A. For birth transitions we add a new element to A. And for death transitions we remove an element from A, and maybe record its name on a ledger or drive a stake through its heart to make sure it can never be born again!

So far this is tolerable, but things get worse. Our model also needs ‘links’ from states to transitions, to say how agents present in those states affect the timing of those transitions. These are used in the ‘jump function’, a stochastic function that answers this question:

If at time t agent a arrives at the state upstream to some transition e, and the agents at states linked to the transition e form some set S_e, when will agent a make the transition e given that it doesn’t do anything else first?

This works fine for transformations, meaning transitions e that have both an upstream and downstream state. It works just a tiny bit differently for death transitions. But birth transitions are quite different: since newly born agents don’t have a previous upstream state u(e), they don’t have a time at which they arrived at that state.

Perhaps this is just how modeling works: perhaps the search for a staggeringly beautiful framework is a distraction. But another approach just occurred to me. Today I just want to briefly state it. I don’t want to write a full blog article on it yet, since I’ve already spent a lot of time writing two articles that I deleted when I became disgusted with them—and I might become disgusted with this approach too!

Briefly, this approach is exactly the approach I described last time. There are fundamentally no births and no deaths: all transitions have an upstream and a downstream state. There is a fixed set A of agents that does not change with time. We handle births and deaths using a dirty trick.

Namely, births are transitions out of an ‘unborn’ state. Agents hang around in this state until they are born.

Similarly, deaths are transitions to a ‘dead’ state.

There can be multiple ‘unborn’ states and ‘dead’ states. Having multiple unborn states makes it easy to have agents with different characteristics enter the model. Having multiple dead states makes it easy for us to keep tallies of different causes of death. We should make the unborn states distinct from the dead states to prevent ‘reincarnation’—that is, the birth of a new agent that happens to equal an agent that previously died.

I’m hoping that when we proceed this way, we can shoehorn birth and death processes into the framework described last time, without really needing to modify it at all! All we’re doing is exploiting it in a new way.

Here’s one possible problem: if we start with a finite number of agents in the ‘unborn’ states, the population of agents can’t grow indefinitely! But this doesn’t seem very dire. For most agent-based models we don’t feel a need to let the number of agents grow arbitrarily large. Or we can relax the requirement that the set of agents is finite, and put an infinite number of agents u_1, u_2, u_3, \dots in an unborn state. This can be done without using an infinite amount of memory: it’s a ‘potential infinity’ rather than an ‘actual infinity’.
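
Here is one way this ‘potential infinity’ of unborn agents could look in code (just a sketch of my own, not a settled design): we keep a counter, and an agent’s fresh name is only materialized at the moment it is born, so no name is ever reused.

class UnbornPool:
    """A potentially infinite 'unborn' state: fresh agent names u1, u2, u3, ...
    are generated on demand, using no memory for agents not yet born."""
    def __init__(self, prefix: str = "u"):
        self.prefix = prefix
        self.count = 0

    def birth(self) -> str:
        """Return a fresh name, never equal to any name issued before."""
        self.count += 1
        return f"{self.prefix}{self.count}"

pool = UnbornPool()
first = pool.birth()    # 'u1'
second = pool.birth()   # 'u2'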

There could be other problems. So I’ll post this now before I think of them.


Agent-Based Models (Part 7)

28 February, 2024

Last time I presented a simple, limited class of agent-based models where each agent independently hops around a graph. I wrote:

Today the probability for an agent to hop from one vertex of the graph to another by going along some edge will be determined the moment the agent arrives at that vertex. It will depend only on the agent and the various edges leaving that vertex. Later I’ll want this probability to depend on other things too—like whether other agents are at some vertex or other. When we do that, we’ll need to keep updating this probability as the other agents move around.

Let me try to figure out that generalization now.

Last time I discovered something surprising to me. To describe it, let’s bring in some jargon. The conditional probability per time of an agent making a transition from its current state to a chosen other state (given that it doesn’t make some other transition) is called the hazard function of that transition. In a Markov process, the hazard function is actually a constant, independent of how long the agent has been in its current state. In a semi-Markov process, the hazard function is a function only of how long the agent has been in its current state.

For example, people like to describe radioactive decay using a Markov process, since experimentally it doesn’t seem that ‘old’ radioactive atoms decay at a higher or lower rate than ‘young’ ones. (Quantum theory says this can’t be exactly true, but nobody has seen deviations yet.) On the other hand, the death rate of people is highly non-Markovian, but we might try to describe it using a semi-Markov process. Shortly after birth it’s high—that’s called ‘infant mortality’. Then it goes down, and then it gradually increases.

We definitely want our agent-based models to be able to describe semi-Markov processes. What surprised me last time is that I could do it without explicitly keeping track of how long the agent has been in its current state, or when it entered its current state!

The reason is that we can decide which state an agent will transition to next, and when, as soon as it enters its current state. This decision is random, of course. But using random number generators we can make this decision the moment the agent enters the given state—because there is nothing more to be learned by waiting! I described an algorithm for doing this.
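
Here is a minimal illustration of that trick, with a Weibull residence-time distribution chosen purely as an example (nothing in the framework forces this choice). For each edge leaving the agent’s new state we draw a prospective departure time the moment the agent arrives, using inverse transform sampling, and the earliest draw wins. No draw ever needs to be revisited.

import math, random

def sample_departure_time(arrival_time: float, shape: float, scale: float,
                          rng: random.Random) -> float:
    """Draw the time at which an agent would leave its current state along
    one particular edge, assuming (for illustration) a Weibull-distributed
    residence time, i.e. a hazard that rises or falls with time in the state."""
    u = rng.random()                                         # uniform on [0, 1)
    waiting = scale * (-math.log(1.0 - u)) ** (1.0 / shape)  # inverse Weibull CDF
    return arrival_time + waiting

rng = random.Random(42)
t_leave = sample_departure_time(arrival_time=3.0, shape=1.5, scale=2.0, rng=rng)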

I’m sure this is well-known, but I had fun rediscovering it.

But today I want to allow the hazard function for a given agent to make a given transition to depend on the states of other agents. In this case, if some other agent randomly changes state, we will need to recompute our agent’s hazard function. There is probably no computationally feasible way to avoid this, in general. In some analytically solvable models there might be—but we’re simulating systems precisely because we don’t know how to solve them analytically.

So now we’ll want to keep track of the residence time of each agent—that is, how long it’s been in its current state. But William Waites pointed out a clever way to do this: it’s cheaper to keep track of the agent’s arrival time, i.e. when it entered its current state. This way you don’t need to keep updating the residence time. Whenever you need to know the residence time, you can just subtract the arrival time from the current clock time.

Even more importantly, our model should now have ‘informational links’ from states to transitions. If we want the presence or absence of agents in some state to affect the hazard function of some transition, we should draw a ‘link’ from that state to that transition! Of course you could say that anything is allowed to affect anything else. But this would create an undisciplined mess where you can’t keep track of the chains of causation. So we want to see explicit ‘links’.

So, here’s my new modeling approach, which generalizes the one we saw last time. For starters, a model should have:

• a finite set V of vertices or states,

• a finite set E of edges or transitions,

• maps u, d \colon E \to V mapping each edge to its source and target, also called its upstream and downstream,

• a finite set A of agents,

• a finite set L of links,

• maps s \colon L \to V and t \colon L \to E mapping each link to its source (a state) and its target (a transition).
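
Concretely, this data could be stored as follows. This is just a minimal sketch in Python; the names are mine, not from any of the software mentioned below.

from dataclasses import dataclass

@dataclass
class AgentModel:
    V: set[str]            # states (vertices)
    E: set[str]            # transitions (edges)
    u: dict[str, str]      # upstream state of each transition
    d: dict[str, str]      # downstream state of each transition
    A: set[str]            # agents
    L: set[str]            # links
    s: dict[str, str]      # source of each link: a state
    t: dict[str, str]      # target of each link: a transition

    def links_into(self, e: str) -> set[str]:
        """The set t^{-1}(e) of links whose target is the transition e."""
        return {l for l in self.L if self.t[l] == e}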

All of this stuff, except for the set of agents, is exactly what we had in our earlier paper on stock-flow models, where we treated people en masse instead of as individual agents. You can see this in Section 2.1 here:

• John Baez, Xiaoyan Li, Sophie Libkind, Nathaniel D. Osgood, Evan Patterson, Compositional modeling with stock and flow models.

So, I’m trying to copy that paradigm, and eventually unify the two paradigms as much as possible.

But they’re different! In particular, our agent-based models will need a ‘jump function’. This says when each agent a \in A will undergo a transition e \in E if it arrives at the state upstream to that transition at a specific time t \in \mathbb{R}. This jump function will not be deterministic: it will be a stochastic function, just as it was in last time’s formalism. But today it will depend on more things! Last time it depended only on a, e and t. But now the links will come into play.

For each transition e \in E, there is a set of links whose target is that transition, namely

t^{-1}(e) = \{\ell \in L \; \vert \; t(\ell) = e \}

Each link \ell \in t^{-1}(e) will have one state v as its source. We say this state affects the transition e via the link \ell.

We want the jump function for the transition e to depend on the presence or absence of agents in each state that affects this transition.

Which agents are in a given state? Well, it depends! But those agents will always form some subset of A, and thus an element of 2^A. So, we want the jump function for the transition e to depend on an element of

\prod_{\ell \in t^{-1}(e)} 2^A = 2^{A \times t^{-1}(e)}

I’ll call this element S_e. And as mentioned earlier, the jump function will also depend on a choice of agent a \in A and on the arrival time of the agent a.

So, we’ll say there’s a jump function j_e for each transition e, which is a stochastic function

j_e \colon A \times 2^{A \times t^{-1}(e)} \times \mathbb{R} \rightsquigarrow \mathbb{R}

The idea, then, is that j_e(a, S_e, t) is the answer to this question:

If at time t agent a arrived at the vertex u(e), and the agents at states linked to the edge e are described by the set S_e, when will agent a move along the edge e to the vertex d(e), given that it doesn’t do anything else first?

The answer to this question can keep changing as agents other than a move around, since the set S_e can keep changing. This is the big difference between today’s formalism and last time’s.
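
To make this less abstract, here is one toy jump function of the kind I have in mind (my own example, not anything canonical). It describes an ‘infection’ transition whose hazard is proportional to the number of agents currently in a linked ‘infectious’ state, so the waiting time, measured from the arrival time t, is exponential with a rate that depends on S_e:

import math, random

beta = 0.1                  # made-up transmission rate, per contact per unit time
rng = random.Random(0)

def infection_jump_time(agent, S_e, arrival_time):
    """Toy jump function j_e(a, S_e, t): the hazard is beta times the number
    of agents currently in the linked 'infectious' state, so the waiting
    time measured from the arrival time t is exponential with that rate.
    Here S_e is simplified to a dict from linked states to sets of agents."""
    n = len(S_e.get("infectious", set()))
    if n == 0:
        return math.inf     # no infectious contacts: this jump never fires
    return arrival_time + rng.expovariate(beta * n)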

Here’s how we run our model. At every moment in time we keep track of some information about each agent a \in A, namely:

• Which vertex is it at now? We call this vertex the agent’s state, \sigma(a).

• When did it arrive at this vertex? We call this time the agent’s arrival time, \alpha(a).

• For each edge e whose upstream is \sigma(a), when will agent a move along this edge if it doesn’t do anything else first? Call this time T(a,e).

I need to explain how we keep updating these pieces of information (supposing we already have them). Let’s assume that at some moment in time t_i an agent makes a transition. More specifically, suppose agent \underline{a} \in A makes a transition \underline{e} from the state

\underline{v} = u(\underline{e}) \in V

to the state

\underline{v}' = d(\underline{e}) \in V.

At this moment we update the following information:

1) We set

\alpha(\underline{a}) := t_i

(So, we update the arrival time of that agent.)

2) We set

\sigma(\underline{a}) := \underline{v}'

(So, we update the state of that agent.)

3) We recompute the subset of agents in the state \underline{v} (by removing \underline{a} from this subset) and in the state \underline{v}' (by adding \underline{a} to this subset).

4) For every transition f that’s affected by the state \underline{v} or the state \underline{v}', and for every agent a in the upstream state of that transition, we set

T(a,f) := j_f(a, S_f, \alpha(a))

where S_f is the element of 2^{A \times t^{-1}(f)} saying which subset of agents is in each state affecting the transition f. (So, we update our table of times at which agent a will make the transition f, given that it doesn’t do anything else first.)

Now we need to compute the next time at which something happens, namely t_{i+1}. And we need to compute what actually happens then!

To do this, we look through our table of times T(a,e) for each agent a and all transitions out of the state that agent is in, and see which time is smallest. If there’s a tie, break it. Then we reset \underline{a} and \underline{e} to be the agent-edge pair that minimizes T(a,e).

5) We set

t_{i+1} := T(\underline{a},\underline{e})

Then we loop back around to step 1), but with i+1 replacing i.

Whew! I hope you followed that. If not, please ask questions.
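
For anyone who prefers code to prose, here is a rough Python sketch of the loop above, assuming an AgentModel as sketched earlier and a dictionary of jump functions like infection_jump_time. It is only a sketch: tie-breaking, stopping conditions and efficiency are all treated casually.

import math

def run(model, jump, sigma, alpha, t_max):
    """Event loop for steps 1)-5).  jump[e] is the jump function j_e;
    sigma maps each agent to its state, alpha to its arrival time."""
    T = {}                                    # tentative jump times T[(a, e)]

    def S(e):                                 # agents in each state linked to e
        return {model.s[l]: {a for a in model.A if sigma[a] == model.s[l]}
                for l in model.links_into(e)}

    def refresh(transitions):                 # step 4): recompute affected times
        for e in transitions:
            for a in model.A:
                if sigma[a] == model.u[e]:
                    T[(a, e)] = jump[e](a, S(e), alpha[a])

    refresh(model.E)                          # fill the whole table initially
    while T:
        (a_, e_), t_next = min(T.items(), key=lambda kv: kv[1])   # step 5)
        if t_next > t_max or t_next == math.inf:
            break
        v, v_new = model.u[e_], model.d[e_]
        alpha[a_] = t_next                    # step 1): new arrival time
        sigma[a_] = v_new                     # step 2): new state
        for key in [k for k in T if k[0] == a_]:
            del T[key]                        # a_'s old tentative times are stale
        affected = {e for e in model.E
                    if any(model.s[l] in (v, v_new) for l in model.links_into(e))}
        affected |= {e for e in model.E if model.u[e] == v_new}   # a_'s new options
        refresh(affected)                     # steps 3) and 4)
    return sigma, alpha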


Agent-Based Models (Part 6)

21 February, 2024

Today I’d like to start explaining an approach to stochastic time evolution for ‘state charts’, a common approach to agent-based models. This is ultimately supposed to interact well with Kris Brown’s cool ideas on formulating state charts using category theory. But one step at a time!

I’ll start with a very simple framework, too simple for what we need. Later I will make it fancier—unless my work today turns out to be on the wrong track.

Today I’ll describe the motion of agents through a graph, where each vertex of the graph represents a possible state. Later I’ll want to generalize this, replacing the graph by a Petri net. This will allow for interactions between agents.

Today the probability for an agent to hop from one vertex of the graph to another by going along some edge will be determined the moment the agent arrives at that vertex. It will depend only on the agent and the various edges leaving that vertex. Later I’ll want this probability to depend on other things too—like whether other agents are at some vertex or other. When we do that, we’ll need to keep updating this probability as the other agents move around.

Okay, let’s start.

We begin with a finite graph of the sort category theorists like, sometimes called a ‘quiver’. Namely:

• a finite set V of vertices or states,

• a finite set E of edges or transitions,

• maps u, d \colon E \to V mapping each edge to its source and target, also called its upstream and downstream.

Then we choose

• a finite set A of agents.

Our model needs one more ingredient, a stochastic map called the jump function j, which I will describe later. But let’s start talking about how we ‘run’ the model.

At each moment in time t \in \mathbb{R} there will be a state map

\sigma \colon A \to V

saying what vertex each agent is at. Note, I am leaving the time-dependence of \sigma implicit here! We could call it \sigma_t if we want, but I think that will ultimately be more confusing than helpful. I prefer to think of \sigma as a kind of ‘database’ that we will keep updating as time goes on.

Regardless, our main goal is to describe how this map \sigma changes with time: given \sigma initially we want our software to compute it for later times. But this computation will be stochastic, not deterministic. Practically speaking, this means we’ll use (pseudo)random number generators as part of this computation.

We could subdivide the real line \mathbb{R} into lots of little ‘time steps’ and do a calculation at each time step to figure out what each agent will do at that step: that’s called incremental time progression. But that’s computationally expensive.

So instead, we’ll use a version of discrete event simulation. We only keep track of events: times when an agent jumps from one state to another. Between events, nothing happens, so our simulation can jump directly from one event to the next.

So, whenever an event happens, we just need to compute the time at which the next event happens, and what actually happens then: that is, which agent moves from the state it’s in to some other state, and what that other state is.

For this we need to think about what the agents can do. For each vertex v \in V there is a set u^{-1}(v) \subseteq E of edges going from that vertex to other vertices. An agent at vertex v can move along any of these edges and reach a new vertex. So these are the questions we need to answer about this agent:

which edge will it move along?

and

when will it do this?

We will answer these questions stochastically, and we will do it by fixing a stochastic map called the jump function:

j \colon A \times E \times \mathbb{R} \rightsquigarrow \mathbb{R}

Briefly, j tells us the time for a specific agent to make a specific transition if it arrived at the state upstream to that transition at a specific time.

The squiggly arrow means that j is not an ordinary map, but rather a stochastic map. Mathematically, this means it maps points in A \times E \times \mathbb{R} to probability distributions on \mathbb{R}. In practice, a stochastic map is a map whose value depends not only on the inputs to that map, but also on a random number generator.

Suppose a is an agent, e \in E is an edge of our graph, and t \in \mathbb{R} is a time. Then j(a, e, t) is the answer to this question:

If at time t agent a arrives at the vertex u(e), when will this agent move along the edge e to the vertex d(e), given that it doesn’t do anything else first?
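
For instance, here is one toy jump function of this shape (my own example, with made-up numbers): each agent has its own rate for each edge, modulated by a seasonal factor depending on the arrival time, and the waiting time is exponential.

import math, random

def jump(agent: str, edge: str, arrival_time: float,
         rates: dict[tuple[str, str], float], rng: random.Random) -> float:
    """Toy jump function j(a, e, t): an exponential waiting time whose rate
    depends on the agent, the edge, and (through a seasonal factor) the
    time t at which the agent arrived at the upstream vertex."""
    base = rates[(agent, edge)]                         # agent- and edge-specific rate
    seasonal = 1.0 + 0.5 * math.sin(2.0 * math.pi * arrival_time / 365.0)
    return arrival_time + rng.expovariate(base * seasonal)

rng = random.Random(1)
rates = {("alice", "recover"): 0.2}
t_jump = jump("alice", "recover", arrival_time=10.0, rates=rates, rng=rng)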

Here’s what we do with this information. At every moment in time we keep track of some information about each agent a \in A, namely:

• Which vertex is it at now? This is \sigma(a).

• For each edge e whose upstream is \sigma(a), when will agent a move along this edge if it doesn’t do anything else first? Call this time T(a,e).

I need to explain how we compute these. Let’s assume that at some moment in time t_i an agent has just moved along some edge. More specifically, suppose agent a_0 \in A has just moved to some vertex v_0 \in V. At this moment we update the following information:

1) We set

\sigma(a_0) := v_0

(So, we update the state of the agent.)

2) For every edge e with u(e) = v_0, we set

T(a_0,e) := j(a_0, e, t_i)

(So, we update our table of times at which agent a_0 will move along each available edge, given that it doesn’t do anything else first.)

Now we need to compute the next time at which something happens, namely t_{i+1}. And we need to compute what actually happens then!

To do this, we look through our table of times T(a,e) for all agents a and all edges e with u(e) = \sigma(a), and see which time is smallest. If there’s a tie, break it by adding a little bit to some times T(a,e). Then let \underline{a}, \underline{e} be the agent-edge pair that minimizes T(a,e).

3) We set

t_{i+1} := T(\underline{a},\underline{e})

Then here’s what we do at time t_{i+1}. We take the state of agent \underline{a} and update it, to indicate that it’s moved along the edge \underline{e}. More precisely:

4) We set

\sigma(\underline{a}) := d(\underline{e})

And now we go back to step 1), and keep repeating this loop.
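
Since in this simple setting the times T(a, e) never change once drawn, one convenient implementation (my own choice, not something from the post) keeps the pending jumps in a priority queue and repeatedly pops the earliest one, discarding any event that has gone stale because its agent already moved:

import heapq

def simulate(E, u, d, A, j, sigma, t_max, rng):
    """Discrete event simulation of the loop above.  j(a, e, t, rng) is the
    jump function; sigma maps each agent to its initial vertex (the vertex
    set V is implicit in sigma, u and d)."""
    arrival = {a: 0.0 for a in A}    # when each agent entered its current vertex
    heap = []                        # events: (jump time, agent, edge, arrival time)

    def schedule(a):
        for e in E:
            if u[e] == sigma[a]:     # edges leaving the agent's current vertex
                heapq.heappush(heap, (j(a, e, arrival[a], rng), a, e, arrival[a]))

    for a in A:
        schedule(a)                  # draw each agent's initial jump times
    while heap:
        t, a, e, drawn_at = heapq.heappop(heap)
        if t > t_max:
            break
        if drawn_at != arrival[a]:
            continue                 # stale event: the agent moved since this draw
        sigma[a] = d[e]              # the agent moves along the edge e
        arrival[a] = t
        schedule(a)                  # draw its jump times out of the new vertex
    return sigma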

Conclusion

As you can see, I’ve spent most of my time describing an algorithm. But my goal was really to figure out what data we need to describe an agent-based model of this specific sort. And I’ve seen that we need:

• a graph u,d \colon E \to V of states and transitions

• a set A of agents

• a stochastic map j \colon A \times E \times \mathbb{R} \rightsquigarrow \mathbb{R} describing the time for a specific agent to make a specific transition if it arrived in the state upstream to that transition at a specific time… and nothing else happens first.

Note that this last item gives us great flexibility. We can describe continuous-time Markov chains and also their semi-Markov generalization where the hazard rate of an edge (the probability per time for an agent to jump along that edge, assuming it doesn’t do anything else first) depends on how long the agent has resided in the upstream vertex. But we can also make these hazard rates have explicit time-dependence, and they can also depend on the agent!


Agent-Based Models (Part 5)

15 February, 2024

Agent-based models are crucial in modern epidemiology. But currently, many of these models are large monolithic computer programs—opaque to everyone but their creators. That’s no way to do science!

Our team of category theorists, computer scientists, and public health experts has come up with a cool plan to create agent-based models out of small reusable modules which can be explained, tested, compared and shared. This will make it easier to compare different models and build new ones. As a test case, we plan to apply these models to the vaping crisis—comparing them with traditional monolithic models.

If you know some philanthropists who might want to fund some potentially revolutionary research in public health and computer science, please let them know about this. I’m doing all my work for free, but some of my team members can only work on this if they get paid.

Here are the other team members:

Kris Brown and Evan Patterson, who work on computer science and scientific modeling with category theory at the Topos Institute,

Nathaniel Osgood, who works on computer science and epidemiological modeling at the University of Saskatchewan, and

Patty Mabry, who works on public health and human behavior modeling at HealthPartners Institute.

After a few weeks of hard work, racing against deadlines and many obstacles, our team has just finished writing a grant proposal for this project. It was worth writing, because we came up with some really exciting ideas and nailed down a lot of technical details.

Brendan Fong, head of the Topos Institute, wrote:

Very excited for this project—it takes over a decade of work in applied category theory and uses it to make a significant difference in the science of behaviour change, by modularising and systematising modelling. The goal is to show how it addresses very concrete problems like the public health of vaping.

There’s a lot to say, but for now I’ll just give a sketchy summary of our project.

Overview

Modeling is a key to understanding the specific mechanisms that underlie human behavior. Moreover, explicit examination and experimentation to isolate the behavioral mechanisms responsible for the effectiveness of behavioral interventions are foundational in advancing the field of Science of Behavior Change (SOBC). Unfortunately, this has been held back by the difficulty of precisely comparing behavioral mechanisms that are formulated in disparate contexts using different operational definitions. This is a problem we aim to solve.

We propose to initiate a new era of epidemiological modeling, in which agent-based models (ABMs) can be flexibly created from standardized behavioral mechanism modules that can be easily combined, shared, adapted, and reused in diverse ways for different types of insight. To do this, we must transform the sprawling repertoire of ABM methodologies into a systematic science of modular ABMs. This requires developing new mathematics based on Applied Category Theory (ACT). The proposers have already used ACT to develop modular models that represent human behavior en masse. To capture human behavior at the individual level in a modular way demands significant further conceptual advances, which we propose to make here.

We will develop the mathematics of modular ABMs and implement it by creating modules that capture the behavioral mechanisms put forward by SOBC: self-regulation, interpersonal & social processes, and stress-reactivity & resilience. We will evaluate this approach in a test case—the vaping crisis—by using these modules to build proof-of-concept ABMs of this crisis and comparing these new modular ABMs to existing models. We will create open-access libraries of modules for specific behavioral mechanisms and larger ABMs built from these modules. We will also run education and training events to disseminate our work.

Intellectual Merit

This interdisciplinary project will have a lasting and substantial impact across three fields: Applied Category Theory, Science of Behavior Change, and Incorporating Human Behavior in Epidemiological Models. Our unified approach to describing system dynamics in a modular and functorial way will be a major contribution to ACT. The SOBC has ushered in a new era in behavioral intervention design based on the study of behavioral mechanisms. Our work incorporating SOBC-studied behavioral mechanisms as standardized modules in ABMs will transform the field of epidemiological modeling and lead to more rapid progress in SOBC.

The key contribution of our work to all three fields is that it provides the ability to easily compare models with different operational definitions of behavioral mechanisms—since rather than a model being a large monolithic structure, opaque to everyone but its creators, it can now be built from standardized behavioral mechanism modules, and the effect of changing a single module can easily be studied.

Broader Impacts

Our work will be transformative to behavioral science, and the magnitude of the impact can hardly be overstated. The ability to communicate and compare behavioral mechanisms will pave the way for large-scale leveraging of evidence from across behavioral epidemiological models for collective use. A valuable impact of our work will be in providing an accessible framework for reproducible, standardized models built from transparent, mathematically well-defined modules.

Until now, interacting with epidemiological models has required knowledge of mathematics, programming skills, and access to proprietary software. Our ACT-enabled modular representation of behavioral mechanisms will make it possible for health professionals and community members of all levels to participate in the construction of epidemiological models. We will also build capacity in applying our methodology through events funded by this proposal.


What Can Mathematicians Do About Climate Change?

25 November, 2023

For my Fields Institute project on Mathematics of Climate Change, I’m trying to compile a list of ways that mathematicians can help the human response to climate change.

Here’s a short list, with reading material for each one. I threw it together quite quickly. I’m sure I left out some big important topics, and I probably haven’t found the best reading material for many topics. Can you help out?

Before you try:

Note that right now I’m looking for topics that are abstract and general enough that they will resonate with mathematicians. I’m imagining that mathematicians could get interested in these topics and go to workshops where they talk to experts about how these topics are relevant to dealing with climate change.

Also note that I’m not including climate modeling—for example, solving nonlinear PDE to model the behavior of the oceans and atmosphere. The big question I’m interested in here is not what will the Earth do? but what should we do?

Finally, note that I’m looking for fairly broad topics, not individual problems within these topics. You should imagine these as possible topics for workshops or conferences.

Okay, here we go. The Wikipedia articles here are okay for a quick overview, but you’ll need to dig into their bibliographies to go further.

Parameter estimation

How can we improve methods for estimating parameters from data in complex models?

• Wikipedia, Estimation theory.

• William Broniec, Sungeun An, Spencer Rugaber and Ashok Goel, Guiding parameter estimation of agent-based modeling through knowledge-based function approximation.

Causal discovery and attribution

How can we improve our frameworks for determining what causes what?

• Wikipedia, Extreme event attribution.

• Clark Glymour, Kun Zhang and Peter Spirtes, Review of causal discovery methods based on graphical models.

• Alessio Zanga and Fabio Stella, A survey on causal discovery: theory and practice.

Optimization

How can we optimize decisions when our models of these situations are complex and uncertain?

• Wikipedia, Mathematical optimization.

• Hoai An Le Thi, Hoai Minh Le and Tao Pham Dinh, Optimization of Complex Systems: Theory, Models, Algorithms and Applications. (Not open access, but see LibGen.)

• Marco Janssen, Jan Rotmans and Koos Vrieze, Climate change: optimization of response strategies. (Not open access, but see LibGen and SciHub.)

General systems theory

How can existing work using modern mathematics to study general dynamical systems be leveraged to create better simulation software for improving the human response to climate change?

• David Jaz Myers, Categorical Systems Theory.

• Brendan Fong, The Algebra of Open and Interconnected Systems.

• Matteo Capucci, Bruno Gavranović, Jules Hedges and Eigil Fjeldgren Rischel, Towards foundations of categorical cybernetics.

• John Baez, Xiaoyan Li, Sophie Libkind, Nathaniel D. Osgood and Eric Redekopp, A categorical framework for modeling with stock and flow diagrams.

Agent-based models

What is a good general mathematical framework for agent-based models, that would allow them to be built in a transparent, unified yet flexible, and composable manner?

• Wikipedia, Agent-based model.

• Wikipedia, Agent-based computational economics.

• Wikipedia, Agent-based social simulation.

• Sharan S. Agrawal, Scalable Agent-Based Models for Optimized Policy Design: Applications to the Economics of Biodiversity and Carbon.

Uncertainty quantification

How should we measure the level of confidence or reliability of predictions?

• Wikipedia, Uncertainty quantification.

• Ralph C. Smith, Uncertainty Quantification: Theory, Implementation, and Applications. (Not open access, but see LibGen.)

Extreme events

How should we quantify the risks associated to low-probability but high-impact events?

• Wikipedia, Large deviations theory.

• Wikipedia, Heavy tailed distribution.

• Wikipedia, Fat-tailed distribution.

• Wikipedia, Extreme value theory.

• Wikipedia, Tail risk.

Tipping point theory

How can we improve our ability to detect the approach to ‘tipping points’, where a system dramatically changes its behavior?

• Wikipedia, Tipping points in the climate system.

• Valerie Livina, Tipping Point Analysis and Applications.

• Valerie Livina and Tim M. Lenton, A modified method for detecting incipient bifurcations in a dynamical system.

• T. M. Bury, C. T. Bauch and M. Anand, Detecting and distinguishing tipping points using spectral early warning signals.

Game theory

How can we use game theory to improve the decision-making process regarding climate change?

• Peter John Wood, Climate change and game theory: a mathematical survey.

• Parkash Chander, Game Theory and Climate Change. (Not open access, but table of contents and introduction available here.)

• Matthew Kopec, Game theory and the self-fulfilling climate tragedy.

• Further papers listed here.

Decision theory

How can we improve our ability to make decisions in dynamical situations where the information available changes based on our decisions?

• Wikipedia, Decision theory.

• John Winsor Pratt, Howard Raiffa and Robert Schlaifer, Introduction to Statistical Decision Theory. (Not open access, but see LibGen.)

• N. Osgood, K. Yee, W. An and W. Grassmann, Addressing dynamically complex decision problems using decision analysis and simulation. (Not open access.)

Network theory

How can we improve collective decision-making by better understanding the behavior of social and communication networks?

• Wikipedia, Network theory.

• Mark Newman, Albert-László Barabási and Duncan J. Watts, The Structure and Dynamics of Networks. (Not open-access.)

• Joseph B. Bak-Coleman et al, Stewardship of global collective behavior.

• Linton C. Freeman, The Development of Social Network Analysis.


Mathematics for Climate Change

13 November, 2023

Some good news! I’m now helping lead a new Fields Institute program on the mathematics of climate change.

You may have heard of the Fields Medal, one of the most prestigious math prizes. But the Fields Institute, in Toronto, holds a lot of meetings on mathematics. So when COVID hit, it was a big problem. The director of the institute, Kumar Murty, decided to steer into the wind and set up a network of institutions working on COVID, including projects on the mathematics of infectious disease and systemic risks. This worked well, so now he wants to start a project on the mathematics of climate change. Nathaniel Osgood and I are leading it.

Nate, as I call him, is a good friend and collaborator. He’s a computer scientist at the University of Saskatchewan and, among other things, an expert on epidemiology who helped lead COVID modeling for Canada. We’re currently using category theory to develop a better framework for agent-based models.

Nate and I plan to focus the Fields Institute project not on the geophysics of climate change—e.g., trying to predict how bad global warming will be—but the human response to it—that is, figuring out what we should do! This project will be part of the Fields Institute’s Centre for Sustainable Development.

I’ll have a lot more to say about this. But for now, let me just say: I’m very excited to have this opportunity! Mathematics may not be the main thing we need to battle climate change, but there are important things in this realm that can only be done with the help of math. I know a lot of mathematicians, computer scientists, statisticians and others with quantitative skills want to do something about climate change. I aim to help them do it.


Software for Compositional Modeling in Epidemiology

25 October, 2023

Here is the video of my talk at Applied Category Theory 2023. While it has ‘epidemiology’ in the title, it’s mostly about general ways to use category theory to build flexible, adaptable models:

Here are the slides:

Software for compositional modeling in epidemiology.

Abstract. Mathematical models of disease are important and widely used, but building and working with these models at scale is challenging. Many epidemiologists use “stock and flow diagrams” to describe ordinary differential equation (ODE) models of disease dynamics. In this talk we describe and demonstrate two software tools for working with such models. The first, called StockFlow, is based on category theory and written in AlgebraicJulia. The second, called ModelCollab, runs on a web browser and serves as a graphical user interface for StockFlow. Modelers often regard diagrams as an informal step toward a mathematically rigorous formulation of a model in terms of ODEs. However, stock and flow diagrams have a precise mathematical syntax. Formulating this syntax using category theory has many advantages for software, but in this talk we explain three: functorial semantics, model composition, and model stratification.

You can get the code for StockFlow here and ModelCollab here.

For more, read these:

• John Baez, Xiaoyan Li, Sophie Libkind, Nathaniel Osgood and Evan Patterson, Compositional modeling with stock and flow diagrams.

• John Baez, Xiaoyan Li, Sophie Libkind, Nathaniel D. Osgood, Long Pham and Eric Redekopp, A categorical framework for modeling with stock and flow diagrams.

• Andrew Baas, James Fairbanks, Micah Halter, Sophie Libkind and Evan Patterson, An algebraic framework for structured epidemic modeling.

• Sophie Libkind, Andrew Baas, Evan Patterson and James Fairbanks,
Operadic modeling of dynamical systems: mathematics and computation.


Agent-Based Models (Part 1)

6 July, 2023

I’m working with Nate Osgood and other folks to develop better modeling tools for epidemiologists. Right now we’re trying to develop a category-based framework for agent-based models. It’s a bit tough since many different techniques are used in such models, without any overarching discipline yet—Nate likened it to the ‘wild West’. I have a lot to learn, but I thought I should start keeping notes on our conversations.

Agent-based models involve multiple ‘agents’ that have ‘states’ which change over time, often including location in some sort of ‘space’, and which interact via various ‘networks’. Our goal is to create convenient, flexible software for creating agent-based models, composing larger ones out of smaller parts, refining existing models, running the models, and viewing them as they run in many different ways.

There is already software for doing these things, such as AnyLogic, NetLogo and Repast. Nate is familiar with it, since his job is building agent-based models. But he’s dissatisfied with it for many reasons. We want to make better software by first analyzing the whole problem with the help of math. All those quoted words—‘agents’, ‘states’, ‘space’, ‘networks’—need to be clarified in a way that’s quite general but also practical.

I’m still learning about all this stuff. Here’s one thing I learned that makes me happy. The internal dynamics of agents—that is, how their states change with time—is often described using two methods:

stock and flow diagrams

state diagrams

Stock and flow diagrams are good for describing continuous quantities that evolve according to ordinary differential equations, while state diagrams are good for describing discrete quantities that evolve in steps. (In either case the evolution could be deterministic or stochastic.)

We’ve already worked out the category theory of stock and flow diagrams, and used it to create software for working with such diagrams:

• John Baez, Xiaoyan Li, Sophie Libkind, Nathaniel D. Osgood and Eric Redekopp, A categorical framework for modeling with stock and flow diagrams, to appear in Mathematics for Public Health, Springer.

• John Baez, Xiaoyan Li, Sophie Libkind, Nathaniel Osgood and Evan Patterson, Compositional modeling with stock and flow diagrams, to appear in Proceedings of Applied Category Theory 2022.

So, it seems we should work out the category theory of state diagrams! Luckily they are similar to stock and flow diagrams, so a lot of the same math should apply: decorated cospans, the operad for undirected wiring diagrams, etc. So, it shouldn’t be a huge task.

However, there are other issues we need to deal with. For example, in a ‘hierarchical state diagram’ there may be states within a single state. For example, in the state ‘infected’ there may be states such as ‘diagnosed’ and ‘undiagnosed’, and in ‘diagnosed’ there may be many states describing whether and how someone has been treated.

Also, there may be variables defined only in a given state, whose dynamics are governed by some stock and flow diagram.

So, we ultimately need a good mechanism for building stock and flow diagrams and state diagrams hierarchically, and mixing the two. Maybe we shouldn’t even treat them as two distinct kinds of diagram, but rather as two ways of using some more general kind of diagram!

I’ll try to take one step at a time. Getting a good category theoretic treatment of state diagrams should be pretty quick.


Adjoint School 2023

18 December, 2022

Are you interested in applying category-theoretic methods to problems outside of pure mathematics? Apply to the Adjoint School!

Apply here. And do it soon.

• January 9, 2023. Application Due.

• February – July, 2023. Learning Seminar.

• July 24 – 28, 2023. In-person Research Week at University of Maryland, College Park, USA

Participants are divided into four-person project teams. Each project is guided by a mentor and a TA. The Adjoint School has two main components: an online learning seminar that meets regularly between February and June, and an in-person research week held in the summer adjacent to the Applied Category Theory Conference.

During the learning seminar you will read, discuss, and respond to papers chosen by the project mentors. Every other week a pair of participants will present a paper, which will be followed by a group discussion. After the learning seminar each pair of participants will also write a blog post, based on the paper they presented, for The n-Category Café.

Projects and Mentors

• Message passing logic for categorical quantum mechanics – Mentor: Priyaa Srinivasan
• Behavioural metrics, quantitative logics and coalgebras – Mentor: Barbara König
• Concurrency in monoidal categories – Mentor: Chris Heunen
• Game comonads and finite model theory – Mentor: Dan Marsden

See more information about research projects at https://adjointschool.com/2023.html.

Organizers:

• Ana Luiza Tenorio 
• Angeline Aguinaldo
• Elena Di Lavore 
• Nathan Haydon


Categories and Epidemiology

1 November, 2022

I gave a talk about my work using category theory to help design software for epidemic modeling:

Category theory and epidemiology, African Mathematics Seminar, Wednesday November 2, 2022, 3 pm Nairobi time or noon UTC. Organized by Layla Sorkatti and Jared Ongaro.

This talk was a lot less technical than previous ones I’ve given on this subject, which were aimed mainly at category theorists! You can see it here:

Abstract. Category theory provides a general framework for building models of dynamical systems. We explain this framework and illustrate it with the example of “stock and flow diagrams”. These diagrams are widely used for simulations in epidemiology. Although tools already exist for drawing these diagrams and solving the systems of differential equations they describe, we have created a new software package called StockFlow which uses ideas from category theory to overcome some limitations of existing software. We illustrate this with code in StockFlow that implements a simplified version of a COVID-19 model used in Canada. This is joint work with Xiaoyan Li, Sophie Libkind, Nathaniel Osgood and Evan Patterson.

Check out these papers for more:

• John Baez, Xiaoyan Li, Sophie Libkind, Nathaniel Osgood and Evan Patterson, Compositional modeling with stock and flow diagrams.

• Andrew Baas, James Fairbanks, Micah Halter, Sophie Libkind and Evan Patterson, An algebraic framework for structured epidemic modeling.

For some more mathematical talks on the same subject, go here.