As a mathematician who has gotten interested in the problems facing our planet, I’ve been trying to cook up some new projects to work on. Over the decades I’ve spent a lot of time studying quantum field theory, quantum gravity, *n*-categories, and numerous pretty topics in pure math. My accumulated knowledge doesn’t seem terribly relevant to my new goals. But I don’t feel like doing a complete ‘brain dump’ and starting from scratch. And my day job still requires that I prove theorems.

#### Green Mathematics

I wish there were a branch of mathematics—in my dreams I call it **green mathematics**—that would interact with biology and ecology just as fruitfully as traditional mathematics interacts with physics. If the 20th century was the century of physics, while the 21st is the century of biology, shouldn’t mathematics change too? As we struggle to understand and improve humanity’s interaction with the biosphere, shouldn’t mathematicians have some role to play?

Of course, it’s possible that when you study truly complex systems—from a living cell to the Earth as a whole—mathematics loses the unreasonable effectiveness it so famously has when it comes to simple things like elementary particles. So, maybe there is no ‘green mathematics’.

Or maybe ‘green mathematics’ can only be born after we realize it needs to be fundamentally different than traditional mathematics. For starters, it may require massive use of computers, instead of the paper-and-pencil methods that work so well in traditional math. Simulations might become more important than proofs. That’s okay with me. Mathematicians like things to be elegant—but one can still have elegant definitions and elegant models, even if one needs computer simulations to see how the models behave.

Perhaps ‘green mathematics’ will require a radical shift of viewpoint that we can barely begin to imagine.

It’s also possible that ‘green mathematics’ already exists in preliminary form, scattered throughout many different fields: mathematical biology, quantitative ecology, bioinformatics, artificial life studies, and so on. Maybe we just need more mathematicians to learn these fields and seek to synthesize them.

I’m not sure what I think about this ‘green mathematics’ idea. But I think I’m getting a vague feel for it. This may sound corny, but I feel it should be about structures that are more like this:

than this:

I’ve spent a long time exploring the crystalline beauty of traditional mathematics, but now I’m feeling an urge to study something slightly more earthy.

#### Network Theory

When dreaming of grand syntheses, it’s easy to get bogged down in vague generalities. Let’s start with something smaller and more manageable.

Network theory, and the use of diagrams, have emerged independently in many fields of science. In particle physics we have Feynman diagrams:

In the humbler but more practical field of electronics we have circuit diagrams:

Throughout engineering we also have various other styles of diagram, such as bond graphs:

I’ve already told you about Petri nets, which are popular in computer science… but also nice for describing chemical reactions:

‘Chemical reaction networks’ do a similar job, in a more primitive way:

Chemistry shades imperceptibly into biology, and biology uses so many styles of diagram that an organization has tried to standardize them:

• Systems Biology Graphical Notation (SBGN) homepage.

SBGN is made up of 3 different languages, representing different visions of biological systems. Each language involves a comprehensive set of symbols with precise semantics, together with detailed syntactic rules for how maps are to be interpreted:

1) The Process Description language shows the temporal course of biochemical interactions in a network.

2) The Entity Relationship language lets you see all the relationships in which a given entity participates, regardless of the temporal aspects.

3) The Activity Flow language depicts the flow of information between biochemical entities in a network.

Biology shades into ecology, and in the 1950s, Howard T. Odum developed the ‘Energy Systems Language’ while studying tropical forests. Odum is now considered to be the founder of ‘systems ecology’. If you can get ahold of this big fat book, you’ll see it’s packed with interesting diagrams describing the flow of energy through ecosystems:

• Howard T. Odum, *Systems Ecology: an Introduction*, Wiley-Interscience, New York, 1983.

His language is sometimes called ‘Energese’, for short:

The list goes on and on, and I won’t try for completeness… but we shouldn’t skip probability theory, statistics and machine learning! A Bayesian network, also known as a “belief network”, is a way to represent knowledge about some domain: it consists of a graph where the nodes are labelled by random variables and the edges represent probabilistic dependencies between these random variables. Various styles of diagrams have been used for these:
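To make the idea of a Bayesian network concrete, here is a minimal sketch in Python. The graph (Rain → Sprinkler, Rain → WetGrass, Sprinkler → WetGrass) and all the probabilities are made up for illustration, not taken from any real dataset; the point is just that the joint distribution factors along the edges of the graph:

```python
from itertools import product

# A toy Bayesian network: Rain -> Sprinkler, Rain -> WetGrass,
# Sprinkler -> WetGrass. Each CPT gives P(node = True) for each
# assignment of the node's parents. All numbers are illustrative.
p_rain = 0.2
p_sprinkler = {True: 0.01, False: 0.4}             # keyed by Rain
p_wet = {(True, True): 0.99, (True, False): 0.9,   # keyed by (Sprinkler, Rain)
         (False, True): 0.8, (False, False): 0.0}

def bernoulli(p, value):
    return p if value else 1.0 - p

def joint(rain, sprinkler, wet):
    """Chain rule along the DAG: P(R) * P(S|R) * P(W|S,R)."""
    return (bernoulli(p_rain, rain)
            * bernoulli(p_sprinkler[rain], sprinkler)
            * bernoulli(p_wet[(sprinkler, rain)], wet))

bools = [True, False]

# The factored joint sums to 1 over all eight assignments.
total = sum(joint(r, s, w) for r, s, w in product(bools, repeat=3))

# Exact inference by enumeration: P(Rain | WetGrass).
p_wet_true = sum(joint(r, s, True) for r, s in product(bools, repeat=2))
p_rain_given_wet = sum(joint(True, s, True) for s in bools) / p_wet_true
```

Brute-force enumeration like this only works for tiny networks; serious belief-network software exploits the graph structure to avoid summing over all assignments.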

And don’t forget neural networks!

#### What Mathematicians Can Do

It’s clear that people from different subjects are reinventing the same kinds of diagrams. It’s also clear that diagrams are being used in a number of fundamentally different ways. So, there’s a lot to sort out.

I already mentioned one attempt to straighten things out: Systems Biology Graphical Notation. But that’s not the only one. For example, in 2001 the International Council on Systems Engineering set up a committee to customize their existing Unified Modeling Language and create something called Systems Modeling Language. This features *nine types of diagrams!*

So, people are already trying to systematize the use of diagrams. But mathematicians should join the fray.

Why? Because mathematicians are especially good at soaring above the particulars and seeing general patterns. Also, they know ways to think of diagrams, not just as handy tools, but as *rigorously defined structures that you can prove theorems about*… with the help of category theory.

I’ve written a bit about diagrams already, but not their ‘green’ applications. Instead, I focused on their applications to traditional subjects like topology, physics, logic and computation:

• John Baez and Aaron Lauda, A prehistory of n-categorical physics, to appear in *Deep Beauty: Mathematical Innovation and the Search for an Underlying Intelligibility of the Quantum World*, ed. Hans Halvorson, Cambridge U. Press.

• John Baez and Mike Stay, A Rosetta stone: topology, physics, logic and computation, in *New Structures for Physics*, ed. Bob Coecke, Lecture Notes in Physics vol. 813, Springer, Berlin, 2011, pp. 95-174.

It would be good to expand this circle of ideas to include chemistry, biology, ecology, statistics, and so on. There should be a mathematical theory underlying the use of networks in *all* these disciplines.

I’ve started a project on this with Jacob Biamonte, who works on two other applications of diagrams, namely to quantum computation and condensed matter physics:

• Jacob D. Biamonte, Stephen R. Clark and Dieter Jaksch, Categorical tensor networks.

So far we’ve focused on one aspect: stochastic Petri nets, which are used to describe chemical reactions and also certain predator-prey models in quantitative ecology. In the posts to come, I want to show how ideas from quantum field theory be used in studying stochastic Petri nets, and how this relates to the ‘categorification’ of Feynman diagram theory.

I’ve seen thousands of mistakes made because people accepted a wrong result of a computer or misinterpreted the results, although they could have spotted the mistake by analyzing the problem with pen and paper.

It’s like a class in complex analysis where the students learn how to evaluate integrals of real functions over the real axis using residue calculus, sooner or later someone will make a mistake (or use a computer algebra program that has a bug) and get a

complex numberas a result of areal integralof areal function. Or a finite number for a divergent integral. It will always be useful to be able to tell that such a result has to be wrong, based on traditional pen-and-paper techniques :-)(To me computers are nothing more than advanced pen-and-papers anyway.)

One of the main problems is that these diagrams – when used for software engineering – are supposed to model interactions, processes and dependencies of all kinds, from user-software interactions that are very hard to formalize, to collaboration diagrams of objects in an object-oriented system, where the very meaning of “collaboration” takes a special connotation for every line that is drawn. One big challenge in software engineering is to figure out what aspects of a complex system can be formalized and to what level, and what can not.

The far-reaching goal of UML was to enable software engineers to generate a large part of the code of a new software system from UML diagrams. There exist a lot of tools today that can generate DDL scripts (DDL = data definition language, which specifies the table layout of relational databases) from entity-relationship diagrams, and code from class diagrams, and vice versa. But usually these are only intermediate steps of the development, and a lot of work has to be done afterwards, because the UML models are usually much too coarse.

I would agree with Tim’s sentiment.

The problem with UML is that, first and foremost, it is a tool designed to make money for the software tool vendors. So the vendors make sure that it is easy to use and it allows the users of the tool to take shortcuts and be loose with the diagramming rules. (Hey, it’s only a diagram, right?) But, because it is so loose with obeying the rules, especially WRT the activity diagrams and sequence diagrams, the end result is a diagram that is ambiguous and in many cases useless.

For example, Activity Diagrams include Petri Nets as a diagramming capability, yet it does not include any of the formal semantics of Petri Nets. So the result is that the software designers enjoy playing with the tool, don’t have to think too hard, and the vendor sells more of the tools as the word spreads.

Yet, the software developers still have to debug their code. The upshot is that many people consider that UML does not buy a development team that much, but it has that marketing muscle and high adoption levels behind it.

The alternative is to use something much more formal, but no one wants to buy the stuff, because then they will actually need to hire some methodical and thorough designers. Since there are not enough of these people the tool does not get enough sales and disappears from the market.

That is why we have UML; it makes the most people happy.

I agree with both of you.

I also think that one of the limitations of UML seems to be that it is not natural to model systems governed by differential equations in that formalism, and therefore it’s hard to model or simulate a system that is not just software but includes nontrivial physical subsystems.

This is one of the reasons why both the Automotive and Aerospace industries have long adopted Model Based Design as a development standard.

This seems to work better also for generating the code, I think for the simple reason that it forces the designer to gradually refine things (while writing some code) early on in the process, in order to simulate the whole system.

Now, while general-purpose software tends to interface less with physical systems, it interfaces much more with humans, which is a problem because you don’t have a mathematical model of the user, so you can’t simulate how your software behaves before deploying it. So cycling through a lot of beta versions is basically your only answer in that case, though I still think that formalisms such as UML do help a little.

As to the use of computers over pen and paper… Yes, computers bring with them a laziness that leads to mistakes. But with many nonlinear and stochastic systems there is no other option whatsoever. Complex analysis is one thing. Complex systems are quite another beast, for which the tricks of traditional PDE theory are often wholly impotent.

It’s a field of inquiry with a long history starting perhaps with Leibniz: http://en.wikipedia.org/wiki/Characteristica_universalis

Perhaps one way to begin to analyze the problem could be to use a “pattern language” (see http://en.wikipedia.org/wiki/Pattern_language) to look at wholes and the parts which comprise them, and the subparts which make up the parts, and continue on down to the atomic level of parts which can no longer be subdivided.

Here’s some articles on this topic:

Anatomy of a Pattern Language

http://www.designmatrix.com/pl/anatomy.html

Pattern Language as applied to architecture and urban design:

http://www.patternlanguage.com/leveltwo/patternsframegreen.htm?/leveltwo/../apl/twopanelnlb.htm

Pattern Language as applied to software engineering:

parlab.eecs.berkeley.edu/wiki/_media/patterns/paraplop_g1_1.pdf

As applied to permaculture design:

http://www.holocene.net/dissertation.htm

As applied to forest garden design:

http://appleseedpermaculture.com/forest-gardening-vision-pattern-language/

Patterns => also known as “design rules” in engineering circles.

http://en.wikipedia.org/wiki/Design_rule_checking

As applied to musical improvisation:

and

and a platform on which to create a pattern language for music and other things:

http://en.flossmanuals.net/PureData/Introduction

as applied to design of ecosystems:

http://www.designmatrix.com/pl/ecopl/index.html

and

http://www.irl.ethz.ch/plus/people/agrtrega/wissen_2010

a pattern language for querying directed acyclic graphs with several distinctive and innovative characteristics:

arantxa.ii.uam.es/~ssantini/work/papers/recent/s613_jisbd_grafos.pdf

and another platform (mac only, so far):

http://impromptu.moso.com.au/

As applied to design of peer-to-peer overlay networks:

Thanks for all these links, streamfortyseven. Ever since my high school pal John Garrahan, who became an architect, gave me a copy of this book:

• Christopher Alexander, *A Pattern Language*.

I’ve been fond of this approach to architecture. My mother is a huge fan of Frank Lloyd Wright, and built a house in roughly that style. But I decided that old cities and towns almost anywhere in the world, which evolved organically following principles that Alexander tried to understand, are more beautiful and more functional than the ‘top-down’ approach of modern building design, even when this top-down approach is carried out in an inspired way, e.g. by Frank Lloyd Wright.

In future issues of *This Week’s Finds* I’ll interview Thomas Fischbacher about permaculture, and some of these ideas may come up. It’s tricky to relate any of this stuff to mathematics — it’s possible that the mathematics governing the veins of a leaf is also lurking in the canals of Venice:


You might want to ask Dr Fischbacher if he’s ever heard of Dave Jacke, who wrote Edible Forest Gardens (see http://www.youtube.com/watch?v=ybhN0ep_0eE) which uses pattern language in permaculture design (see http://regenerativedesigns.files.wordpress.com/2010/04/fg_pl_sheets.pdf and http://appleseedpermaculture.com/forest-gardening-vision-pattern-language/)

btw, if the canals of Venice follow pre-existing watercourses, they most probably have fractal geometric character.

Among other references regarding fractal characteristics of rivers:

Nestler, J. M. and Sutton, V.K. (2000) “Describing Scales of Features in River Channels Using Fractal Geometry Concepts,” Regulated Rivers: Research and Management 16: 1-22

That’s interesting, as I recently found a link between Alexander’s language and mine. It’s that what he calls “harmony-seeking computation” is what I call “organizational learning process“. He pays more attention to spatial design features of the negotiation between the system and the environment and I key more on the eventful succession of change. Both involve a whole system forming as a successful way of facilitating innovative communication between the parts…

http://synapse9.com/blog/2011/02/27/the-fit-with-alexander-%e2%80%93-and-clearer-escape-from-our-traps/

I think that is the case.

I mostly know about the ‘green’ maths that relates to phylogenetic analysis, and branching processes (trees not networks) though networks are increasingly being used too. Here’s a couple of articles whose titles at least seem likely to appeal to you.

• P D Jarvis, J D Bashford and J G Sumner, Path integral formulation and Feynman rules for phylogenetic branching models, http://iopscience.iop.org/0305-4470/38/44/002

• J. G. Sumner, B. H. Holland, P. D. Jarvis, The algebra of the general Markov model on phylogenetic trees and networks, http://arxiv.org/abs/1012.5165

The former article is also on the arxiv:

http://arxiv.org/abs/q-bio/0411047

Graham wrote:

Good. I hope so. It’s very hard, perhaps impossible, to invent a new idea from scratch. But mathematicians are good at taking vague glimmerings of new ideas and gradually making them more precise and more explicit.

Thanks for the references. It’s easy to drown under references in this game, so my thanks are not a covert way of asking other people to post *more* references—but thanks!

John,

Your article made me remember a beautiful ‘pattern’ I saw in Arthur Winfree’s interesting book, *When Time Breaks Down*. It describes the simulation of a 4D rotating scroll that is related to the electrochemical activity of heart muscles that drives the beating cycles of heart operation.

Speaking of analogies, and close similarities, I wonder if anyone can say something about Roger Penrose’s sketch of Robinson Congruence in his The Road to Reality, chapter 33, Fig 33.15.

Thanks,

Ali

I don’t know anyone in bioinformatics using sbgn per se, even technophile early adopters. Just about everyone uses Python flavors of sbml,

http://sbml.org/SBML_Software_Guide/SBML_Software_Summary

or other Python network software

http://en.wikipedia.org/wiki/Genenetwork

Since you yearn for math about structures like this:

you might like to check out this paper:

• Qinglan Xia, The formation of a tree leaf.

(especially the figures on pages 13 and 14); here is a related talk:

• Qinglan Xia, The formation of a tree leaf, 9 November 2005.

The math used here is related to some of the work for which Cedric Villani recently won a Fields medal.

Whoops, I tried to include a copy of the leaf image from your post, but it didn’t work, somewhat ruining my punchline.

I fixed that. Alas, WordPress has decided that I’m the only one able to post images on my blog. If anyone wants to post an image, just include the URL. If it’s pretty enough, and relevant enough, I can make it display on the blog.

Thanks for the leaf-related links. I am indeed curious about how plants create leaves (and many other things like this).

Look at Barnsley’s Iterated Function Systems, too: http://www.superfractals.com/

Thanks, streamfortyseven. This brings back memories…

I spent some time thinking about iterated function systems back in the early 1990s, shortly after Barnsley first popularized them. In case anyone is wondering, iterated function systems are a simple and elegant way to generate fractals, some of which look surprisingly plant-like. The math is summarized nicely in Thomas Colthurst’s comment below. The example everyone loves is called Barnsley’s fern:
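For anyone who wants to see the fern appear for themselves, here is the standard ‘chaos game’ for Barnsley’s fern in Python, using Barnsley’s published affine maps:

```python
import random

# The 'chaos game' for Barnsley's fern: pick one of four affine maps at
# random (with fixed probabilities) and apply it, over and over.
# Each row is (a, b, c, d, e, f, prob), encoding the map
# (x, y) -> (a*x + b*y + e, c*x + d*y + f).
MAPS = [
    (0.0,   0.0,   0.0,  0.16, 0.0, 0.0,  0.01),  # stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.6,  0.85),  # ever-smaller leaflets
    (0.2,  -0.26,  0.23, 0.22, 0.0, 1.6,  0.07),  # largest left leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # largest right leaflet
]

def fern_points(n, seed=0):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for a, b, c, d, e, f, prob in MAPS:
            acc += prob
            if r <= acc:
                break          # falls through to the last map if r is ~1
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

pts = fern_points(20000)
```

Plotting `pts` (say, with matplotlib) produces the familiar fern; the striking thing is that the whole image is encoded in just a couple of dozen numbers.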

My friend Nate Osgood, then a computer science grad student at MIT, became curious when he heard that Barnsley had started a company, Iterated Systems, which claimed to do image compression using iterated function systems. In 1992 this company got a $2.1 million government grant to work on this stuff, but the methods were secret—or at least we never heard anyone explain them. So, we tried to figure them out.

I wound up deciding that it wouldn’t really work very well. It’s impressive that the Barnsley fern can be encoded in just a few numbers using an iterated function system. But this is a very special image. If you take a typical image—like a human face, for example—I don’t think you can generate it efficiently using iterated function systems. And it’s even harder to imagine an automatic system where you hand it an image and it figures out how to compress it a lot using iterated function systems. I’d hoped that you could use some clever ideas involving harmonic analysis on the affine group, but after a while I decided you couldn’t.

The subsequent history of attempts to do image compression using iterated function systems makes me think I was right. Read the Wikipedia article. You’ll see things like:

So: that’s a summary of my old reasons for being less than thrilled with iterated function systems. My new reason is that I don’t believe Barnsley’s fern sheds a vast amount of light on

how a fern actually grows.It does shed a

littlebit of light. Briefly: if little bits of leaf keep growing littler bits following a simple pattern, you’ll get a fancy-looking leaf!But then comes the question of how the fern actually does this. And here it’s worth noting that not all leaves are easily drawn using iterated function systems.

But now I’m curious about the state of the art, so now I want to read that paper by Qinglan Xia.

This is great stuff !

Perhaps it is obvious, but this also brings up the question of the right level of abstraction at which to model a given natural system.

Of course the “right” level of abstraction is the highest one that is able to answer your questions.

But I am afraid that for some natural systems you have to go way down the abstraction level to answer even basic questions. The weather and climate are an example.

I think that the tree leaf allows a higher-level description because it has evolved to do something specific in its environment, so to speak. So you might be able to model it without having to simulate the single atoms.

Sorry, I am basically just thinking out loud here, but perhaps it can spark an interesting discussion, or perhaps John will come up with another great link :)

Giampiero wrote:

I don’t have anything exciting to say, but I think you’re right. It’s quite wonderful how evolution lets us apply the idea of final cause to biological systems. Aristotle had various concepts of cause, and one of them involved the notion of

telos, or goal. For example, if you ask “why is a spoon concave?” it makes sense for me to say “in order for it to hold soup”, because it was designed to accomplish that goal. The marvelous thing is that even without ‘intelligent design’, evolution leads to the proliferation of entities whose structure can be understood (to some extent) by acting as if they were designed to achieve some goal!So, for example: we can mathematically study a leaf, see what it would do to optimize something, and perhaps see that leaves in fact do this.

Physics, on the other hand, mainly deals with the concept of ‘efficient cause’, as in “the spoon is concave because you pushed down on it with a hard object”. Efficient causality is such a big deal in physics that this is about the only kind of cause most physicists discuss when speaking of ‘causality’.

There is, however, one famous exception, namely that a particle tends to move along the path from start to finish that minimizes the action. This smells like a case of ‘final cause’, and it bugged physicists for a long time. Maupertuis, who came up with the principle of least action, wrote:

and there we see how the concept of ‘final cause’ tends to get linked to the concept of ‘intelligent design’.

But anyway, all this is just historical-philosophical chit-chat about junk I’ve known for ages. Qinglan Xia’s paper on the formation of a tree leaf is the sort of thing I’m actually interested in now.

Thanks for the Xia link! Looks like the first convincing model of leaf growth. And it has ramifications to other (bio-)logistic systems. Thrilling. Green math indeed.

I very much look forward to what you have to say about categorification of Feynman diagrams, but I think of them as a shorthand that makes possible the extraction of global information from nonconvergent functional Taylor series, which places models into Universality classes. I wonder —although I admit this to be a prejudice— whether networks of all kinds are (only) essential waypoints towards understanding systems in qualitatively more global and continuous terms, which are, however, perhaps (only) a useful way to systematize discrete structures. There is always information that is not included in a finite network, though it is also doubtful to me —another prejudice— that any continuous structure would be more than a model.

I think you’ll recognize that what you are doing is “green mathematics” when it feels both holistic and mathematics. [I’ll be back to earth later today. You and Tim van Beek together sent me into orbit.]

I know the above is off the wall, but this afternoon I came across a comment on the website of the Courant Research Center in Göttingen that says something like it:

Today we live in a period during which very different areas of mathematics come closer together and exchange techniques and ideas. In this process, new problems occur, e.g. if the “flexible” world of topology and geometry is used in the “rigid” world of number theory. For the time being, we are still lacking a good understanding of the overarching structure which makes this efficiently possible.Peter wrote:

You can read about that here:

• John Baez and James Dolan, From finite sets to Feynman diagrams, in *Mathematics Unlimited – 2001 and Beyond*, vol. 1, edited by Björn Engquist and Wilfried Schmid, Springer, Berlin, 2001, pp. 29-50.

and in a much more gentle expository way in these course notes:

• Quantization and categorification seminar: Fall 2003, Winter 2004 and Spring 2004.

and then more formally, with new ideas added, in this paper by a student who took that course:

• Jeffrey Morton, Categorified algebra and quantum mechanics, *Theory and Applications of Categories* 16 (2006), 785-854.

This is some of the ‘old stuff I used to love’. But now Jacob and I are adapting it for applications to chemistry and population biology.

I’ll probably focus on the new aspects, and try to write about them in a way that doesn’t assume folks know the old stuff.

I think you’re right. I’m not sure “holistic” is quite the word for it, but there’s a kind of feeling, almost like a taste in my mouth, that I get sometimes when things seem to be going in the direction of “green mathematics”. It’s sort of weird. But a lot of science starts from intuitions like this, just as much as on logic or experiment. Only time will tell if it’s a good intuition or a delusion.

Heh. I’m glad you find it exciting!

After that beautiful picture of a leaf, I was really expecting something about the mathematics of fractals. (http://en.wikipedia.org/wiki/File:Fractal_fern_explained.png, for example).

But in the spirit of John’s post let me tell you how to get a graph from an iterated function system.

An iterated function system or IFS is just a compact set of contractive mappings $f_1, \dots, f_n$ on a complete metric space. Every IFS has a unique attractor $A$ satisfying

$$A = f_1(A) \cup \cdots \cup f_n(A).$$

Define the graph of an IFS by having a node for each mapping $f_i$ and an edge between $f_i$ and $f_j$ iff $f_i(A)$ and $f_j(A)$ intersect.

The fun thing is that the graph of an IFS tells you a lot about the attractor. For example, $A$ is connected iff the graph is. If the graph has no edges, then $A$ is Cantor dust (except in the one special case when all the mappings contract to a single point).
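The graph-of-an-IFS construction can be explored numerically. Below is a rough sketch of my own, for contractions on the real line, with an arbitrary tolerance `eps` standing in for genuine intersection: approximate the attractor by the chaos game, then join two maps whenever their pieces of the attractor come close together:

```python
import random

# A rough numerical sketch of the 'graph of an IFS' idea, for contractions
# on the real line. The attractor is approximated by the chaos game, and
# "the pieces intersect" is approximated by "the pieces come within eps
# of each other" -- eps is an arbitrary tolerance.

def chaos_game(maps, n=1000, seed=0):
    rng = random.Random(seed)
    x = 0.0
    points = []
    for k in range(n):
        x = rng.choice(maps)(x)   # apply a randomly chosen map
        if k > 100:               # discard the transient
            points.append(x)
    return points

def ifs_graph(maps, eps=0.05):
    attractor = chaos_game(maps)
    pieces = [[f(x) for x in attractor] for f in maps]  # f_i(A), approximately
    edges = set()
    for i in range(len(maps)):
        for j in range(i + 1, len(maps)):
            if min(abs(a - b) for a in pieces[i] for b in pieces[j]) < eps:
                edges.add((i, j))
    return edges

# Middle-thirds Cantor set: the two pieces stay 1/3 apart, so no edges.
cantor = [lambda x: x / 3, lambda x: x / 3 + 2 / 3]

# The unit interval, split as [0, 1/2] and [1/2, 1]: the pieces touch.
interval = [lambda x: x / 2, lambda x: x / 2 + 1 / 2]
```

With the two middle-thirds maps the graph has no edges and the attractor is Cantor dust; with the two half-scale maps the pieces touch at 1/2, the graph is connected, and so is the attractor, exactly as described above.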

Thanks for explaining the idea! Despite my skepticism about applications of iterated function systems to biology and image compression, the math is definitely neat.

chemistry, biology, ecology, statistics, and so on. There should be a mathematical theory underlying the use of networks in all these disciplines.

Algebraic geometry? “Graphical models” (Bayesian networks, hidden Markov models) are algebraic varieties!

Bernd Sturmfels studies combinatorial independence (previously he was into matroids) and seems to lead a programme of applying algebraic geometry of toric varieties to various problems in statistics, biostatistics (phylogenetics included) and machine learning.
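A tiny illustration of the slogan, in the simplest case I know of: the independence model for two binary random variables consists exactly of the 2×2 probability tables of rank one, i.e. the variety cut out by the determinant p00·p11 − p01·p10 (a Segre variety intersected with the probability simplex). A quick numerical check:

```python
# The independence model for two binary random variables, viewed as an
# algebraic variety: a 2x2 joint table p comes from independent X and Y
# iff the determinant p00*p11 - p01*p10 vanishes (rank-one tables).

def det2(p):
    return p[(0, 0)] * p[(1, 1)] - p[(0, 1)] * p[(1, 0)]

def product_table(px, py):
    """Joint table of independent binary X, Y with P(X=1)=px, P(Y=1)=py."""
    return {(i, j): (px if i else 1 - px) * (py if j else 1 - py)
            for i in (0, 1) for j in (0, 1)}

independent = product_table(0.3, 0.7)    # lies on the variety: det == 0
dependent = {(0, 0): 0.5, (0, 1): 0.0,   # X == Y for a fair coin: det != 0
             (1, 0): 0.0, (1, 1): 0.5}
```

Conditional independence statements in bigger graphical models give larger systems of such determinantal constraints, which is where the algebraic geometry comes in.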

Krzysztof wrote:

Algebraic geometry by itself isn’t particularly ‘green’. But it’s a tool that you gotta know if you want to do mathematics without a limp. So yes: we’ll probably be seeing it, along with many other bits of math we know and love, in ‘green mathematics’.

How does a Bayesian network give an algebraic variety?

Hmm! Thanks for telling me!

It’s sort of a side-issue in my current plan, but sometimes I’ve imagined talking about the relation between Petri nets and toric varieties. If I don’t ever get around to it, interested people will just have to read these:

• Craciun, Dickenstein, Shiu and Sturmfels, Toric dynamical systems.

• Leonard Adleman, Manoj Gopalkrishnan, Ming-Deh Huang, Pablo Moisset and Dustin Reishus, On the mathematics of the law of mass action.

I knew about these papers, but I didn’t know Sturmfels has a whole “programme” for applying toric varieties to different problems.

Sorry for I have no time for a proper response.

Like this:

• Luis David Garcia, Michael Stillman, Bernd Sturmfels, Algebraic Geometry of Bayesian Networks (+ accompanying thesis).

Well, I lied slightly about what, exactly, gives what ;)

There are books; electronic:

• Mathias Drton, Bernd Sturmfels and Seth Sullivant, Lectures on Algebraic Statistics

And dead tree, quite green nevertheless:

• Lior Pachter and Bernd Sturmfels, Algebraic Statistics for Computational Biology.

It is also helpful (almost assumed, in fact, besides the whole intimidating lot of assumed algebraic geometry) to know about uses of graphical models in statistics.

Presumably, there are authors better suited to our “networks” topic but I also wanted to point at the (unrelated) concept of “variational inference” that Michael I. Jordan pushes:

• Michael I. Jordan, Graphical Models (overview).

• Martin J. Wainwright and Michael I. Jordan, Graphical Models, Exponential Families, and Variational Inference (book, 300 pp.)

Thanks for all these references, Krzysztof — they look quite relevant! Someday I hope to explain some of my ideas about those “graphical models” you mention above. I hadn’t seen any connection between them and, say, toric varieties (which I see coming up in Petri net theory). I still don’t see the connection but it sounds like you’re hinting at one. I’ll have to read some of this stuff.

Also, information geometry admits a natural description within (real) algebraic geometry. There’s an (involved) book, *Algebraic Geometry and Statistical Learning Theory* by Sumio Watanabe, which also develops (not terribly practical at the moment) methods for graphical models from that viewpoint.

I have a friend who is doing his thesis on a tree. Specifically, it’s a green Mechanical Engineering Ph.D. about transport processes for a single tree in situ, with hundreds of sensors I helped install. Some of these are soil microtensiometers. Google tensiometry and transpirational pull (http://en.wikipedia.org/wiki/Transpirational_pull) if you’re looking for some weekend escapism.

John F wrote:

For a second I thought he was trying to outdo Julia Butterfly Hill!

Sounds interesting—thanks!

Right on! That is to say: abstracting. Topology is beautiful, and once again we see the name Euler pop up in its beginnings: first in his lovely polyhedron formula V − E + F = 2 for a convex polyhedron (more generally 2 − 2g for a surface of genus g), then again in attacking the Bridges of Königsberg (now Kaliningrad) problem, which I do believe was the start of network theory, yes?
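Both of Euler’s contributions mentioned here are easy to check by machine. A small sketch: the first block verifies V − E + F = 2 for the five Platonic solids, and the second redoes Euler’s Königsberg argument, namely that a connected multigraph has a walk crossing every edge exactly once only if at most two vertices have odd degree:

```python
# First: V - E + F = 2 for the five Platonic solids.
platonic = {  # name: (V, E, F)
    'tetrahedron':  (4, 6, 4),
    'cube':         (8, 12, 6),
    'octahedron':   (6, 12, 8),
    'dodecahedron': (20, 30, 12),
    'icosahedron':  (12, 30, 20),
}
euler_ok = all(v - e + f == 2 for v, e, f in platonic.values())

# Second: the seven Königsberg bridges between land masses A, B, C, D.
bridges = [('A', 'B'), ('A', 'B'), ('A', 'C'), ('A', 'C'),
           ('A', 'D'), ('B', 'D'), ('C', 'D')]
degree = {}
for u, v in bridges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# An Eulerian walk requires at most two odd-degree vertices;
# in Königsberg, all four land masses have odd degree.
odd_vertices = [v for v, d in degree.items() if d % 2 == 1]
has_eulerian_walk = len(odd_vertices) <= 2
```

So the desired stroll over all seven bridges is impossible, which is exactly Euler’s 1736 argument and arguably the first theorem of graph theory.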

In any event, just last week I learned that Network theory (a boyhood love of mine) had been renamed Graph Theory, which means I have to go back and study Fotini’s stuff!

The Rosetta Stone paper you and Mike Stay have written looks sweet as well, and I look forward to reading it and calling attention to it soon, so thanks to you and Mike.

Yes, we live in interdisciplinary times. In 1950, math and physics were never farther apart, and logic-based computer science was just getting started. 61 years later we see how interconnected they are, and props to you and all others making that point. Good luck, sirs. And ladies.

The origin of my strange approach to “green mathematics” was being unable to conceive of any kind of equation that could respond to its interactions with its environment, the way physical systems do.

So that’s why I invented my “other mathematics” which is more of a carefully constructed “pattern language”, to serve as an environment that equations could potentially relate to. As an “artificial environment” it describes boundary conditions within which various kinds of developmental progressions might take place.

So far there are only very limited features of that relationship that I can see how to automate, except for the approach of natural boundaries posing questions about how to change the equations. In that way it seems to naturally become an aid for people using a combination of equations and observations, as a way to learn about how defined and undefined systems are interacting.

Has anyone else found any other way to emulate the back and forth conversation between systems and natural environments, that I might find useful?

You could consider non-axiomatic foundations of mathematics. In such frameworks axioms (as a priori true statements) are replaced by relations. The truth of assertions can only be described in relation to the truth of other assertions. One can then reason about feedback loops and other circular systems in a systematic way. The price to pay is a higher vagueness of the results due to the multi-valuedness of the logic. As far as I know, there is no active research in this area. Maybe for a reason, since there are no relevant applications to traditional mathematical topics like number theory and one has to give up a lot (like the notion of proof becoming different).

While writing this I realize that this is a contender for John’s ‘green mathematics’ with a ‘radical shift of viewpoint’.

Uwe, yes: I inject fuzzy relationships at strategic points in the relation between models and their environments. That comes out of looking at how far from normal a system might be able to diverge. That serves to hypothesize a zone within which a model is self-contained, and beyond which interactions with other things not in the model arise at some point. It’s how I get questions about the system’s environment into the model: by anticipating the onset of abnormal behavior, rather than seeing uncertainty as only the limits of normal behavior.

It’s true that lots of people and a lot of talent went into exploring fuzzy logic and vague relationships for building logical networks. I don’t know where that ended up, really, but it doesn’t seem to have fulfilled its once-great promise. Failed experiments often do produce at least a few bits of learning that one can use with high confidence, though.

You might have a short list of those, but one seems to be that mathematics doesn’t work well with undefined variables, ones that essentially say “something will upset the model” or “go look at your environment”. Yet when studying open systems, that’s exactly what we’re trying to model.

When you look at it, how physical systems interact with their environments is not definable, for a host of interesting reasons. One is that environments can support many different kinds of systems, all of which are characterized by their internal consistency and mutual independence. So how they would interact is not definable within the logic of either. To me that was a thrilling discovery, a new way to look at the problem.

Kent Palmer is a sociologist-philosopher who has written extensively on the concepts of ‘emergence’ and complex systems, mainly based on general systems science and its extension to meta-systems. I have been studying his 2nd doctoral thesis, which talks about his investigations into the roots of Emergent Design. You might want to look him up. Check this link:

http://works.bepress.com/kent_palmer/

[…] So you cheered on Math4love for the Monthly Math Hours, followed the intriguing discussion of Network Theory and Green Mathematics at John Baez’s Azimuth, were amazed at the Math History Tour of Nottingham with your friendly […]

Not necessarily very green, but the two big commercial operations research and network analysis software systems are the unimaginatively named Netica

http://www.norsys.com/

and Analytica

http://www.lumina.com/

I had a thought here (and I know I’m swimming in deep waters), but perhaps your wish might be fulfilled if you thought about “greening” from the “O” part instead of the “I” part of I/O. In a sense, so long as your mathematical output results in something green, this would relieve the necessity of creating some sort of “green math”. Math is math. Let the results of your math be green and there you have it. Just a thought. Good article.

One of the catches, for either an input-side or output-side analysis, is that all equations operate in an open environment. Other business models, also constructed around their own I/O equations, will be “out there” to interact with. Neither party’s equations will contain information about their future interactions with the other. That’s the problem of open environments, and a daunting challenge for using “stand-alone math” to model parts independent of the whole.

The available solution isn’t perfect, of course, but a great many of the important interactions that develop over time can be identified from closely observing how the larger system behaves as a whole. That way mathematical models, unable to define their own environment, can still interact with their environment through the critical observations of the model maker.

For example, one might be concerned with the major new wealth transfer presently occurring from the physical-resource-using side of business to the largely non-resource-using side that deals mainly in financial information. Think of the enormous resource price escalation that began around 2003 as a wholesale transfer of working capital from one side of the economy to the other.

It seems to have natural causes: global demand is exceeding global supply for the whole network of formerly interchangeable food and fuel resources. The price jump appears to be caused by finance responding by setting prices according to scarcity rather than material cost. It also seems likely to be permanent, until something else reverses the trend of increasing resource demand.

The point is that this kind of change of environment, for “green” products, is bad for business. It would cause struggling business plans to fail first, perhaps costing the economy much of the present wave of creative start-ups that are trying to invent our way out of the natural limits crisis we’re confronting.

See how that works? It’s important for inventors to know how their environment is changing. There can be no stand-alone formula for how a formula’s own assumptions need to change.

Here is a page of graphs from 1950 to the present showing what seem like the key resource market factors in our present crunch. I assembled them from WorldWatch Vital Signs 2011.

For large networks:

The map equation

M. Rosvall, D. Axelsson, and C. T. Bergstrom

European Physical Journal Special Topics 178:13–23. Also arXiv:0906.1405v1 [physics.soc-ph]

“Networks are useful constructs to schematize the organization of interactions in social and biological systems. Networks are particularly valuable for characterizing interdependent interactions, where the interaction between components A and B influences the interaction between components B and C, and so on. For most such integrated systems, it is a flow of some entity – passengers traveling among airports, money transferred among banks, gossip exchanged among friends, signals transmitted in the brain – that connects a system’s components and generates their interdependence. Network structures constrain these flows. Therefore, understanding the behavior of integrated systems at the macro-level is not possible without comprehending the network structure with respect to the flow, the dynamics on the network.”

(http://octavia.zoology.washington.edu/publications/RosvallEtAl10.pdf)

Multilevel compression of random walks on networks reveals hierarchical organization in large integrated systems

M. Rosvall and C. T. Bergstrom

arXiv:1010.0431

“To comprehend the hierarchical organization of large integrated systems, we introduce the hierarchical map equation that reveals multilevel structures in networks. In this information-theoretic approach, we exploit the duality between compression and pattern detection; by compressing a description of a random walker as a proxy for real flow on a network, we find regularities in the network that induce this system-wide flow. Finding the shortest multilevel description of the random walker therefore gives us the best hierarchical clustering of the network — the optimal number of levels and modular partition at each level — with respect to the dynamics on the network”

(http://octavia.zoology.washington.edu/publications/working/RosvallAndBergstrom10.pdf)

More available in PDF at:

http://octavia.zoology.washington.edu/publications/publications.html
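As a toy illustration of the “random walker as a proxy for flow” idea in these papers, one can compute the walker’s stationary distribution on a small network by power iteration. The six-node, two-triangle graph below is invented for illustration; this is only the flow estimate the map equation builds on, not the map equation itself.

```python
# Stationary distribution of a random walk on a small network: a toy
# version of using a walker as a proxy for real flow on the network.

def stationary_distribution(adj, tol=1e-12, max_iter=100000):
    """Power-iterate p <- p P, where P row-normalizes the weights in adj."""
    nodes = sorted(adj)
    p = {v: 0.0 for v in nodes}
    p[nodes[0]] = 1.0                      # start the walker at one node
    for _ in range(max_iter):
        q = {v: 0.0 for v in nodes}
        for v in nodes:
            total = sum(adj[v].values())
            for w, weight in adj[v].items():
                q[w] += p[v] * weight / total
        if max(abs(q[v] - p[v]) for v in nodes) < tol:
            return q
        p = q
    return p

# Two triangles joined by a single edge: flow lingers inside each module,
# which is exactly the kind of regularity the map equation compresses.
adj = {
    0: {1: 1, 2: 1},
    1: {0: 1, 2: 1},
    2: {0: 1, 1: 1, 3: 1},
    3: {2: 1, 4: 1, 5: 1},
    4: {3: 1, 5: 1},
    5: {3: 1, 4: 1},
}
pi = stationary_distribution(adj)
# For an undirected graph the answer is proportional to (weighted) degree,
# so the two "bridge" nodes 2 and 3 carry the most flow: pi[2] = pi[3] = 3/14.
```

The interesting step, which the map equation adds on top of this, is compressing a description of the walk to detect the two modules.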

[…] Network Theory (Part 1) […]

[…] it’s here […]

And then there’s fuzzy logic! And other assorted paradigms from AI (agents, expert systems, cellular automata and so on).

Reverse engineering real cellular networks is a worthwhile pursuit also. Knowing the circuit diagram of a cell allows deep, “bottom up” comprehension / simulation of “everything in biology” :). Sound good?

So – if I present you with a time series of matrices of interactions between N components, can you evince the “circuit diagram”? Even if it contains positive and negative feedback loops? Even if they are nested to arbitrary depth? (This sounds like the kind of puzzle that “Dr. Ecco” (Dennis Shasha) would set.)
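For what it’s worth, under the strong (and biologically unrealistic) assumption that the dynamics are linear, x(t+1) = A x(t), the interaction matrix, feedback signs included, can be recovered from a time series by least squares. The 2×2 matrix A below is made up for illustration; nested nonlinear feedback loops are exactly where this sketch breaks down.

```python
# Recovering an interaction ("circuit") matrix from a time series,
# assuming linear dynamics x_{t+1} = A x_t.  A proof of concept only:
# the example matrix A is invented, and real cellular networks are
# nonlinear, so this is not a general solution to the puzzle.

def simulate(A, x0, steps):
    """Generate the trajectory x_0, x_1, ..., x_steps under x -> A x."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append([sum(A[i][j] * x[j] for j in range(len(x)))
                   for i in range(len(A))])
    return xs

def recover(xs):
    """Least-squares estimate A = H G^{-1} for 2-dimensional states,
    where G = sum_t x_t x_t^T and H = sum_t x_{t+1} x_t^T."""
    X, Y = xs[:-1], xs[1:]
    G = [[sum(x[i] * x[j] for x in X) for j in range(2)] for i in range(2)]
    H = [[sum(y[i] * x[j] for y, x in zip(Y, X)) for j in range(2)]
         for i in range(2)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det, G[0][0] / det]]
    return [[sum(H[i][k] * Ginv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.5, 0.2], [-0.3, 0.8]]          # the -0.3 entry is a negative feedback link
xs = simulate(A, [1.0, 0.5], 20)       # observed time series
A_hat = recover(xs)                    # recovered circuit diagram, signs and all
```

With noiseless linear data the recovery is essentially exact; with noise, nonlinearity, or deeply nested loops, one needs the heavier machinery the comment is asking about.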

Hi Again,

If I am not mistaken, by a green mathematics you mean a foundation for mathematics which can be seen as a “way of reasoning” that is also “green”. Category theory as a foundation is a subject in the philosophy of mathematics. The problem is, we are always presented with the theory of categories in Set. Thus, where is the foundation?

If you look at the diagrammatic calculus of Coecke and Penrose, and especially the paper by Joyal, you see that the reasoning is primarily the reasoning about directed graphs. But every category is a directed graph with algebraic data over the edges. There is a relationship (which I am not entirely clear on) between the axioms of these diagrams and the axioms of what is called “linear logic”. Thus, given your interest in diagrammatic reasoning, perhaps the “Green Mathematics” is simply the presentation of the theory of categories in a linear logic with all axioms supported in the rewrite rules of a diagrammatic calculus. As for why this might be green, I have a blog post about how autocatalytic reactions (prominent in the molecular theories of life) are traces in a symmetric monoidal category:

http://whyilovephysics.blogspot.com/

Thus, we probably want our “Green” math to be a foundation with traces. Like I’ve said, this is a presentation of the theory of categories in a linear logic.

I’m a big fan of John’s work and this blog is especially interesting.

Best.

I’m glad you like my stuff, including this blog.

This blog post is the first of a series, on network theory. You can see the whole series—or at least the part I’ve written so far—here. There’s a lot more yet to come. In the long run it will connect with the diagrammatic calculi used by Penrose, Abramsky–Coecke, Joyal–Street and others.

I was confused by this relationship for a long time. Right now my best understanding can be found in a paper with Mike Stay, in the second to last section. We explain the relation between string diagrams and multiplicative intuitionistic linear logic, or MILL. This is a simple, easily comprehensible fragment of linear logic.

Thanks, I’ll read that soon!

Computer scientists have long been interested in the relation between traces and feedback, and there’s a nice account of these ideas in Joyal and Street’s paper on traced monoidal categories. Since autocatalysis is a form of feedback, I guess you’re talking about something similar.

That’s a very ambitious goal. Usually the so-called ‘foundations’ of a subject are only established fairly late, when people have come to settled opinions about what they’re doing. So, instead of trying to figure out the foundations, I’m starting by working out the relationship between a bunch of existing approaches to complex systems made of interacting parts.

I am not sure if I am interested in a foundation or not. As a scientist, I want to have direct, intuitive access to the underlying reasoning of the mathematical structure which I will then use to describe my contact with nature. Furthermore, I want to see the most basic aspects of that structure well reflected in the visceral experience of observation. To have this, I’ve found I’ve had to touch on foundations simply because a theory, like the theory of categories, is always presented in some ambient reasoning structure. I want direct access, I suppose.

I think that categories are a great start to a green mathematics. For instance, I have a dream of modelling economic growth with technological growth built right in at the bottom. This kind of growth is “network diagram growth”. What I mean is the following. When we draw a diagram, we start with a single dot. Then we draw a line. Then we draw another line. If we slow this process down and do it in stages, we have three diagrams: a dot; a line with a dot; and a line, a dot, and a line. Thus, drawing a diagram is the process of diagram growth.

This growth is highly structured. It is similar to Sorkin’s causal growth dynamics, and to Panangaden and Martin’s domains as spacetimes (since the domain map is the evolving causal structure). It differs in that the basic thing that is growing is a graph, not a partial order.

I have attempted to give precise rules for this growth. This has led me to think about continuous functors, since the diagrams encode the axioms of particular categories.

In any case, “network diagram growth” is a technological growth, in that a small diagram can be seen as a small category: one with very little going on. It’s like a factory that does one thing, like grinding wheat into flour. Technological growth happens when the factory owner realizes he can mix flour and water and then bake bread. That produces a new diagram, and the old one maps into it in a structure-preserving way. In this light, technology is understood as “learning about your past”. The larger diagram can be seen as a context in which to interpret one’s past.

This is “green”, in that it is similar to how an organism might grow and change, as when a seed grows into a tree. It is also “green”, in that we now have a basic way to understand the value of economic growth. Namely, it allows us to understand our past. In that way, it is truly remarkable. Also, if we have good mathematical models of growth, perhaps we can have growth that is healthier for the planet. We can direct it at evolving the species, rather than just increasing the gross amount of resources we are consuming.

I should clarify what I mean by a healthier growth.

A factory has some material inputs and processes them, using process A, into some material outputs. The owner skims a profit off the top. If he increases the throughput, i.e. the number of times process A is enacted per day, then he grows his profit. This is one of the dimensions of growth.

The other dimension of growth is where the owner of the plant invents a new process, B, which takes some of the original inputs and produces new outputs. This new process has higher value than the last one (like the way new technologies fetch a higher price). This is growing value. He can take some of the material that was originally allotted to process A and use it to start enacting process B. Thus, the total amount of resource is the same, but because B fetches a higher price, his bottom line grows.

This is how we can have growth, without increases in gross consumption.
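In numbers (all of them invented for illustration), the two dimensions of growth described above look like this:

```python
# A toy comparison of the two growth dimensions: throughput growth
# versus value growth.  All quantities are made-up illustrative numbers.

inputs = 100            # units of material available per day
value_A = 2.0           # profit per unit through process A (say, flour)
value_B = 5.0           # profit per unit through process B (say, bread)

# Baseline: all material goes through process A.
profit_before = inputs * value_A                           # 200.0

# Value growth: same total material, but 40 units shift to process B.
to_B = 40
profit_after = (inputs - to_B) * value_A + to_B * value_B  # 120 + 200 = 320.0
```

Profit rises from 200 to 320 with no increase in material consumed, which is the point: the second dimension of growth raises value per unit of resource rather than the number of units.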

Great stuff, but you have skipped over social networks. We are using these to collect data on differing societal responses to climate change. The project is described at our website http://www.compon.org. It would be great to have some input from natural-science and mathematical network theorists.

[…] interesting programming puzzle to chew on: look over John Baez’s posts on Network Theory. Learn some stuff on Petri nets (see also Azimuth project wiki on Petri […]

Inspired by your blog, I started one a few days ago.

I don’t know if you knew of this before, but you might like this:

http://arjunjainblog.wordpress.com/2013/04/14/phyllotaxy-a-serendipitous-surprise/

I am citing your interest in green mathematics in a piece now in the process of completion:

Potential of Feynman Diagrams for Challenging Psychosocial Relationships? Comprehending the neglect of an unexplored possibility (http://www.laetusinpraesens.org/docs10s/feynman.php)

I’m actually a friend of the guy who made one of the diagrams you’ve used in your article; he’s doing his PhD at the moment, and we’ve had many long conversations on this topic, as he introduced me to the work of Howard Odum while I was studying other areas (around 15–20 years ago).

I’ve had an obsession dating back to original questions I began asking in my teens, which centres around the quantification of Ecological & Social Justice & Sustainability, using the principles of Ecological Systems Modelling & Thermodynamics, to create systems of non-species-biased, non-property/trade/currency-based, and non-hierarchical (aka anarchic) justice economics & politics … see >> http://www.open-empire.org for an overview.

I’m trying to find a way forward, but I’ve been working with a budget of $0 the whole time, along with other difficulties like spinal damage and homelessness, so progress has been slower than I’d have liked. I’m now working on a way to implement stages 0 and 1 of a 4-stage strategy, where the 4th stage is recursive, i.e. a new instance of stage 4 is instantiated for every new project undertaken within the framework set up during stages 0–3. This work also involves me attempting to start coding the core systems on my own, though I do have some friends who have volunteered to help; to use that help I’ve got to have more specific tasks for them to do.

I’m currently working on a modified blockchain that can handle the non-species-biased, non-property/trade/currency-based and non-hierarchical justice economics & politics, but there are a lot of other elements I have to build too, and I’m basically having to teach myself everything as I go.

Most economic fallacies derive from the tendency to assume that there is a fixed pie, that one party can gain only at the expense of another.

The only relevant test of the validity of a hypothesis is comparison of prediction with experience.

That’s an interesting hypothesis. But how do you plan to test that hypothesis, without assuming it’s true ahead of time?