Can We Fix The Air?

12 January, 2020

A slightly different version of this article first appeared in Nautilus on November 28, 2019.


Water rushes into Venice’s city council chamber just minutes after the local government rejects measures to combat climate change. Wildfires consume eastern Australia as fire danger soars past “severe” and “extreme” to “catastrophic” in parts of New South Wales. Ice levels in the Chukchi Sea, north of Alaska, hit record lows. England sees floods all across the country. And that’s just this week, as I write this.

Human-caused climate change, and the disasters it brings, are here. In fact, they’re just getting started. What will things be like in another decade, or century?

It depends on what we do. If our goal is to stop global warming, the best way is to cut carbon emissions now—to zero. The United Kingdom, Denmark, and Norway have passed laws requiring net zero emissions by 2050. Sweden is aiming at 2045. But the biggest emitters—China, the United States, and India—are dragging their heels. So to keep global warming below 2 degrees Celsius over pre-industrial levels by 2100, it’s becoming more and more likely that we’ll need negative carbon emissions:

That is, we’ll need to fix the air. We’ll need to suck more carbon dioxide out of the atmosphere than we put in.

This may seem like a laughably ambitious goal. Can we actually do it? Or is it just a fantasy? I want to give you a sense of what it would take. But first, here’s one reason this matters. Most people don’t realize that large negative carbon emissions are assumed in many of the more optimistic climate scenarios. Even some policymakers tasked with dealing with climate change don’t know this.

In 2016, climate scientists Kevin Anderson and Glen Peters published a paper on this topic, called “The trouble with negative emissions.” The title is a bit misleading, since they are not against negative emissions. They are against lulling ourselves into complacency by making plans that rely on negative emissions—because we don’t really know how to achieve them at the necessary scale. We could be caught in a serious bind, with the poorest among us taking the biggest hit.

So, how large do these negative carbon emissions need to be if we want to stay below 2 degrees Celsius of warming, and how are people hoping to achieve them? Let’s dive in!

In 2018, humans put about 37 billion tonnes of carbon dioxide into the air. A “tonne” is a metric ton, a bit larger than a US ton. Since the oxygen is not the problem—carbon dioxide consists of one atom of carbon and two of oxygen—it might make more sense to count tonnes of carbon. But it’s customary to keep track of carbon by its carbon dioxide equivalent, so I’ll do that here. The National Academy of Sciences says that to keep global warming below 2 degrees Celsius by the century’s end, we will probably need to be removing about 10 billion tonnes of carbon dioxide from the air each year by 2050, and double that by 2100. How could we do this?
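
For readers who want to check the bookkeeping, here is a small sketch in Python of the conversion between tonnes of carbon dioxide and tonnes of carbon, using the figures quoted above. The 44/12 ratio just comes from the molecular weights: 12 for carbon and 16 for each of the two oxygens.

# Converting between tonnes of CO2 and tonnes of carbon.
# Molecular weights: carbon 12, oxygen 16, so CO2 weighs 12 + 2*16 = 44.
CO2_PER_TONNE_OF_CARBON = 44.0 / 12.0

emissions_2018_co2 = 37e9   # tonnes of CO2 emitted in 2018
emissions_2018_carbon = emissions_2018_co2 / CO2_PER_TONNE_OF_CARBON
print(f"2018 emissions: about {emissions_2018_carbon / 1e9:.0f} billion tonnes of carbon")

removal_2050 = 10e9         # tonnes of CO2 to remove per year by 2050
removal_2100 = 2 * removal_2050
print(f"Removal needed: {removal_2050 / 1e9:.0f} Gt CO2/year by 2050, {removal_2100 / 1e9:.0f} by 2100")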

Whenever I talk about this, I get suggestions. Many ignore the sheer scale of the problem. For example, a company called Climeworks is building machines that suck carbon dioxide out of the air using a chemical process. They’re hoping to use these gadgets to make carbonated water for soft drinks—or create greenhouses that have lots of carbon dioxide in the air, for tastier vegetables. This sounds very exciting…until you learn that currently their method of getting carbon dioxide costs about $500 per ton. It’s much cheaper to make the stuff in other ways; beverage-grade carbon dioxide costs about a fifth as much. But even if they bring down the price and become competitive in their chosen markets, greenhouses and carbonation use only 6 million tonnes of carbon dioxide annually. This is puny compared to the amount we need to remove.
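
To get a feel for the mismatch in scale, here is a rough back-of-the-envelope comparison using the figures just quoted. Treat it as an order-of-magnitude illustration, not a forecast of prices or markets.

# Commercial CO2 markets versus the removal we need, using the numbers above.
market_co2 = 6e6        # tonnes/year used for greenhouses and carbonation
needed_co2 = 10e9       # tonnes/year we need to remove by 2050
print(f"The removal target is roughly {needed_co2 / market_co2:,.0f} times the market")

cost_per_tonne = 500.0  # dollars, the direct-air-capture cost quoted above
annual_bill = needed_co2 * cost_per_tonne
print(f"Removing 10 Gt/year at that price: about ${annual_bill / 1e12:.0f} trillion per year")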

Thus, the right way to think of Climeworks is as a tentative first step toward a technology that might someday be useful for fighting global warming—but only if it can be dramatically scaled up and made much cheaper. The idea of finding commercial uses for carbon dioxide as a stepping-stone, a way to start developing technologies and bringing prices down, is attractive. But it’s different from finding commercial uses that could make a serious dent in our carbon emissions problem.

Here’s another example: using carbon dioxide from the air to make plastics. There’s a company called RenewCO2 that wants to do this. But even ignoring the cost, it’s clear that such a scheme could remove 10 billion tonnes of carbon dioxide from the air each year only if we drastically ramped up our production of plastics. In 2018, we made about 360 million tonnes of plastic. So, we’d have to boost plastic production almost ten-fold. Furthermore, we’d have to make all this plastic without massively increasing our use of fossil fuels. And that’s a general issue with schemes to fix the air. If we could generate a huge abundance of power in a carbon-free way—say from nuclear, solar, or wind—we could use some of that power to remove carbon dioxide from the atmosphere. But for the short term, a better use of that power is to retire carbon-burning power plants. Thus, while we can dream about energy-intensive methods of fixing the air, they will only come into their own—if ever—later in the century.
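
Where does “almost ten-fold” come from? Plastic would store only the carbon, not the oxygen, so 10 billion tonnes of carbon dioxide corresponds to roughly 2.7 billion tonnes of carbon. Here is the rough arithmetic, assuming for illustration that plastics are about 80 percent carbon by mass (polyethylene is closer to 86 percent):

# Why removing 10 Gt of CO2 per year as plastic means roughly 10x more plastic.
co2_to_remove = 10e9                          # tonnes of CO2 per year
carbon_needed = co2_to_remove * 12.0 / 44.0   # ~2.7e9 tonnes of carbon

carbon_fraction_of_plastic = 0.8              # rough assumption about plastics by mass
plastic_needed = carbon_needed / carbon_fraction_of_plastic

current_production = 360e6                    # tonnes of plastic made in 2018
print(f"Plastic needed: about {plastic_needed / 1e9:.1f} billion tonnes per year")
print(f"Scale-up factor: about {plastic_needed / current_production:.0f}x current production")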

If plastics aren’t big enough to eat up 10 billion tonnes of carbon dioxide per year, what comes closer? Agriculture. I’m having trouble finding the latest data, but in 2004 the world created roughly 5 billion tonnes of “crop residue”: stems, leaves, and such left over from growing food. If we could dispose of most of this residue in a way that would sequester the carbon, that would count as serious progress. Indeed, environmental engineer Stuart Strand and physicist Gregory Benford—also a noted science fiction writer—have teamed up to study what would happen if we dumped bales of crop residue on the ocean floor. Even though this stuff would rot, it seems that the gases produced will take hundreds of years to resurface. And there’s plenty of room on the ocean floor.

Short of a massive operation to sink crop residues to the bottom of the sea, there are still many other ways to improve agriculture so that the soil accumulates more carbon. For example, tilling the land less reduces the rate at which organic matter decays and carbon goes back into the air. You can actually fertilize the land with half-burnt plant material full of carbon, called “biochar.” Planting crops with bigger roots, or switching from annual crops to perennials, also helps. These are just a few of the good ideas people have had. While agriculture and soil science are complex, and you probably don’t want to get into the weeds on this, the National Academy of Sciences estimates that we could draw down 3 billion tonnes of carbon dioxide per year from improved agriculture. That’s huge.

Having mentioned agriculture, it’s time to talk about forests. Everyone loves trees. However, it’s worth noting that a mature forest doesn’t keep on pulling down carbon at a substantial rate forever. Yes, carbon from the air goes to form wood and organic material in the soil. But decaying wood and organic material releases carbon back into the air. A climax forest is close to a steady state: the rate at which it removes carbon from the air is roughly equal to the rate at which it releases this carbon. So, the time when a forest pulls down the most carbon is when it’s first growing.

In July 2019, a paper in Science argued that the Earth has room for almost 4 million square miles of new forests. The authors claimed that as these new trees grow, they could pull down about 730 billion tonnes of carbon dioxide.

At first this sounds great. But remember, we are putting out 37 billion tonnes a year. So, the claim is that if we plant new forests over an area somewhat larger than the US, they will absorb the equivalent of roughly 20 years of carbon emissions. In short, this heroic endeavor would buy us time, but it wouldn’t be a permanent solution. Worse, many other authors have argued that the Science paper was overly optimistic. One rebuttal points out that it mistakenly assumed treeless areas have no organic carbon in the soil already. It also counted on a large increase of forests in regions that are now grassland or savanna. With such corrections made, it’s possible that new forests could pull down at most 150 billion tonnes of carbon dioxide.
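
Putting the optimistic and the corrected estimates side by side with the emissions figure above gives a quick sense of the timescales; this is just arithmetic on the numbers already quoted, not a claim about how fast forests actually grow.

# How many years of current emissions would new forests absorb, in total?
annual_emissions = 37e9     # tonnes of CO2 per year (2018)
optimistic_total = 730e9    # the Science paper's estimate, tonnes of CO2
corrected_total = 150e9     # rough upper bound after the rebuttals

print(f"Optimistic: about {optimistic_total / annual_emissions:.0f} years of emissions")
print(f"Corrected:  about {corrected_total / annual_emissions:.0f} years of emissions")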

That’s still a lot. But getting people to plant vast new forests will be hard. Working with more realistic assumptions, the National Academy of Sciences says that in the short term we could draw down 2.5 billion tonnes of carbon dioxide per year by planting new forests and better managing existing ones. In short: If we push really hard, better agriculture and forestry could pull 5.5 billion tonnes of carbon dioxide from the air each year. One great advantage of both these methods is that they harness the marvelous ability of plants to turn carbon dioxide into complex organic compounds in a solar-powered way—much better than any technology humans have devised so far. If we ever invent new technologies that do better, it’ll probably be because we’ve learned some tricks from our green friends.

And here’s another way plants can help: biofuels. If we burn fuels that come from plants, we’re taking carbon out of the atmosphere and putting it right back in: net zero carbon emissions, roughly speaking. That’s better than fossil fuels, where we dig carbon up from the ground and burn it. But it would be even better if we could burn plants as fuels but then capture the carbon dioxide, compress it, and pump it underground into depleted oil and gas fields, unmineable coal seams, and the like.

To do this, we probably shouldn’t cut down forests to clear space for crops that we burn. Turning corn into ethanol is also rather inefficient, though the corn lobby in the U.S. has persuaded the government to spend lots of money on this, and about 40 percent of all corn grown in the U.S. now gets used this way. Suppose we just took all available agricultural, forestry, and municipal waste, like lawn trimmings, food waste, and such, to facilities able to burn it and pump the carbon dioxide underground. All this stuff ultimately comes from plants sucking carbon from the air. So, how much carbon dioxide could we pull out of the atmosphere this way? The National Academy of Sciences says up to 5.2 billion tonnes per year.

Of course, we can’t do this and also sink all agricultural waste into the ocean—that’s just another way of dealing with the same stuff. Furthermore, this high-end figure would require immensely better organization than we’ve been able to achieve so far. And there are risks involved in pumping lots of carbon dioxide underground.

What other activities could draw down lots of carbon? It pays to look at the biggest human industries: biggest, that is, in terms of sheer mass being processed. For example, we make lots of cement. Global cement production in 2017 was about 4.5 billion tons, with China making more than the rest of the world combined, and a large uncertainty in how much they made. As far as I know, only digging up and burning carbon is bigger: for example, about 7.7 billion tons of coal are mined each year.

Right now cement is part of the problem: To make the most commonly used kind we heat limestone until it releases carbon dioxide and becomes “quicklime.” Only about 7 percent of the total carbon we emit worldwide comes from this process—but that is still more than the entire aviation industry emits. Some scientists have invented cement that absorbs carbon dioxide as it dries. It has not yet caught on commercially, but the pressure on the industry is increasing. If we could somehow replace cement with a substance made mostly of carbon pulled from the atmosphere, and do it in an economically viable way, that would be huge. But this takes us into the realm of technologies that haven’t been invented yet.
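
Taking the article’s figures at face value, here is roughly what that 7 percent amounts to. It is an illustration built from the numbers above, not a precise emissions inventory.

# Rough size of the process emissions from making quicklime for cement.
total_emissions = 37e9      # tonnes of CO2 per year
cement_fraction = 0.07      # share of emissions from heating limestone
cement_process_co2 = total_emissions * cement_fraction
print(f"Cement process emissions: about {cement_process_co2 / 1e9:.1f} billion tonnes of CO2 per year")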

New technologies may in fact hold the key to the problem. In the second half of the century we should be doing things that we can’t even dream of yet. In the next century, even more so. But it takes time to perfect and scale up new technologies. So it makes sense to barrel ahead with what we can do now, then shift gears as other methods become practical. Merely waiting and hoping is not wise.

Totaling up some of the options I’ve listed, we could draw down 1 billion tonnes of carbon dioxide by planting trees, 1.5 billion by better forest management, 3 billion by better agricultural practices, and up to 5.2 billion by biofuels with carbon capture. This adds up to over 10 billion tonnes per year. It’s not nearly enough to cancel the 37 billion tonnes we’re dumping into the air each year now. But combined with strenuous efforts to cut emissions, we might squeak by, and keep global warming below 2 degrees Celsius.
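
Here is the tally, using the same numbers as in the text, set against current emissions.

# Adding up the drawdown options discussed above, in tonnes of CO2 per year.
drawdown = {
    "planting new forests": 1.0e9,
    "better forest management": 1.5e9,
    "better agricultural practices": 3.0e9,
    "biofuels with carbon capture": 5.2e9,
}
total_drawdown = sum(drawdown.values())
current_emissions = 37e9

print(f"Total drawdown: {total_drawdown / 1e9:.1f} billion tonnes per year")
print(f"Current emissions: {current_emissions / 1e9:.0f} billion tonnes per year")
print(f"Gap to close by cutting emissions: {(current_emissions - total_drawdown) / 1e9:.1f} billion tonnes per year")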

Even if we try, we are far from guaranteed to succeed—Anderson and Peters are right to warn about this. But will we even try? This is more a matter of politics and economics than of science and technology. The engineer Saul Griffith said that dealing with global warming is not like the Manhattan Project—it’s like the whole of World War II but with everyone on the same side. He was half right: We are not all on the same side. Not yet, anyway. Getting leaders who are inspired by these huge challenges, rather than burying their heads in the sand, would be a big step in the right direction.


UN Climate Action Summit

4 September, 2019

Christian Williams

Hello, I’m Christian Williams. I study category theory with John Baez at UC Riverside. I’ve written two posts on Azimuth about promising distributed computing endeavors. I believe in the power of applied theory – that’s why I left my life in Texas just to work with John. But lately I’ve begun to wonder if these great ideas will help the world quickly enough.

I want to discuss the big picture, and John has kindly granted me this platform with such a diverse, intelligent, and caring audience. This will be a learning process. All thoughts are welcome. Thanks for reading.

(Greta Thunberg, coming to help us wake up.)

…..
I am the master of my fate,
      I am the captain of my soul.

It’s important to be positive. Humanity now has a global organization called the United Nations. Just a few years ago, members signed an amazing treaty called The Paris Agreement. The parties and signatories:

… basically everyone.

By ratifying this document, the nations of the world agreed to act to keep global warming below 2C above pre-industrial levels – an unparalleled environmental consensus. (On Azimuth, in 2015.) It’s not mandatory, and to me that’s not the point. Together we formally recognize the crisis and express the intent to turn it around.

Except… we really don’t have much time.

We are consistently finding that the ecological crisis is of a greater magnitude and urgency than we thought. The report that finally slapped me awake is the IPCC’s 2018 special report, which explains the difference between 2C and 1.5C in terms of total devastation and lives, and states definitively:

We must reduce global carbon emissions by 45% by 2030, and by 100% by 2050 to keep within 1.5C. We must have strong negative emissions into the next century. We must go well beyond our agreement, now.

(Blue is essentially, “we might still have a stable society”.)
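
As a rough illustration of what a 45 percent cut by 2030 means year over year, here is the implied compound rate of decline. For simplicity this sketch takes 2019 as the starting point; the IPCC’s stated baseline year is 2010, so treat the exact figure as illustrative.

# Implied compound annual decline to cut emissions 45% by 2030,
# assuming for illustration that we start from 2019 levels.
years = 2030 - 2019
remaining_fraction = 1.0 - 0.45
annual_decline = 1.0 - remaining_fraction ** (1.0 / years)
print(f"Required decline: about {annual_decline:.1%} per year, every year, starting now")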

So… how is our progress on the agreement? That is complicated, and a whole analysis is yet to be done. Here is the UN progress tracker. Here is an NRDC summary. Some countries are taking significant action, but most are not yet doing enough. Let that sink in.

However, the picture goes much deeper than the national level. Reform is sparking at all levels of society: a US politician’s push to leave the agreement emboldened us to form the vast coalition We Are Still In. There are many initiatives like this, hundreds of millions of people rising to the challenge. A small selection:

City and State Levels
Mayors National Climate Action Agenda, U.S. Climate Alliance
Covenant of Mayors for Climate & Energy
International Levels
Reducing emissions from deforestation and forest degradation (REDD)

RE100, Under2 Coalition (The Climate Group)
Everyone Levels
Fridays for Future, Sunrise Movement, Extinction Rebellion
350.org, Climate Reality

Each of us must face this challenge in our own way.

…..

Responding to the findings of the IPCC, the UN is meeting in New York on September 23, with even higher ambitions and higher stakes: UN Climate Action Summit 2019. The leaders will not sit around and give pep talks. They are developing plans which will describe how to transform society.

On the national level, governments must make concrete, compulsory commitments. If they do not do so soon, then we must demand louder, or take their place. The same week as the summit, there will be a global climate strike. It is crucial that all generations join the youth in these demonstrations.

We must change how the world works. We have reached global awareness, and we have reached an ethical imperative.

Please listen to an inspiring activist share her lucid thoughts.

Breakthrough Institute on Climate Change

10 March, 2019

I found this article, apparently by Ted Nordhaus and Alex Trembath, to be quite thought-provoking. At times it sinks too deep into the moment’s politics for my taste, given that the issues it raises will probably be confronting us for the whole 21st century. But still, it raises big issues:

• Breakthrough Institute, Is climate change like diabetes or an asteroid?

The Breakthrough Institute seeks “technological solutions to environmental challenges”, so that informs their opinions. Let me quote some bits and urge you to read the whole thing! Even if it annoys you, it should make you think a bit.

Is climate change more like an asteroid or diabetes? Last month, one of us argued at Slate that climate advocates should resist calls to declare a national climate emergency because climate change was more like “diabetes for the planet” than an asteroid. The diabetes metaphor was surprisingly controversial. Climate change can’t be managed or lived with, many argued in response; it is an existential threat to human societies that demands an immediate cure.

The objection is telling, both in the ways in which it misunderstands the nature of the problem and in the contradictions it reveals. Diabetes is not benign. It is not a “natural” phenomena and it can’t be cured. It is a condition that, if unmanaged, can kill you. And even for those who manage it well, life is different than before diabetes.

This seems to us to be a reasonably apt description of the climate problem. There is no going back to the world before climate change. Whatever success we have mitigating climate change, we almost certainly won’t return to pre-industrial atmospheric concentrations of greenhouse gases, at least not for many centuries. Even at one or 1.5 degrees Celsius of warming, the climate and the planet will look very different, and that will bring unavoidable consequences for human societies. We will live on a hotter planet and in a climate that will be more variable and less predictable.

How bad our planetary diabetes gets will depend on how much we continue to emit and how well adapted to a changing climate human societies become. With the present one degree of warming, it appears that human societies have adapted relatively well. Various claims attributing present day natural disasters to climate change are controversial. But the overall statistics suggest that deaths due to climate-related natural disasters globally are falling, not rising, and that economic losses associated with those disasters, adjusting for growing population and affluence, have been flat for many decades.

But at three or four degrees of warming, all bets are off. And it appears that unmanaged, that’s where present trends in emissions are likely to take us. Moreover, even with radical action, stabilizing emissions at 1.5 degrees C, as many advocates now demand, is not possible without either solar geoengineering or sucking carbon emissions out of the atmosphere at massive scale. Practically, given legacy emissions and committed infrastructure, the long-standing international target of limiting temperature increase to two degrees C is also extremely unlikely.

Unavoidably, then, treating our climate change condition will require not simply emissions reductions but also significant adaptation to known and unknown climate risks that are already baked in to our future due to two centuries of fossil fuel consumption. It is in this sense that we have long argued that climate change must be understood as a chronic condition of global modernity, a problem that will be managed but not solved.

A discussion of the worst-case versus the best-case IPCC scenarios, and what leads to these scenarios:

The worst case climate scenarios, which are based on worst case emissions scenarios, are the source of most of the terrifying studies of potential future climate impacts. These are frequently described as “business as usual” — what happens if the economy keeps growing and the global population becomes wealthier and hence more consumptive. But that’s not how the IPCC, which generates those scenarios, actually gets to very high emissions futures. Rather, the worst case scenarios are those in which the world remains poor, populous, unequal, and low-tech. It is a future with lots of poor people who don’t have access to clean technology. By contrast, a future in which the world is resilient to a hotter climate is likely also one in which the world has been more successful at mitigating climate change as well. A wealthier world will be a higher-tech world, one with many more low carbon technological options and more resources to invest in both mitigation and adaptation. It will be less populous (fertility rates reliably fall as incomes rise), less unequal (because many fewer people will live in extreme poverty), and more urbanized (meaning many more people living in cities with hard infrastructure, air conditioning, and emergency services to protect them).

That will almost certainly be a world in which global average temperatures have exceeded two degrees above pre-industrial levels. The latest round of climate deadline-ism (12 years to prevent climate catastrophe according to The Guardian) won’t change that. But as even David Wallace Wells, whose book The Uninhabitable Earth has helped revitalize climate catastrophism, acknowledges, “Two degrees would be terrible but it’s better than three… And three degrees is much better than four.”

Given the current emissions trajectory, a future world that stabilized emissions below 2.5 or three degrees, an accomplishment that in itself will likely require very substantial and sustained efforts to reduce emissions, would also likely be one reasonably well adapted to live in that climate, as it would, of necessity, be one that was much wealthier, less unequal, and more advanced technologically than the world we live in today.

The most controversial part of the article concerns the “apocalyptic” or “millenarian” tendency among environmentalists: the feeling that only a complete reorganization of society will save us—for example, going “back to nature”.

[…] while the nature of the climate problem is chronic and the political and policy responses are incremental, the culture and ideology of contemporary environmentalism is millenarian. In the millenarian mind, there are only two choices, catastrophe or completely reorganizing society. Americans will either see the writing on the wall and remake the world, or perish in fiery apocalypse.

This, ultimately, is why adaptation, nuclear energy, carbon capture, and solar geoengineering have no role in the environmental narrative of apocalypse and salvation, even as all but the last are almost certainly necessary for any successful response to climate change and will also end up in any major federal policy effort to address climate change. Because they are basically plug-and-play with the existing socio-technical paradigm. They don’t require that we end capitalism or consumerism or energy intensive lifestyles. Modern, industrial, techno-society goes on, just without the emissions. This is also why efforts by nuclear, carbon capture, and geoengineering advocates to marshall catastrophic framing to build support for those approaches have had limited effect.

The problem for the climate movement is that the technocratic requirements necessary to massively decarbonize the global economy conflict with the egalitarian catastrophism that the movement’s mobilization strategies demand. McKibben has privately acknowledged as much to several people, explaining that he hasn’t publicly recognized the need for nuclear energy because he believes doing so would “split this movement in half.”

Implicit in these sorts of political calculations is the assumption that once advocates have amassed sufficient political power, the necessary concessions to the practical exigencies of deeply reducing carbon emissions will then become possible. But the army you raise ultimately shapes the sorts of battles you are able to wage, and it is not clear that the army of egalitarian millenarians that the climate movement is mobilizing will be willing to sign on to the necessary compromises — politically, economically, and technologically — that would be necessary to actually address the problem.

Again: read the whole thing!


Exploring New Technologies

13 February, 2019

I’ve got some good news! I’ve been hired by Bryan Johnson to help evaluate and explain the potential of various technologies to address the problem of climate change.

Johnson is an entrepreneur who sold his company Braintree for $800M and started the OS Fund in 2014, seeding it with $100M to invest in the hard sciences so that we can move closer towards becoming proficient system administrators of our planet: engineering atoms, molecules, organisms and complex systems. The fund has invested in many companies working on synthetic biology, genetics, new materials, and so on. Here are some writeups he’s done on these companies.

As part of my research I’ll be blogging about some new technologies, asking questions and hoping experts can help me out. Stay tuned!




Stratospheric Controlled Perturbation Experiment

28 November, 2018

I have predicted for a while that as the issue of climate change becomes ever more urgent, the public attitude regarding geoengineering will at some point undergo a phase transition. For a long time it seems the general attitude has been that deliberately interfering with the Earth’s climate on a large scale is “unthinkable”: beyond the pale. I predict that at some point this will flip and the general attitude will become: “how soon can we do it?”

The danger then is that we rush headlong into something untested that we’ll regret.

For a while I’ve been advocating research in geoengineering, to prevent a big mistake like this. Those who consider it “unthinkable” often object to such research, but I think preventing research is not a good long-term policy. I think it actually makes it more likely that at some point, when enough people become really desperate about climate change, we will do something rash without enough information about the possible effects.

Anyway, one can argue about this all day: I can see the arguments for both sides. But here is some news: scientists will soon study how calcium carbonate disperses when you dump a little into the atmosphere:

First sun-dimming experiment will test a way to cool Earth, Nature, 27 November 2018.

It’s a good article—read it! Here’s the key idea:

If all goes as planned, the Harvard team will be the first in the world to move solar geoengineering out of the lab and into the stratosphere, with a project called the Stratospheric Controlled Perturbation Experiment (SCoPEx). The first phase — a US$3-million test involving two flights of a steerable balloon 20 kilometres above the southwest United States — could launch as early as the first half of 2019. Once in place, the experiment would release small plumes of calcium carbonate, each of around 100 grams, roughly equivalent to the amount found in an average bottle of off-the-shelf antacid. The balloon would then turn around to observe how the particles disperse.

The test itself is extremely modest. Dai, whose doctoral work over the past four years has involved building a tabletop device to simulate and measure chemical reactions in the stratosphere in advance of the experiment, does not stress about concerns over such research. “I’m studying a chemical substance,” she says. “It’s not like it’s a nuclear bomb.”

Nevertheless, the experiment will be the first to fly under the banner of solar geoengineering. And so it is under intense scrutiny, including from some environmental groups, who say such efforts are a dangerous distraction from addressing the only permanent solution to climate change: reducing greenhouse-gas emissions. The scientific outcome of SCoPEx doesn’t really matter, says Jim Thomas, co-executive director of the ETC Group, an environmental advocacy organization in Val-David, near Montreal, Canada, that opposes geoengineering: “This is as much an experiment in changing social norms and crossing a line as it is a science experiment.”

Aware of this attention, the team is moving slowly and is working to set up clear oversight for the experiment, in the form of an external advisory committee to review the project. Some say that such a framework, which could pave the way for future experiments, is even more important than the results of this one test. “SCoPEx is the first out of the gate, and it is triggering an important conversation about what independent guidance, advice and oversight should look like,” says Peter Frumhoff, chief climate scientist at the Union of Concerned Scientists in Cambridge, Massachusetts, and a member of an independent panel that has been charged with selecting the head of the advisory committee. “Getting it done right is far more important than getting it done quickly.”

For more on SCoPEx, including a FAQ, go here:

Stratospheric Controlled Perturbation Experiment (SCoPEx), Keutsch Group, Harvard.


Statebox: A Universal Language of Distributed Systems

22 January, 2018

guest post by Christian Williams

A short time ago, on the Croatian island of Zlarin, there gathered a band of bold individuals—rebels of academia and industry, whose everyday thoughts and actions challenge the separations of the modern world. They journeyed from all over to learn of the grand endeavor of another open mind, an expert functional programmer and creative hacktivist with significant mathematical knowledge: Jelle |yell-uh| Herold.

The Dutch computer scientist has devoted his life to helping our species and our planet: from consulting in business process optimization to winning a Greenpeace hackathon, from updating Netherlands telecommunications to creating a website to determine ways for individuals to help heal the earth, Jelle has gained a comprehensive perspective of the interconnected era. Through a diverse and innovative career, he has garnered crucial insights into software design and network computation—most profoundly, he has realized that it is imperative that these immense forces of global change develop thoughtful, comprehensive systematization.

Jelle understood that initiating such a grand ambition requires a massive amount of work, and the cooperation of many individuals, fluent in different fields of mathematics and computer science. Enter the Zlarin meeting: after a decade of consideration, Jelle has now brought together proponents of categories, open games, dependent types, Petri nets, string diagrams, and blockchains toward a singular end: a universal language of distributed systems—Statebox.

Statebox is a programming language formed and guided by fundamental concepts and principles of theoretical mathematics and computer science. The aim is to develop the canonical process language for distributed systems, and thereby elucidate the way these should actually be designed. The idea invokes the deep connections of these subjects in a novel and essential way, to make code simple, transparent, and concrete. Category theory is both the heart and pulse of this endeavor; more than a theory, it is a way of thinking universally. We hope the project helps to demonstrate the importance of this perspective, and encourages others to join.

The language is designed to be self-optimizing, open, adaptive, terminating, error-cognizant, composable, and most distinctively—visual. Petri nets are the natural representation of decentralized computation and concurrency. By utilizing them as program models, the entire language is diagrammatic, and this allows one to inspect the flow of the process executed by the program. While most languages only compile into illegible machine code, Statebox compiles directly into diagrams, so that the user immediately sees and understands the concrete realization of the abstract design. We believe that this immanent connection between the “geometric” and “algebraic” aspects of computation is of great importance.

Compositionality is a rightfully popular contemporary term, indicating the preservation of type under composition of systems or processes. This is essential to the universality of the type, and it is intrinsic to categories, which underpin the Petri net. A pertinent example is that composition allows for a form of abstraction in which programs do not require complete specification. This is parametricity: a program becomes executable when the functions are substituted with valid terms. Every term has a type, and one cannot connect pieces of code that have incompatible inputs and outputs—the compiler would simply produce an error. The intent is to preserve a simple mathematical structure that imposes as little as possible, and still ensure rationality of code. We can then more easily and reliably write tools providing automatic proofs of termination and type-correctness. Many more aspects will be explained as we go along, and in more detail in future posts.
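
As a toy illustration of this point, here is a type-checked serial composition written in Python with type hints. It is emphatically not Statebox code (Statebox uses Idris and Petri nets); it is only a minimal picture of “pieces with incompatible inputs and outputs refuse to connect.”

from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    # Serial composition: the output type of f must match the input type of g.
    return lambda x: g(f(x))

def parse_amount(s: str) -> float:
    return float(s)

def apply_fee(x: float) -> float:
    return x * 0.99

process = compose(parse_amount, apply_fee)   # str -> float, then float -> float
print(process("100.0"))                      # 99.0

# compose(apply_fee, parse_amount) is rejected by a type checker such as mypy,
# since float -> float cannot feed into str -> float.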

Statebox is more than a specific implementation. It is an evolving aspiration, expressing an ideal, a source of inspiration, signifying a movement. We fully recognize that we are at the dawn of a new era, and do not assume that the current presentation is the best way to fulfill this ideal—but it is vital that this kind of endeavor gains the hearts and minds of these communities. By learning to develop and design by pure theory, we make a crucial step toward universal systems and knowledge. Formalisms are biased, fragile, transient—thought is eternal.

Thank you for reading, and thank you to John Baez—|bi-ez|, some there were not aware—for allowing me to write this post. Azimuth and its readers represent what scientific progress can and should be; it is an honor to speak to you. My name is Christian Williams, and I have just begun my doctoral studies with Dr. Baez. He received the invitation from Jelle and could not attend, and was generous enough to let me substitute. Disclaimer: I am just a young student with big dreams, with insufficient knowledge to do justice to this huge topic. If you can forgive some innocent confidence and enthusiasm, I would like to paint a big picture, to explain why this project is important. I hope to delve deeper into the subject in future posts, and in general to celebrate and encourage the cognitive revolution of Applied Category Theory. (Thank you also to Anton and Fabrizio for providing some of this writing when I was not well; I really appreciate it.)

Statebox Summit, Zlarin 2017, was awesome. Wish you could’ve been there. Just a short swim in the Adriatic from the old city of Šibenik |shib-enic|, there lies the small, green island of Zlarin |zlah-rin|, with just a few hundred kind inhabitants. Jelle’s friend, and part of the Statebox team, Anton Livaja and his family graciously allowed us to stay in their houses. Our headquarters was a hotel, one of the few places open in the fall. We set up in the back dining room for talks and work, and for food and sunlight we came to the patio and were brought platters of wonderful, wholesome Croatian dishes. As we burned the midnight oil, we enjoyed local beer, and already made history—the first Bitcoin transaction of the island, with a progressive bartender, Vinko.

Zlarin is a lovely place, but we haven’t gotten to the best part—the people. All who attended are brilliant, creative, and spirited. Everyone’s eyes had a unique spark to light. I don’t think I’ve ever met such a fascinating group in my life. The crew: Jelle, Anton, Emi Gheorghe, Fabrizio Genovese, Daniel van Dijk, Neil Ghani, Viktor Winschel, Philipp Zahn, Pawel Sobocinski, Jules Hedges, Andrew Polonsky, Robin Piedeleu, Alex Norta, Anthony di Franco, Florian Glatz, Fredrik Nordvall Forsberg. These innovators have provocative and complementary ideas in category theory, computer science, open game theory, functional programming, and the blockchain industry; and they came to share an important goal. These are people who work earnestly to better humanity, motivated by progress, not profit. Talking with them gave me hope, that there are enough intelligent, open-minded, and caring people to fix this mess of modern society. In our short time together, we connected—now, almost all continue to contribute and grow the endeavor.

Why is society a mess? The present human condition is absurd. We are in a cognitive renaissance, yet our world is in peril. We need to realize a deeper harmony of theory and practice—we need ideas that dare to dream big, that draw on the vast wealth of contemporary thought to guide and unite subjects in one mission. The way of the world is only a reflection of how we choose to think, and for more than a century we have delved endlessly into thought itself. If we truly learn from our thought, knowledge and application become imminently interrelated, not increasingly separate. It is imperative that we abandon preconception, pretense and prejudice, and ask with naive sincerity: “How should things be, really, and how can we make it happen?”

This pertains more generally to the irresponsibly ad hoc nature of society—we find ourselves entrenched in inadequate systems. Food, energy, medicine, finance, communications, media, governance, technology—our deepening dependence on centralization is our greatest vulnerability. Programming practice is the perfect example of the gradual failure of systems when their design is left to wander in abstraction. As business requirements evolved, technological solutions were created haphazardly, the priority being immediate return over comprehensive methodology, which resulted in ‘duct-taped’ systems, such as the Windows OS. Our entire world now depends on unsystematic software, giving rise to so much costly disorganization, miscommunication, and worse, bureaucracy. Statebox aims to close the gap between the misguided formalisms which came out of this type of degeneration, and design a language which corresponds naturally to essential mathematical concepts—to create systems which are rational, principled, universal. To explain why Statebox represents to us such an important ideal, we must first consider its closest relative, the elephant in the technological room: blockchain.

Often the best ideas are remarkably simple—in 2008, an unknown person under the alias Satoshi Nakamoto published the whitepaper Bitcoin: A Peer-to-Peer Electronic Cash System. In just a few pages, a protocol was proposed which underpins a new kind of computational network, called a blockchain, in which interactions are immediate, transparent, and permanent. This is a personal interpretation—the paper focuses on the application given in its title. In the original financial context, immediacy is one’s ability to directly transact with anyone, without intermediaries, such as banks; transparency is one’s right to complete knowledge of the economy in which one participates, meaning that each node owns a copy of the full history of the network; permanence is the irrevocability of one’s transactions. These core aspects are made possible by an elegant use of cryptography and game theory, which essentially removes the need for trusted third parties in the authorization, verification, and documentation of transactions. Per word, it’s almost peerless in modern influence; the short and sweet read is recommended.
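
As a toy sketch of the “permanence” idea, here is a minimal hash chain in Python. It illustrates only the chaining principle, with none of Bitcoin’s proof of work, signatures, or networking: each block commits to the hash of its predecessor, so editing any past record invalidates every later hash.

import hashlib, json

def block_hash(block: dict) -> str:
    # Hash a block's contents, including the hash of its predecessor.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64   # placeholder hash for the genesis block
for tx in ["Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dan 1"]:
    block = {"tx": tx, "prev": prev}
    prev = block_hash(block)
    chain.append((block, prev))

# Tampering with an early transaction breaks every later link,
# so the rest of the network can detect the edit.
chain[0][0]["tx"] = "Alice pays Bob 500"
print(block_hash(chain[0][0]) == chain[0][1])   # False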

The point of this simplistic explanation is that blockchain is about more than economics. The transaction could be any cooperation, the value could be any social good—when seen as a source of consensus, the blockchain protocol can be expanded to assimilate any data and code. After several years of competing cryptocurrencies, the importance of this deeper idea was gradually realized. There arose specialized tools to serve essential purposes in some broader system, and only recently have people dared to conceive of what this latter could be. In 2014, a wunderkind named Vitalik Buterin created Ethereum, a fully programmable blockchain. Solidity is a Turing-complete language of smart contracts, autonomous programs which enable interactions and enact rules on the network. With this framework, one can not only transact with others, but implement any kind of process; one can build currencies, websites, or organizations—decentralized applications, constructed with smart contracts, could be just about anything.

There is understandably great confidence and excitement for these ventures, and many are receiving massive public investment. Seriously, the numbers are staggering—but most of it is pure hype. There is talk of the first global computer, the internet of value, a single shared source of truth, and other speculative descriptions. But compared to the ambition, the actual theory is woefully underdeveloped. So far, implementations make almost no use of the powerful ideas of mathematics. There are still basic flaws in blockchain itself, the foundation of almost all decentralized technology. For example, the two viable candidates for transaction verification are called Proof of Work and Proof of Stake: the former requires unsustainable consumption of resources, namely hardware and electricity, and the latter is susceptible to centralization. Scalability is a major problem, and so are the cost and speed of transactions. A major Ethereum dApp, The DAO (a decentralized autonomous organization), was hacked.

These statements are absolutely not meant to disregard all of the great work of this community; it is primarily rhetoric to distinguish the high ideals of Statebox, and I lack the eloquence to make the point diplomatically, let alone the knowledge to give a real account of this huge endeavor. We now return to the rhetoric.

What seems to be lost in the commotion is the simple recognition that we do not yet really know what we should make, nor how to do so. The whole idea is simply too big—the space of possibility is almost completely unknown, because this innovation can open every aspect of society to reform. But as usual, people try to ignore their ignorance, imagining it will disappear, and millions clamor about things we do not yet understand. Most involved are seeing decentralization as an exciting business venture, rather than our best hope to change the way of this broken world; they want to cash in on another technological wave. Of the relatively few idealists, most still retain the assumptions and limitations of the blockchain.

For all this talk, there is little discussion of how to even work toward the ideal abstract design. Most mathematics associated to blockchain is statistical analysis of consensus, while we’re sitting on a mountain of powerful categorical knowledge of systems. At the summit, Prof. Neil Ghani said “it’s like we’re on the Moon, talking about going to Mars, while everyone back on Earth still doesn’t even have fire.” We have more than enough conceptual technology to begin developing an ideal and comprehensive system, if the right minds come together. Theory guides practice, practice motivates theory—the potential is immense.

Fortunately, there are those who have this big picture in mind. Long before the blockchain craze, Jelle saw the fundamental importance of both distributed systems and the need for academic-industrial symbiosis. In the mid-2000’s, he used Petri nets to create process tools for businesses. Employees could design and implement any kind of abstract workflow to more effectively communicate and produce. Jelle would provide consultation to optimize these processes, and integrate them into their existing infrastructure—as it executed, it would generate tasks, emails, forms and send them to designated individuals to be completed for the next iteration. Many institutions would have to shell out millions of dollars to IBM or Fujitsu for this kind of software, and his was more flexible and intuitive. This left a strong impression on Jelle, regarding the power of Petri nets and the impact of deliberate design.

Many experiences like this gradually instilled in Jelle a conviction to expand his knowledge and begin planning bold changes to the world of programming. He attended mathematics conferences, and would discuss with theorists from many relevant subjects. On the island, he told me that it was actually one of Baez’s talks about networks which finally inspired him to go for this huge idea. By sincerely and openly reaching out to the whole community, Jelle made many valuable connections. He invited these thinkers to share his vision—theorists from all over Europe, and some from overseas, gathered in Croatia to learn and begin to develop this project—and it was a great success.

By now you may be thinking, alright kid spill the beans already. Here they are, right into your brain—well, most will be in the next post, but we should at least have a quick overview of some of the main ideas not already discussed.

The notion of open system complements compositionality. The great difference between closure and openness, in society as well as theory, was a central theme in many of our conversations during the summit. Although we try to isolate and suspend life and cognition in abstraction, the real, concrete truth is what flows through these ethereal forms. Every system in Statebox is implicitly open, and this impels design to idealize the inner and outer connections of processes. Open systems are central to the Baez Network Theory research team. There are several ways to categorically formalize open systems; the best are still being developed, but the first main example can be found in The Algebra of Open and Interconnected Systems by Brendan Fong, an early member of the team.

Monoidal categories, as this blog knows well, represent systems with both series and parallel processes. One of the great challenges of this new era of interconnection is distributed computation—getting computers to work together as a supercomputer, and monoidal categories are essential to this. Here, objects are data types, and morphisms are computations, while composition is serial and tensor is parallel. As Dr. Baez has demonstrated with years of great original research, monoidal categories are essential to understanding the complexity of the world. If we can connect our knowledge of natural systems to social systems, we can learn to integrate valuable principles—a key example being complete resource cognizance.
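
Here is a minimal sketch of the two kinds of composition, with ordinary Python functions standing in for morphisms; it is an illustration of the idea, not Statebox’s actual machinery.

# Morphisms as functions: serial composition and parallel (tensor) composition.
def compose(f, g):
    # Serial: do f, then g.
    return lambda x: g(f(x))

def tensor(f, g):
    # Parallel: run f and g side by side on a pair of inputs.
    return lambda pair: (f(pair[0]), g(pair[1]))

double = lambda x: 2 * x
shout = lambda s: s.upper() + "!"

pipeline = compose(double, double)      # one computation feeding the next
side_by_side = tensor(double, shout)    # independent computations in parallel

print(pipeline(3))                 # 12
print(side_by_side((3, "ready")))  # (6, 'READY!')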

Petri nets are presentations of free strict symmetric monoidal categories, and as such they are ideal models of “normal” computation, i.e. associative, unital, and commutative. Open Petri nets are the workhorses of Statebox. They are the morphisms of a category which is itself monoidal—and via openness it is even richer and more versatile. Most importantly it is compact closed, which introduces a simple but crucial duality into computation—input-output interchange—which is impossible in conventional cartesian closed computation, and actually brings the paradigm closer to quantum computation.

Petri nets represent processes in an intuitive, consistent, and decentralized way. These will be multi-layered via the notion of operad and a resourceful use of Petri net tokens, representing the interacting levels of a system. Compositionality makes exploring their state space much easier: the state space of a big process can be constructed from those of smaller ones, a technique that more often than not avoids state space explosion, a long-standing problem in Petri net analysis. The correspondence between open Petri nets and a logical calculus, called place/transition calculus, allows the user to perform queries on the Petri net, and a revolutionary technique called information-gain computing greatly reduces response time.
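
To make the model concrete, here is a bare-bones Petri net step in Python: places hold tokens, and a transition fires when its input places hold enough tokens, consuming them and producing outputs. Real Statebox nets are open, typed, and categorical, none of which this toy captures; the example net and its place names are made up for illustration.

from collections import Counter

def can_fire(marking: Counter, inputs: Counter) -> bool:
    # A transition is enabled when every input place holds enough tokens.
    return all(marking[place] >= n for place, n in inputs.items())

def fire(marking: Counter, inputs: Counter, outputs: Counter) -> Counter:
    # Firing consumes the input tokens and produces the output tokens.
    new = marking.copy()
    new.subtract(inputs)
    new.update(outputs)
    return new

# A made-up net: an "order" plus a "worker" yields a "shipment", returning the worker.
marking = Counter({"order": 2, "worker": 1})
inputs = Counter({"order": 1, "worker": 1})
outputs = Counter({"shipment": 1, "worker": 1})

while can_fire(marking, inputs):
    marking = fire(marking, inputs, outputs)
print(+marking)   # Counter({'shipment': 2, 'worker': 1})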

Dependently typed functional programming is the exoskeleton of this beautiful beast; in particular, the underlying language is Idris. Dependent types arose out of both theoretical mathematics and computer science, and they are beginning to be recognized as very general, powerful, and natural in practice. Functional programming is a similarly pure and elegant paradigm for “open” computation. They are fascinating and inherently categorical, and deserve whole blog posts in the future.

Even economics has opened its mind to categories. Statebox is very fortunate to have several of these pioneers—open game theory is a categorical, compositional version of game theory, which allows the user to dynamically analyze and optimize code. Jules’ choice of the term “teleological category” is prescient; it is about more than just efficiency—it introduces the possibility of building principles into systems, by creating game-theoretical incentives which can guide people to cooperate for the greater good, and gradually lessen the influence of irrational, selfish priorities.

Categories are the language by which Petri nets, functional programming, and open games can communicate—and amazingly, all of these theories are unified in an elegant representation called string diagrams. These allow the user to forget the formalism, and reason purely in graphical terms. All the complex mathematics goes under the hood, and the user only needs to work with nodes and strings, which are guaranteed to be formally correct.

Category theory also models the data structures that are used by Statebox: Typedefs is a very lightweight—but also very expressive—data structure, that is at the very core of Statebox. It is based on initial F-algebras, and can be easily interpreted in a plethora of pre-existing solutions, enabling seamless integration with existing systems. One of the core features of Typedefs is that serialization is categorically internalized in the data structure, meaning that every operation involving types can receive a unique hash and be recorded on the blockchain public ledger. This is one of the many components that make Statebox fail-resistant: every process and event is accounted for on the public ledger, and the whole history of a process can be rolled back and analyzed thanks to the blockchain technology.
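
As a toy sketch of the “every type definition gets a unique hash” idea: serialize a definition canonically and hash the result, so identical definitions always receive the same identifier. This is not the actual Typedefs encoding, only an illustration of content-addressing; the example definition is hypothetical.

import hashlib, json

def typedef_hash(typedef: dict) -> str:
    # Serialize the definition canonically, then hash it, so the same
    # definition always maps to the same identifier.
    canonical = json.dumps(typedef, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# A hypothetical definition of a list of naturals as a sum of products.
nat_list = {"name": "NatList",
            "cases": [{"Nil": []}, {"Cons": ["Nat", "NatList"]}]}

print(typedef_hash(nat_list)[:16])   # a stable identifier for this definition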

The Statebox team is currently working on a monograph that will neatly present how all the pertinent categorical theories work together in Statebox. This is a formidable task that will take months to complete, but will also be the cleanest way to understand how Statebox works, and which mathematical questions have still to be answered to obtain a working product. It will be a thorough document that also considers important aspects such as our guiding ethics.

The team members are devoted to creating something positive and different, explicitly and solely to better the world. The business paradigm is based on the principle that innovation should be open and collaborative, rather than competitive and exclusive. We want to share ideas and work with you. There are many blooming endeavors which share the ideals that have been described in this article, and we want them all to learn from each other and build off one another.

For example, Statebox contributor and visionary economist Viktor Winschel has a fantastic project called Oicos. The great proponent of applied category theory, David Spivak, has an exciting and impressive organization called Categorical Informatics. Mike Stay, a past student of Dr. Baez, has started a company called Pyrofex, which is developing categorical distributed computation. There are also somewhat related languages for blockchain, such as Simplicity, and innovative distributed systems such as Iota and RChain. Even Ethereum is beginning to utilize categories, with Casper. And of course there are research groups, such as Network Theory and Mathematically Structured Programming, as well as so many important papers, such as Algebraic Databases. This is just a slice of everything going on; as far as I know there is not yet a comprehensive account of all the great applied category theory and distributed innovations being developed. Inevitably these endeavors will follow the principle they share, and come together in a big way. Statebox is ready, willing, and able to help make this reality.

If you are interested in Statebox, you are welcomed with open arms. You can contact Jelle at jelle@statebox.io, Fabrizio at fabrizio@statebox.org, Emi at emi@statebox.io, Anton at anton@statebox.io; they can provide more information, connect you to the discussion, or anything else. There will be a second summit in 2018 in about six months, details to be determined. We hope to see you there. Future posts will keep you updated, and explain more of the theory and design of Statebox. Thank you very much for reading.

P.S. Found unexpected support in Šibenik! Great bar—once a reservoir.


Azimuth Backup Project (Part 3)

22 January, 2017



Along with the bad news there is some good news:

• Over 380 people have pledged over $14,000 to the Azimuth Backup Project on Kickstarter, greatly surpassing our conservative initial goal of $5,000.

• Given our budget, we currently aim at backing up 40 terabytes of data, and we are well on our way to this goal. You can see what we’ve done at Our Progress, and what we’re still doing at the Issue Tracker.

• I have gotten a commitment from Danna Gianforte, the head of Computing and Communications at U. C. Riverside, that eventually the university will maintain a copy of our data. (This commitment is based on my earlier estimate that we’d have 20 terabytes of data, so I need to see if 40 is okay.)

• I have gotten two offers from other people, saying they too can hold our data.

I’m hoping that the data at U. C. Riverside will be made publicly available through a server. The other offers may involve it being held ‘secretly’ until such time as it becomes needed; that has its own complementary advantages.

However, the interesting problem that confronts us now is: how to spend our money?

You can see how we’re currently spending it on our Budget and Spending page. Basically, we’re paying a firm called Hetzner for servers and storage boxes.

We could simply continue to do this until our money runs out. I hope that long before then, U. C. Riverside will have taken over some responsibilities. If so, there would be a long period where our money would largely pay for a redundant backup. Redundancy is good, but perhaps there is something better.

Two members of our team, Sakari Maaranen and Greg Kochanski, have thoughts on this matter which I’d like to share. Sakari posted his thoughts on Google+, while Greg posted his in an email which he’s letting me share here.

Please read these and offer us your thoughts! Maybe you can help us decide on the best strategy!

Sakari Maaranen

For the record, my views on our strategy of using the budget that the Azimuth Climate Data Backup Project now has.

People have contributed it to this effort specifically.

Some non-government entities have offered “free hosting”. Of course the project should take any and all free offers to host our data. Those would not be spending our budget however. And they are still paying for it, even if they offered it to us “for free”.

As far as it comes to spending, I think we should think in terms of 1) terabytemonths, and 2) sufficient redundancy, and do that as cost-efficiently as possible. We should not just dump the money to any takers, but think of the best bang for the buck. We owe that to the people who have contributed now.

For example, if we burn the cash quick to expensive storage, I would consider that a failure. Instead, we must plan for the best use of the budget towards our mission.

What we have promised to the people is that we back up and serve these data sets, by the money they have given to us. Let’s do exactly that.

We are currently serving the mission at approximately €0.006 per gigabytemonth at least for as long as we have volunteers to work for free. The cost could be slightly higher if we paid for professional maintenance, which should be a reasonable assumption if we plan for long term service. Volunteer work cannot be guaranteed forever, even if it works temporarily.

This is one view and the question is open to public discussion.
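
For scale: at the rate Sakari quotes, the 40 terabytes we are currently aiming at works out roughly as follows. This is a back-of-the-envelope check for a single copy, ignoring redundancy, bandwidth, and paid maintenance.

# Rough storage cost at 0.006 euros per gigabyte-month for 40 terabytes.
rate_per_gb_month = 0.006        # euros
gigabytes = 40 * 1000            # 40 terabytes, counting 1 TB as 1000 GB

monthly_cost = gigabytes * rate_per_gb_month
print(f"About {monthly_cost:.0f} euros per month, or {12 * monthly_cost:.0f} euros per year")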

Greg Kochanski

Some misc thoughts.

1) As I see it, we have made some promise of serving the data (“create a better interface for getting it”) which can be an expensive thing.

UI coding isn’t all that easy, and takes some time.

Beyond that, we’ve promised to back up the data, and once you say “backup”, you’ve also made an implicit promise to make the data available.

2) I agree that if we have a backup, it is a logical extension to take continuous backups, but I wouldn’t say it’s necessary.

Perhaps the way to think about it is to ask the question, “what do our donors likely want”?

3) Clearly they want to preserve the data, in case it disappears from the Federal sites. So, that’s job 1. And, if it does disappear, we need to make it available.

3a) Making it available will require some serving CPU, disk, and network. We may need to worry about DDOS attacks, though perhaps we could get free coverage from Akamai or Google Project Shield.

3b) Making it available may imply paying some students to write Javascript and HTML to put up a front-end to allow people to access the data we are collecting.

Not all the data we’re collecting is in strictly servable form. Some of the databases, for example, aren’t usefully servable in the form we collect, and we know some links will be broken because of missing pages, or because of wget’s design flaw.*

[* Wget stores http://a/b/c as a file, a/b/c, where a/b is a directory. Wget stores http://a/b as a file a/b, where a/b is a file.

Therefore, both cannot exist simultaneously on disk. If they do, wget drops one.]

Points 3 & 3a imply that we need to keep some money in the bank until either the websites are taken down, or we decide that the threat has abated. So, we need to figure out how much money to keep as a serving reserve. It doesn’t sound like UCR has committed to serve the data, though you could perhaps ask.

Beyond the serving reserve, I think we are free to do better backups (i.e. more than one data collection), and change detection.