Vladimir Voevodsky, 1966 — 2017

6 October, 2017



Vladimir Voevodsky died last week. He won the Fields Medal in 2002 for proving the Milnor conjecture in a branch of algebra known as algebraic K-theory. He continued to work on this subject until he helped prove the more general Bloch–Kato conjecture in 2010.

Proving these results—which are too technical to easily describe to nonmathematicians!—required him to develop a dream of Grothendieck: the theory of motives. Very roughly, this is a way of taking the space of solutions of some polynomial equations and chopping it apart into building blocks. But this process of ‘chopping’ and also these building blocks, called ‘motives’, are very abstract—nothing easy to visualize.

It’s a bit like how a proton is made of quarks. You never actually see a quark in isolation, so you have to think very hard to realize they are there at all. But once you know this, a lot of things become clear.

This is wonderful, profound mathematics. But in the process of proving the Bloch–Kato conjecture, Voevodsky became tired of this stuff. He wanted to do something more useful… and more ambitious. He later said:

It was very difficult. In fact, it was 10 years of technical work on a topic that did not interest me during the last 5 of these 10 years. Everything was done only through willpower.

Since the autumn of 1997, I already understood that my main contribution to the theory of motives and motivic cohomology had been made. Since that time I had been very consciously and actively looking for a topic that I would take up after I fulfilled my obligations related to the Bloch–Kato conjecture.

I quickly realized that if I wanted to do something really serious, then I should make the most of my accumulated knowledge and skills in mathematics. On the other hand, seeing the trends in the development of mathematics as a science, I realized that the time is coming when the proof of yet another conjecture won’t have much of an effect. I realized that mathematics is on the verge of a crisis, or rather, two crises.

The first is connected with the separation of “pure” and applied mathematics. It is clear that sooner or later there will be a question about why society should pay money to people who are engaged in things that do not have any practical applications.

The second, less obvious, is connected with the growing complexity of pure mathematics, which leads to the fact that, sooner or later, articles will become too complicated for detailed verification and the process of accumulating undetected errors will begin. And since mathematics is a very deep science, in the sense that the results of one article usually depend on the results of many, many previous articles, this accumulation of errors is very dangerous for mathematics.

So I decided that I needed to try to do something that would help prevent these crises. For the first crisis, this meant that it was necessary to find an applied problem that required, for its solution, the methods of pure mathematics developed in recent years or even decades.

He looked for such a problem. He studied biology and found an interesting candidate. He worked on it very hard, but then decided he’d gone down a wrong path:

Since childhood I have been interested in the natural sciences (physics, chemistry, biology), as well as in the theory of computer languages, and since 1997 I have read a lot on these topics, and even took several undergraduate and graduate courses. In fact, I “updated” and deepened, to a very large extent, the knowledge that I already had. All this time I was looking for open problems that I recognized would be of interest to me and to which I could apply modern mathematics.

As a result, I chose (incorrectly, I now understand) the problem of recovering the history of populations from their modern genetic composition. I worked on this task for a total of about two years, and in the end, already by 2009, I realized that what I was inventing was useless. In my life, so far, it was, perhaps, my greatest scientific failure. A lot of work was invested in a project that completely failed. There was some benefit, of course: I learned a lot of probability theory, which I had known badly, and also learned a lot about demography and demographic history.

But he bounced back! He came up with a new approach to the foundations of mathematics, and helped organize a team at the Institute for Advanced Study in Princeton to develop it further. This approach is now called homotopy type theory or univalent foundations. It’s fundamentally different from set theory. It treats the fundamental concept of equality in a brand new way! And it’s designed to be done with the help of computers.

It seems he started down this new road when the mathematician Carlos Simpson pointed out a serious mistake in a paper he’d written.

I think it was at this moment that I largely stopped doing what is called “curiosity-driven research” and started to think seriously about the future. I didn’t have the tools to explore the areas where curiosity was leading me and the areas that I considered to be of value and of interest and of beauty.

So I started to look into what I could do to create such tools. And it soon became clear that the only long-term solution was somehow to make it possible for me to use computers to verify my abstract, logical, and mathematical constructions. The software for doing this has been in development since the sixties. At the time, when I started to look for a practical proof assistant around 2000, I could not find any. There were several groups developing such systems, but none of them was in any way appropriate for the kind of mathematics for which I needed a system.

When I first started to explore the possibility, computer proof verification was almost a forbidden subject among mathematicians. A conversation about the need for computer proof assistants would invariably drift to Gödel’s incompleteness theorem (which has nothing to do with the actual problem) or to one or two cases of verification of already existing proofs, which were used only to demonstrate how impractical the whole idea was. Among the very few mathematicians who persisted in trying to advance the field of computer verification in mathematics during this time were Tom Hales and Carlos Simpson. Today, only a few years later, computer verification of proofs and of mathematical reasoning in general looks completely practical to many people who work on univalent foundations and homotopy type theory.

The primary challenge that needed to be addressed was that the foundations of mathematics were unprepared for the requirements of the task. Formulating mathematical reasoning in a language precise enough for a computer to follow meant using a foundational system of mathematics not as a standard of consistency to establish a few fundamental theorems, but as a tool that can be employed in everyday mathematical work. There were two main problems with the existing foundational systems, which made them inadequate. Firstly, existing foundations of mathematics were based on the languages of predicate logic and languages of this class are too limited. Secondly, existing foundations could not be used to directly express statements about such objects as, for example, the ones in my work on 2-theories.

Still, it is extremely difficult to accept that mathematics is in need of a completely new foundation. Even many of the people who are directly connected with the advances in homotopy type theory are struggling with this idea. There is a good reason: the existing foundations of mathematics—ZFC and category theory—have been very successful. Overcoming the appeal of category theory as a candidate for new foundations of mathematics was for me personally the most challenging.

Homotopy type theory is now a vital and exciting area of mathematics. It’s far from done, and to make it live up to Voevodsky’s dreams will require brand new ideas—not just incremental improvements, but actual sparks of genius. For some of the open problems, see Mike Shulman’s comment on the n-Category Café, and some replies to that.

I only met him a few times, but as far as I can tell Voevodsky was a completely unpretentious person. You can see that in the picture here.

He was also a very complex person. For example, you might not guess that he took great wildlife photos:



You also might not guess at this side of him:

In 2006–2007 a lot of external and internal events happened to me, after which my point of view on the questions of the “supernatural” changed significantly. What happened to me during these years can perhaps be compared most closely to what happened to Carl Jung in 1913–14. Jung called it a “confrontation with the unconscious”. I do not know what to call it, but I can describe it in a few words. Remaining more or less normal, apart from the fact that I was trying to discuss what was happening to me with people whom I should not have discussed it with, I had in a few months acquired a very considerable experience of visions, voices, periods when parts of my body did not obey me, and a lot of incredible accidents. The most intense period was in mid-April 2007, when I spent 9 days (7 of them in the Mormon capital of Salt Lake City) without falling asleep at all.

Almost from the very beginning, I found that I could control many of these phenomena (voices, visions, various sensory hallucinations). So I was not scared and did not feel sick, but perceived everything as something very interesting, actively trying to interact with those “beings” in the auditory, visual and then tactile spaces that appeared around me (by themselves or when I invoked them). I should probably say, to avoid possible speculation on this subject, that I did not use any drugs during this period, tried to eat and sleep a lot, and drank diluted white wine.

Another comment: when I say “beings”, naturally I mean what in modern terminology are called complex hallucinations. The word “beings” emphasizes that these hallucinations themselves “behaved”, possessed a memory independent of my memory, and reacted to attempts at communication. In addition, they were often perceived in concert in various sensory modalities. For example, I played several times with a (hallucinated) ball with a (hallucinated) girl—and I saw this ball, and felt it with my palm when I threw it.

Despite the fact that all this was very interesting, it was very difficult. It happened for several periods, the longest of which lasted from September 2007 to February 2008 without breaks. There were days when I could not read, and days when coordination of movements was broken to such an extent that it was difficult to walk.

I managed to get out of this state due to the fact that I forced myself to start math again. By the middle of spring 2008 I could already function more or less normally and even went to Salt Lake City to look at the places where I wandered, not knowing where I was, in the spring of 2007.

In short, he was a genius akin to Cantor or Grothendieck, at times teetering on the brink of sanity, yet gripped by an immense desire for beauty and clarity, engaging in struggles that gripped his whole soul. From the fires of this volcano, truly original ideas emerge.

This last quote, and the first few quotes, are from some interviews in Russian, done by Roman Mikhailov, which Mike Stay pointed out to me. I used Google Translate and polished the results a bit:

Интервью Владимира Воеводского (часть 1), 1 July 2012. English version via Google Translate: Interview with Vladimir Voevodsky (Part 1).

Интервью Владимира Воеводского (часть 2), 5 July 2012. English version via Google Translate: Interview with Vladimir Voevodsky (Part 2).

The quote about the origins of ‘univalent foundations’ comes from his nice essay here:

• Vladimir Voevodsky, The origins and motivations of univalent foundations, 2014.

There’s also a good obituary of Voevodsky explaining the relation of his work to Grothendieck’s ideas in simple terms:

• Institute for Advanced Study, Vladimir Voevodsky 1966–2017, 4 October 2017.

The photograph of Voevodsky is from Andrej Bauer’s website:

• Andrej Bauer, Photos of mathematicians.

To learn homotopy type theory, try this great book:

• Homotopy Type Theory: Univalent Foundations of Mathematics, The Univalent Foundations Program, Institute for Advanced Study.


Azimuth Backup Project (Part 5)

5 October, 2017

I haven’t spoken much about the Azimuth Climate Data Backup Project, but it’s going well, and I’ll be speaking about it soon, here:

International Open Access Week, Wednesday 25 October 2017, 9:30–11:00 a.m., University of California, Riverside, Orbach Science Library, Room 240.

“Open in Order to Save Data for Future Research” is the 2017 event theme.

Open Access Week is an opportunity for the academic and research community to learn about the potential benefits of sharing what they’ve learned with colleagues, and to help inspire wider participation in helping to make “open access” a new norm in scholarship, research and data planning and preservation.

The Open Access movement is made up of advocates (librarians, publishers, university repositories, etc.) who promote the free, immediate, and online publication of research.

The program will provide information on issues related to saving open data, including climate change and scientific data. The panelists will also describe open access projects in which they have participated to save climate data and to preserve end-of-term presidential data, information likely to be used by the university community for research and scholarship.

The program includes:

• Brianna Marshall, Director of Research Services, UCR Library: Brianna welcomes guests and introduces panelists.

• John Baez, Professor of Mathematics, UCR: John will describe his activities to save US government climate data through his collaborative effort, the Azimuth Climate Data Backup Project. All of the saved data is now open access for everyone to utilize for research and scholarship.

• Perry Willett, Digital Preservation Projects Manager, California Digital Library: Perry will discuss the open data initiatives in which CDL participates, including the end-of-term presidential web archiving that is done in partnership with the Library of Congress, Internet Archive and University of North Texas.

• Kat Koziar, Data Librarian, UCR Library: Kat will give an overview of DASH, the UC system data repository, and provide suggestions for researchers interested in making their data open.

This will be the eighth International Open Access Week program hosted by the UCR Library.

The event is free and open to the public. Light refreshments will be served.


Applied Category Theory at UCR (Part 2)

21 September, 2017

I’m running a special session on applied category theory, and now the program is available:

Applied category theory, Fall Western Sectional Meeting of the AMS, 4–5 November 2017, U.C. Riverside.

This is going to be fun.

My former student Brendan Fong is now working with David Spivak at MIT, and they’re both coming. My collaborator John Foley at Metron is also coming: we’re working on the CASCADE project for designing networked systems.

Dmitry Vagner is coming from Duke: he wrote a paper with David and Eugene Lerman on operads and open dynamical systems. Christina Vasilakopoulou, who has worked with David and Patrick Schultz on dynamical systems, has just joined our group at UCR, so she will also be here. And the three of them have worked with Ryan Wisnesky on algebraic databases. Ryan will not be here, but his colleague Peter Gates will: together with David they have a startup called Categorical Informatics, which uses category theory to build sophisticated databases.

That’s not everyone—for example, most of my students will be speaking at this special session, and other people too—but that gives you a rough sense of some people involved. The conference is on a weekend, but John Foley and David Spivak and Brendan Fong and Dmitry Vagner are staying on for longer, so we’ll have some long conversations… and Brendan will explain decorated corelations in my Tuesday afternoon network theory seminar.

Here’s the program. Click on talk titles to see abstracts. For a multi-author talk, the person with the asterisk after their name is doing the talking. All the talks will be in Room 268 of the Highlander Union Building or ‘HUB’.

Saturday November 4, 2017, 9:00 a.m.-10:50 a.m.

9:00 a.m.
A higher-order temporal logic for dynamical systems.
David I. Spivak, MIT

10:00 a.m.
Algebras of open dynamical systems on the operad of wiring diagrams.
Dmitry Vagner*, Duke University
David I. Spivak, MIT
Eugene Lerman, University of Illinois at Urbana-Champaign

10:30 a.m.
Abstract dynamical systems.
Christina Vasilakopoulou*, University of California, Riverside
David Spivak, MIT
Patrick Schultz, MIT

Saturday November 4, 2017, 3:00 p.m.-5:50 p.m.

3:00 p.m.
Black boxes and decorated corelations.
Brendan Fong, MIT

4:00 p.m.
Compositional modelling of open reaction networks.
Blake S. Pollard*, University of California, Riverside
John C. Baez, University of California, Riverside

4:30 p.m.
A bicategory of coarse-grained Markov processes.
Kenny Courser, University of California, Riverside

5:00 p.m.
A bicategorical syntax for pure state qubit quantum mechanics.
Daniel M. Cicala, University of California, Riverside

5:30 p.m.
Open systems in classical mechanics.
Adam Yassine, University of California, Riverside

Sunday November 5, 2017, 9:00 a.m.-10:50 a.m.

9:00 a.m.
Controllability and observability: diagrams and duality.
Jason Erbele, Victor Valley College

9:30 a.m.
Frobenius monoids, weak bimonoids, and corelations.
Brandon Coya, University of California, Riverside

10:00 a.m.
Compositional design and tasking of networks.
John D. Foley*, Metron, Inc.
John C. Baez, University of California, Riverside
Joseph Moeller, University of California, Riverside
Blake S. Pollard, University of California, Riverside

10:30 a.m.
Operads for modeling networks.
Joseph Moeller*, University of California, Riverside
John Foley, Metron Inc.
John C. Baez, University of California, Riverside
Blake S. Pollard, University of California, Riverside

Sunday November 5, 2017, 2:00 p.m.-4:50 p.m.

2:00 p.m.
Reeb graph smoothing via cosheaves.
Vin de Silva, Department of Mathematics, Pomona College

3:00 p.m.
Knowledge representation in bicategories of relations.
Evan Patterson*, Stanford University, Statistics Department

3:30 p.m.
The multiresolution analysis of flow graphs.
Steve Huntsman*, BAE Systems

4:00 p.m.
Data modeling and integration using the open source tool Algebraic Query Language (AQL).
Peter Y. Gates*, Categorical Informatics
Ryan Wisnesky, Categorical Informatics


Applied Category Theory 2018

12 September, 2017

There will be a conference on applied category theory!

Applied Category Theory (ACT 2018). School 23–27 April 2018 and workshop 30 April–4 May 2018 at the Lorentz Center in Leiden, the Netherlands. Organized by Bob Coecke (Oxford), Brendan Fong (MIT), Aleks Kissinger (Nijmegen), Martha Lewis (Amsterdam), and Joshua Tan (Oxford).

The plenary speakers will be:

• Samson Abramsky (Oxford)
• John Baez (UC Riverside)
• Kathryn Hess (EPFL)
• Mehrnoosh Sadrzadeh (Queen Mary)
• David Spivak (MIT)

There will be a lot more to say as this progresses, but for now let me just quote from the conference website:

Applied Category Theory (ACT 2018) is a five-day workshop on applied category theory running from April 30 to May 4 at the Lorentz Center in Leiden, the Netherlands.

Towards an Integrative Science: in this workshop, we want to instigate a multi-disciplinary research program in which concepts, structures, and methods from one scientific discipline can be reused in another. The aim of the workshop is to (1) explore the use of category theory within and across different disciplines, (2) create a more cohesive and collaborative ACT community, especially among early-stage researchers, and (3) accelerate research by outlining common goals and open problems for the field.

While the workshop will host discussions on a wide range of applications of category theory, there will be four special tracks on exciting new developments in the field:

1. Dynamical systems and networks
2. Systems biology
3. Cognition and AI
4. Causality

Accompanying the workshop will be an Adjoint Research School for early-career researchers. This will comprise a 16-week online seminar, followed by a 4-day research meeting at the Lorentz Center in the week prior to ACT 2018. Applications to the school will open prior to October 1, and are due November 1. Admissions will be notified by November 15.

Sincerely,
The organizers

Bob Coecke (Oxford), Brendan Fong (MIT), Aleks Kissinger (Nijmegen), Martha Lewis (Amsterdam), and Joshua Tan (Oxford)

We welcome any feedback! Please send comments to this link.

About Applied Category Theory

Category theory is a branch of mathematics originally developed to transport ideas from one branch of mathematics to another, e.g. from topology to algebra. Applied category theory refers to efforts to transport the ideas of category theory from mathematics to other disciplines in science, engineering, and industry.

This site originated from discussions at the Computational Category Theory Workshop at NIST on Sept. 28-29, 2015. It serves to collect and disseminate research, resources, and tools for the development of applied category theory, and hosts a blog for those involved in its study.

The proposal: Towards an Integrative Science

Category theory was developed in the 1940s to translate ideas from one field of mathematics, e.g. topology, to another field of mathematics, e.g. algebra. More recently, category theory has become an unexpectedly useful and economical tool for modeling a range of different disciplines, including programming language theory [10], quantum mechanics [2], systems biology [12], complex networks [5], database theory [7], and dynamical systems [14].

A category consists of a collection of objects together with a collection of maps between those objects, satisfying certain rules. Topologists and geometers use category theory to describe the passage from one mathematical structure to another, while category theorists are also interested in categories for their own sake. In computer science and physics, many types of categories (e.g. topoi or monoidal categories) are used to give a formal semantics of domain-specific phenomena (e.g. automata [3], or regular languages [11], or quantum protocols [2]). In the applied category theory community, a long-articulated vision understands categories as mathematical workspaces for the experimental sciences, similar to how they are used in topology and geometry [13]. This has proved true in certain fields, including computer science and mathematical physics, and we believe that these results can be extended in an exciting direction: we believe that category theory has the potential to bridge specific different fields, and moreover that developments in such fields (e.g. automata) can be transferred successfully into other fields (e.g. systems biology) through category theory. Already, for example, the categorical modeling of quantum processes has helped solve an important open problem in natural language processing [9].

In this workshop, we want to instigate a multi-disciplinary research program in which concepts, structures, and methods from one discipline can be reused in another. Tangibly and in the short-term, we will bring together people from different disciplines in order to write an expository survey paper that grounds the varied research in applied category theory and lays out the parameters of the research program.

In formulating this research program, we are motivated by recent successes where category theory was used to model a wide range of phenomena across many disciplines, e.g. open dynamical systems (including open Markov processes and open chemical reaction networks), entropy and relative entropy [6], and descriptions of computer hardware [8]. Several talks will address some of these new developments. But we are also motivated by an open problem in applied category theory, one which was observed at the most recent workshop in applied category theory (Dagstuhl, Germany, in 2015): “a weakness of semantics/CT is that the definitions play a key role. Having the right definitions makes the theorems trivial, which is the opposite of hard subjects where they have combinatorial proofs of theorems (and simple definitions). […] In general, the audience agrees that people see category theorists only as reconstructing the things they knew already, and that is a disadvantage, because we do not give them a good reason to care enough” [1, pg. 61].

In this workshop, we wish to articulate a natural response to the above: instead of treating the reconstruction as a weakness, we should treat the use of categorical concepts as a natural part of transferring and integrating knowledge across disciplines. The restructuring employed in applied category theory cuts through jargon, helping to elucidate common themes across disciplines. Indeed, the drive for a common language and comparison of similar structures in algebra and topology is what led to the development of category theory in the first place, and recent hints show that this approach is not only useful between mathematical disciplines, but between scientific ones as well. For example, the ‘Rosetta Stone’ of Baez and Stay demonstrates how symmetric monoidal closed categories capture the common structure between logic, computation, and physics [4].

[1] Samson Abramsky, John C. Baez, Fabio Gadducci, and Viktor Winschel. Categorical methods at the crossroads. Report from Dagstuhl Perspectives Workshop 14182, 2014.

[2] Samson Abramsky and Bob Coecke. A categorical semantics of quantum protocols. In Handbook of Quantum Logic and Quantum Structures. Elsevier, Amsterdam, 2009.

[3] Michael A. Arbib and Ernest G. Manes. A categorist’s view of automata and systems. In Ernest G. Manes, editor, Category Theory Applied to Computation and Control. Springer, Berlin, 2005.

[4] John C. Baez and Mike Stay. Physics, topology, logic and computation: a Rosetta stone. In Bob Coecke, editor, New Structures for Physics. Springer, Berlin, 2011.

[5] John C. Baez and Brendan Fong. A compositional framework for passive linear networks. arXiv e-prints, 2015.

[6] John C. Baez, Tobias Fritz, and Tom Leinster. A characterization of entropy in terms of information loss. Entropy, 13(11):1945–1957, 2011.

[7] Michael Fleming, Ryan Gunther, and Robert Rosebrugh. A database of categories. Journal of Symbolic Computation, 35(2):127–135, 2003.

[8] Dan R. Ghica and Achim Jung. Categorical semantics of digital circuits. In Ruzica Piskac and Muralidhar Talupur, editors, Proceedings of the 16th Conference on Formal Methods in Computer-Aided Design. Springer, Berlin, 2016.

[9] Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, Stephen Pulman, and Bob Coecke. Reasoning about meaning in natural language with compact closed categories and Frobenius algebras. In Logic and Algebraic Structures in Quantum Computing and Information. Cambridge University Press, Cambridge, 2013.

[10] Eugenio Moggi. Notions of computation and monads. Information and Computation, 93(1):55–92, 1991.

[11] Nicholas Pippenger. Regular languages and Stone duality. Theory of Computing Systems, 30(2):121–134, 1997.

[12] Robert Rosen. The representation of biological systems from the standpoint of the theory of categories. Bulletin of Mathematical Biophysics, 20(4):317–341, 1958.

[13] David I. Spivak. Category Theory for Scientists. MIT Press, Cambridge MA, 2014.

[14] David I. Spivak, Christina Vasilakopoulou, and Patrick Schultz. Dynamical systems and sheaves. arXiv e-prints, 2016.


Postdoc in Applied Category Theory

8 September, 2017

guest post by Spencer Breiner

One Year Postdoc Position at Carnegie Mellon/NIST

We are seeking an early-career researcher with a background in category theory, functional programming and/or electrical engineering for a one-year post-doctoral position supported by an Early-concept Grant (EAGER) from the NSF’s Systems Science program. The position will be managed through Carnegie Mellon University (PI: Eswaran Subrahmanian), but the position itself will be located at the US National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland, outside of Washington, DC.

The project aims to develop a compositional semantics for electrical networks which is suitable for system prediction, analysis and control. This work will extend existing methods for linear circuits (featured on this blog!) to include (i) probabilistic estimates of future consumption and (ii) top-down incentives for load management. We will model a multi-layered system of such “distributed energy resources” including loads and generators (e.g., solar array vs. power plant), different types of resource aggregation (e.g., apartment to apartment building), and across several time scales. We hope to demonstrate that such a system can balance local load and generation in order to minimize expected instability at higher levels of the electrical grid.

This post is available full-time (40 hours/5 days per week) for 12 months, and can begin as early as October 1st.

For more information on this position, please contact Dr. Eswaran Subrahmanian (sub@cmu.edu) or Dr. Spencer Breiner (spencer.breiner@nist.gov).


Complex Adaptive Systems (Part 5)

4 September, 2017

When we design a complex system, we often start with a rough outline and fill in details later, one piece at a time. And if the system is supposed to be adaptive, these details may need to be changed as the system is actually being used!

The use of operads should make this easier. One reason is that an operad typically has more than one algebra.

Remember from Part 3: an operad has operations, which are abstract ways of sticking things together. An algebra makes these operations concrete: it specifies some sets of actual things, and how the operations in the operad get implemented as actual ways to stick these things together.

So, an operad O can have one algebra in which things are described in a bare-bones, simplified way, and another algebra in which things are described in more detail. Indeed it will typically have many algebras, corresponding to many levels of detail, but let’s just think about two for a minute.

When we have a ‘less detailed’ algebra A and a ‘more detailed’ algebra A', they will typically be related by a map

f : A' \to A

which ‘forgets the extra details’. This map should be a ‘homomorphism’ of algebras, but I’ll postpone the definition of that concept.

What we often want to do, when designing a system, is not forget extra detail, but rather add extra detail to some rough specification. There is not always a systematic way to do this. If there is, then we may have a homomorphism

g : A \to A'

going back the other way. This is wonderful, because it lets us automate the process of filling in the details. But we can’t always count on being able to do this—especially not if we want an optimal or even acceptable result. So, often we may have to start with an element of A and search for elements of A' that are mapped to it by f : A' \to A.

Let me give some examples. I’ll take the operad that I described last time, and describe some of its algebras, and homomorphisms between these.

I’ll start with an algebra that has very little detail: its elements will be simple graphs. As the name suggests, these are among the simplest possible ways of thinking about networks. They just look like this:

Then I’ll give an algebra with more detail, where the vertices of our simple graphs are points in the plane. There’s nothing special about the plane: we could replace the plane by any other set, and get another algebra of our operad. For example, we could use the set of points on the surface of the Caribbean Sea, the blue stuff in the rectangle here:

That’s what we might use in a search and rescue operation. The points could represent boats, and the edges could represent communication channels.

Then I’ll give an algebra with even more detail, where two points connected by an edge can’t be too far apart. This would be good for range-limited communication channels.

Then I’ll give an algebra with still more detail, where the locations of the points are functions of time. Now our boats are moving around!

Okay, here we go.

The operad from last time was called O_G. Here G is the network model of simple graphs. The best way to picture an operation of O_G is as a way of sticking together a list of simple graphs to get a new simple graph.

For example, an operation

f \in O_G(3,4,2;9)

is a way of sticking together a simple graph with 3 vertices, one with 4 vertices and one with 2 vertices to get one with 9 vertices. Here’s a picture of such an operation:

Note that this operation is itself a simple graph. An operation in O_G(3,4,2;9) is just a simple graph with 9 vertices, where we have labelled the vertices from 1 to 9.

This operad comes with a very obvious algebra A where the operations do just what I suggested. In this algebra, an element of A(t) is a simple graph with t vertices, listed in order. Here t is any natural number, which I’m calling ‘t’ for ‘type’.

We also need to say how the operations in O_G act on these sets A(t). If we take simple graphs in A(3), A(4), and A(2):

we can use our operation f to stick them together and get this:

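Here’s a minimal Python sketch of how such an operation acts, purely as an illustration on my part: the representation of a simple graph as a set of frozensets, and all the function names, are ad hoc choices for this post, not taken from any actual implementation.

```python
# A simple graph on n vertices: a set of frozensets {i, j} with 1 <= i < j <= n.

def shift_edges(edges, offset):
    """Relabel a graph's edges by adding a fixed offset to each vertex."""
    return {frozenset({i + offset, j + offset}) for i, j in edges}

def apply_operation(op_edges, input_graphs):
    """Act by an operation of O_G: take the disjoint union of the input graphs
    (relabelling their vertices consecutively), then overlay the operation's
    own edges."""
    result, offset = set(), 0
    for n_vertices, edges in input_graphs:
        result |= shift_edges(edges, offset)
        offset += n_vertices
    return result | set(op_edges)

# An operation in O_G(3,4,2;9): a simple graph on the vertices 1,...,9.
f = {frozenset({1, 4}), frozenset({4, 8}), frozenset({3, 9})}

# Elements of A(3), A(4) and A(2): (number of vertices, set of edges).
g1 = (3, {frozenset({1, 2})})
g2 = (4, {frozenset({1, 3}), frozenset({2, 4})})
g3 = (2, {frozenset({1, 2})})

print(apply_operation(f, [g1, g2, g3]))   # a simple graph on 9 vertices
```
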
But we can also make up a more interesting algebra of O_G. Let’s call this algebra A'. We’ll let an element of A'(t) be a simple graph with t vertices, listed in order, which are points in the plane.

My previous pictures can be reused to show how operations in O_G act on this new algebra A'. The only difference is that now we treat the vertices literally as points in the plane! Before you should have been imagining them as abstract points not living anywhere; now they have locations.

Now let’s make up an even more detailed algebra A''.

What if our communication channels are ‘range-limited’? For example, what if two boats can’t communicate if they are more than 100 kilometers apart?

Then we can let an element of A''(t) be a simple graph with t vertices in the plane such that no two vertices connected by an edge have distance > 100.

Now the operations of our operad O_G act in a more interesting way. If we have an operation, and we apply it to elements of our algebra, it ‘tries’ to put in new edges as it did before, but it ‘fails’ for any edge that would have length > 100. In other words, we just leave out any edges that would be too long.

It took me a while to figure this out. At first I thought the result of the operation would need to be undefined whenever we tried to create an edge that violated the length constraint. But in fact it acts in a perfectly well-defined way: we just don’t put in edges that would be too long!
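
In the same ad-hoc Python style as before, here’s a sketch of this modified action on A''. Again this is just my own illustration: the constant 100 and the function names are placeholders, not anything from the forthcoming paper.

```python
from math import dist

MAX_RANGE = 100.0   # the assumed range limit

def apply_operation_ranged(op_edges, input_graphs):
    """Act on A''(t).  Each input graph is (positions, edges): positions is a
    list of (x, y) points in the plane, edges a set of frozensets of 1-based
    vertex indices.  The operation's own edges are added only when the two
    endpoints are close enough; edges that would be too long are left out."""
    positions, edges, offset = [], set(), 0
    for pts, es in input_graphs:
        positions.extend(pts)
        edges |= {frozenset({i + offset, j + offset}) for i, j in es}
        offset += len(pts)
    for i, j in op_edges:
        if dist(positions[i - 1], positions[j - 1]) <= MAX_RANGE:
            edges.add(frozenset({i, j}))
    return positions, edges
```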

This is good. This means that if you tell two boats to set up a communication channel, and they’re too far apart, you don’t get the ‘blue screen of death’: your setup doesn’t crash and burn. Instead, you just get a polite warning—‘communication channel not established’—and you can proceed.

The nontrivial part is to check that if we do this, we really get an algebra of our operad! There are some laws that must hold in any algebra. But since I haven’t yet described those laws, I won’t check them here. You’ll have to wait for our paper to come out.

Let’s do one more algebra today. For lack of creativity I’ll call it A'''. Now an element of A'''(t) is a time-dependent graph in the plane with t vertices, listed in order. Namely, the positions of the vertices depend on time, and the presence or absence of an edge between two vertices can also depend on time. Furthermore, let’s impose the requirement that any two vertices can only be connected by an edge at times when their distance is ≤ 100.

When I say ‘functions of time’ here, what do I mean by ‘time’? We can model time by some interval [T_1, T_2]. But if you don’t like that, you can change it.

This algebra A''' works more or less like A''. The operations of O_G try to create edges, but these edges only ‘take’ at times when the vertices they connect have distance ≤ 100.
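
Continuing the illustrative sketches from above, here is one rough way to represent an element of A''' in code (again my own ad-hoc illustration): vertex positions are functions of time, the operations record which edges they have tried to create, and an edge is present only at times when its endpoints are close enough. Evaluating at a fixed time, as in the last two lines, is the kind of thing the homomorphism f_T below will do.

```python
from math import dist

def timed_graph(position_funcs, tried_edges):
    """A rough stand-in for an element of A'''(t): position_funcs is a list of
    functions t -> (x, y), and tried_edges is the set of edges the operations
    have tried to create.  Calling the result at a time t keeps only the edges
    whose endpoints are within distance 100 at that time."""
    def at_time(t):
        pts = [p(t) for p in position_funcs]
        edges = {e for e in tried_edges
                 if dist(pts[min(e) - 1], pts[max(e) - 1]) <= 100.0}
        return pts, edges
    return at_time

# Two boats: one anchored at the origin, one drifting east at 10 units per hour.
boats = timed_graph([lambda t: (0.0, 0.0), lambda t: (10.0 * t, 0.0)],
                    {frozenset({1, 2})})
print(boats(5.0))    # 50 units apart: the edge is present
print(boats(15.0))   # 150 units apart: the edge is absent
```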

There’s something here you might not like. Our operations can only try to create edges ‘for all times’… and succeed at times when the vertices are close enough. We can’t try to set up a communication channel for a limited amount of time.

But fear not: this is just a limitation in our chosen network model, ‘simple graphs’. With a fancier network model, we’d get a fancier operad, with fancier operations. Right now I’m trying to keep the operad simple (pun not intended), and show you a variety of different algebras.

As you might expect, we have algebra homomorphisms going from more detailed algebras to less detailed ones:

f_T : A''' \to A'', \quad h : A' \to A

The homomorphism h takes a simple graph in the plane and forgets the locations of its vertices. The homomorphism f_T depends on a choice of time T \in [T_1, T_2]. For any time T, it takes a time-dependent graph in the plane and evaluates it at that time, getting a graph in the plane (which obeys the distance constraints, since the time-dependent graph obeyed those constraints at all times).

We do not have a homomorphism g: A'' \to A' that takes a simple graph in the plane obeying our distance constraints and forgets about those constraints. There’s a map g sending elements of A'' to elements of A' in this way. But it’s not an algebra homomorphism! The problem is that first trying to connect two graphs with an edge and then applying g may give a different result than first applying g and then connecting two graphs with an edge.
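
To see the failure concretely, here’s a tiny worked example in the same illustrative style, reusing apply_operation_ranged from the sketch above and taking g to be the map that simply reinterprets an element of A'' as an element of A':

```python
def apply_operation_plane(op_edges, input_graphs):
    """The unconstrained action on A': overlay all the edges, no range check."""
    positions, edges, offset = [], set(), 0
    for pts, es in input_graphs:
        positions.extend(pts)
        edges |= {frozenset({i + offset, j + offset}) for i, j in es}
        offset += len(pts)
    return positions, edges | set(op_edges)

connect = {frozenset({1, 2})}    # an operation that tries to join vertex 1 to vertex 2
boat1 = ([(0.0, 0.0)], set())    # a single boat at the origin, no edges
boat2 = ([(300.0, 0.0)], set())  # a single boat 300 units away

# Act in A'' and then forget the constraint: the long edge was silently dropped.
print(apply_operation_ranged(connect, [boat1, boat2]))   # edge set is empty

# Forget the constraint first and then act in A': the edge gets put in.
print(apply_operation_plane(connect, [boat1, boat2]))    # edge set is {frozenset({1, 2})}
```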

In short: a single operad has many algebras, which we can use to describe our desired system at different levels of detail. Algebra homomorphisms relate these different levels of detail.

Next time I’ll look at some more interesting algebras of the same operad. For example, there’s one that describes a system of interacting mobile agents, which move around in some specific way, determined by their location and the locations of the agents they’re communicating with.

Even this is just the tip of the iceberg—that is, still a rather low level of detail. We can also introduce stochasticity (that is, randomness). And to go even further, we could switch to a more sophisticated operad, based on a fancier ‘network model’.

But not today.


Voyager 1

3 September, 2017


Launched 40 years ago, the Voyagers are our longest-lived and most distant spacecraft. Voyager 2 has reached the edge of the heliosphere, the realm where the solar wind and the Sun’s magnetic field live. Voyager 1 has already left the heliosphere and entered interstellar space! A new movie, The Farthest, celebrates the Voyagers’ journey toward the stars:

What has Voyager 1 been doing lately? I’ll skip its amazing exploration of the Solar System….

Leaving the realm of planets

On February 14, 1990, Voyager 1 took the first ever ‘family portrait’ of the Solar System as seen from outside. This includes the famous image of planet Earth known as the Pale Blue Dot:

Soon afterwards, its cameras were deactivated to conserve power and computer resources. The camera software has been removed from the spacecraft, so it would now be hard to get it working again. And here on Earth, the software for reading these images is no longer available!

On February 17, 1998, Voyager 1 reached a distance of 69 AU from the Sun — 69 times farther from the Sun than we are. At that moment it overtook Pioneer 10 as the most distant spacecraft from Earth! Traveling at about 17 kilometers per second, it was moving away from the Sun faster than any other spacecraft. It still is.

That’s 520 million kilometers per year — hard to comprehend. I find it easier to think about it this way: it’s 3.6 AU per year. That’s really fast… but not if you’re trying to reach other stars. It will take 20,000 years just to go one light-year.
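
If you want to check the conversion yourself, here’s a quick back-of-the-envelope calculation with rounded constants (my own arithmetic, nothing official): it gives about 3.6 AU per year, and on the order of 18,000 years to cover a light-year, in line with the rough figures above.

```python
SECONDS_PER_YEAR  = 3.156e7    # about 365.25 days
KM_PER_AU         = 1.496e8    # kilometers in one astronomical unit
AU_PER_LIGHT_YEAR = 63241.0    # astronomical units in one light-year

speed_km_s  = 17.0
au_per_year = speed_km_s * SECONDS_PER_YEAR / KM_PER_AU    # ~3.6 AU per year
years_per_light_year = AU_PER_LIGHT_YEAR / au_per_year     # ~1.8e4 years

print(round(au_per_year, 1), round(years_per_light_year))
```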

Termination shock

As Voyager 1 headed for interstellar space, its instruments continued to study the Solar System. Scientists at the Johns Hopkins University said that Voyager 1 entered the termination shock in February 2003. This is a bit like a ‘sonic boom’, but in reverse: it’s the place where the solar wind drops to below the speed of sound. Yes, sound can move through the solar wind, but only sound with extremely long wavelengths — nothing you can hear.

Some other scientists expressed doubt about this, and the issue wasn’t resolved until other data became available, since Voyager 1’s solar-wind detector had stopped working in 1990. This failure meant that termination shock detection had to be inferred from the other instruments on board. We now think that Voyager 1 reached the termination shock on December 15, 2004 — at a distance of 94 AU from the Sun.

Heliosheath

In May 2005, a NASA press release said that Voyager 1 had reached the heliosheath. This is a bubble of stagnant solar wind, moving below the speed of sound. It’s outside the termination shock but inside the heliopause, where the interstellar wind crashes against the solar wind.

On March 31, 2006, amateur radio operators in Germany tracked and received radio waves from Voyager 1 using a 20-meter dish. They checked their data against data from the Deep Space Network station in Madrid, Spain and yes — it matched. This was the first amateur tracking of Voyager 1!

On December 13, 2010, the Low Energy Charged Particle device aboard Voyager 1 showed that it had passed the point where the solar wind flows away from the Sun. At this point the solar wind seems to turn sideways, due to the push of the interstellar wind. On this date, the spacecraft was approximately 17.3 billion kilometers from the Sun, or 116 AU.

In March 2011, Voyager 1 was commanded to change its orientation to measure the sideways motion of the solar wind. How? I don’t know. Its solar wind detector was broken.

But anyway, a test roll done in February had confirmed the spacecraft’s ability to maneuver and reorient itself. So, in March it rotated 70 degrees counterclockwise with respect to Earth to detect the solar wind. This was the first time the spacecraft had done any major maneuvering since the family portrait photograph of the planets was taken in 1990.

After the first roll the spacecraft had no problem in reorienting itself with Alpha Centauri, Voyager 1’s guide star, and it resumed sending transmissions back to Earth.

On December 1, 2011, it was announced that Voyager 1 had detected the first Lyman-alpha radiation originating from the Milky Way galaxy. Lyman-alpha radiation had previously been detected from other galaxies, but because of interference from the Sun, the radiation from the Milky Way was not detectable.

Puzzle: What the heck is Lyman-alpha radiation?

On December 5, 2011, Voyager 1 saw that the Solar System’s magnetic field had doubled in strength, basically because it was getting compressed by the pressure of the interstellar wind. Energetic particles originating in the Solar System declined by nearly half, while the detection of high-energy electrons from outside increased 100-fold.

Heliopause and beyond

In June 2012, NASA announced that the probe was detecting even more charged particles from interstellar space. This meant that it was getting close to the heliopause: the place where the gas of interstellar space crashes into the solar wind.

Voyager 1 actually crossed the heliopause in August 2012, although it took another year to confirm this. It was 121 AU from the Sun.



What’s next?

In about 300 years Voyager 1 will reach the Oort cloud, the region of frozen comets. It will take 30,000 years to pass through the Oort cloud. Though it is not heading towards any particular star, in about 40,000 years it will pass within 1.6 light-years of the star Gliese 445.

NASA says:

The Voyagers are destined — perhaps eternally — to wander the Milky Way.

That’s an exaggeration. The Milky Way will not last forever. In just 3.85 billion years, before our Sun becomes a red giant, the Andromeda galaxy will collide with the Milky Way. In just 100 trillion years, all the stars in the Milky Way will burn out. And in just 10 quintillion years, the Milky Way will have disintegrated, with all the dead stars either falling into black holes or being flung off into intergalactic space.

But still: the Voyagers’ journeys are just beginning. Let’s wish them a happy 40th birthday!

My story here is adapted from this Wikipedia article:

• Wikipedia, Voyager 1.

You can download PDFs of posters commemorating the Voyagers here:

• NASA, NASA and iconic museum honor Voyager spacecraft 40th anniversary, August 30, 2017.