Complex Adaptive System Design (Part 2)

Yesterday Blake Pollard and I drove to Metron’s branch in San Diego. For the first time, I met four of the main project participants: John Foley (math), Thy Tran (programming), Tom Mifflin and Chris Boner (two higher-ups involved in the project). Jeff Monroe and Tiffany Change gave us a briefing on Metron’s ExAMS software. This lets you design complex systems and view them in various ways.

The most fundamental view is the ‘activity trace’, which consists of a bunch of parallel rows, one for each ‘performer’. Each row has a bunch of boxes which represent ‘activities’ that the performer can do. Two boxes are connected by a wire when one box’s activity causes another to occur. In general, time goes from left to right. Thus, if B can only occur after A, the box for B is drawn to the right of the box for A.

The wires can also merge via logic gates. For example, suppose activity D occurs whenever A and B but not C have occurred. Then wires coming out of the A, B, and C boxes merge in a logic gate and go into the D box. However, these gates are a bit more general than your ordinary Boolean logic gates. They may also involve ‘delays’, e.g. we can say that A occurs 10 minutes after B occurs.

I would like to understand the mathematics of just these logic gates, for starters. Ignoring delays for a minute (get the pun?), they seem to give a generalization of Petri nets. In a Petri net we only get to use the logical connective ‘and’. In other words, an activity can occur when all of some other activities have occurred. People have considered various generalizations of Petri nets, and I think some of them allow more general logical connectives, but I’m forgetting where I saw this done. Do you know?
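Here is a minimal sketch, in Python, of how I currently picture these gates. This is purely my own guess at an encoding, not Metron’s actual data structures: an ordinary Petri net transition fires when all of its input places hold tokens, while a gate of the sort described above may use an arbitrary Boolean combination of occurrences, plus a delay.

def petri_transition_enabled(tokens, input_places):
    # Ordinary Petri net rule: every input place must hold a token ('and' only).
    return all(tokens[p] > 0 for p in input_places)

def gate_start_time(occurred, guard, delay=0.0):
    # Generalized gate: 'occurred' maps each activity to its completion time,
    # or None if it has not occurred.  The guard is an arbitrary Boolean
    # function of which activities have occurred; if it holds, the gated
    # activity starts 'delay' time units after the latest occurrence.
    happened = {a: (t is not None) for a, t in occurred.items()}
    if not guard(happened):
        return None
    latest = max(t for t in occurred.values() if t is not None)
    return latest + delay

# Example: D starts 10 minutes after A and B have both occurred, provided C has not.
start_D = gate_start_time(
    {'A': 5.0, 'B': 12.0, 'C': None},
    guard=lambda h: h['A'] and h['B'] and not h['C'],
    delay=10.0)   # start_D == 22.0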

In the full-fledged activity traces, the ‘activity’ boxes also compute functions, whose values flow along the wires and serve as inputs to other boxes. That is, when an activity occurs, it produces an output, which depends on the inputs entering the box along input wires. The output then appears on the wires coming out of that box.

I forget if each activity box can have multiple inputs and multiple outputs, but that’s certainly a natural thing.
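To make that concrete, here is how I would naively render an activity box as a function in Python; the names and numbers are made up for illustration, not taken from ExAMS. Values on the input wires go in, the box computes, and the results appear on its output wires and feed the next box.

def detect(sensor_a, sensor_b):                 # two input wires
    label = 'contact' if sensor_a + sensor_b > 1.0 else 'clear'
    confidence = abs(sensor_a - sensor_b)
    return label, confidence                    # two output wires

def report(label, confidence):                  # consumes detect's outputs
    return f'{label} (confidence {confidence:.2f})'

message = report(*detect(0.7, 0.6))             # wiring one box's outputs into the next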

The fun part is that one can zoom in on any activity trace, seeing more fine-grained descriptions of the activities. In this more fine-grained description each box turns into a number of boxes connected by wires. And perhaps each wire becomes a number of parallel wires? That would be mathematically natural.
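One naive way to record this zooming, again just a sketch and not how ExAMS actually stores things: each box either is primitive or expands into its own little diagram of boxes and wires.

# A coarse diagram whose 'plan' box expands into a finer sub-diagram.
search_and_rescue = {
    'boxes': ['detect', 'plan', 'rescue'],
    'wires': [('detect', 'plan'), ('plan', 'rescue')],
    'expansions': {
        'plan': {   # zooming in on the 'plan' box
            'boxes': ['estimate_drift', 'allocate_assets'],
            'wires': [('estimate_drift', 'allocate_assets')],
        },
    },
}

def zoom(diagram, box):
    # Replace one box by its fine-grained sub-diagram (ignoring, for now,
    # how the wires attached to it might split into parallel wires).
    return diagram['expansions'][box]

zoom(search_and_rescue, 'plan')['boxes']   # ['estimate_drift', 'allocate_assets']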

Activity traces give the so-called ‘logical’ description of the complex system being described. There is also a much more complicated ‘physical’ description, which specifies the exact mechanical functioning of all the parts. These parts are described using ‘plugins’, which need to be carefully defined ahead of time—but can then simply be used when assembling a complex system.

Our little team is supposed to be designing our own complex systems using operads, but we want to take advantage of the fact that Metron already has this working system, ExAMS. Thus, one thing I’d like to do is understand ExAMS in terms of operads and figure out how to do something exciting and new using this understanding. I was very happy when Tom Mifflin embraced this goal.

Unfortunately there’s no manual for ExAMS: the US government was willing to pay for the creation of this system, but not willing to pay for documentation. Luckily it seems fairly simple, at least the part that I care about. (There are a lot of other views derived from the activity trace, but I don’t need to worry about these.) Also, ExAMS uses some DoDAF standards which I can read about. Furthermore, in some ways it resembles UML and SysML, or more precisely, certain parts of these languages.

In particular, the ‘activity diagrams’ in UML are a lot like the activity traces in ExAMS. There’s an activity diagram at the top of this page, and another below, in which time proceeds down the page.

So, I plan to put some time into understanding the underlying math of these diagrams! If you know people who have studied them using ideas from category theory, please tell me.

11 Responses to Complex Adaptive System Design (Part 2)

  1. This sounds similar to things Spencer Breiner at NIST is working on. See, for example, his NIST webpage and this Applied Category Theory page which links to slides from a talk he gave. In particular, I recall that he embeds logical relationships in the diagrams, which can encode statements such as “event a takes place at least 10 minutes after event b”.

    • John Baez says:

      Thanks! I think three different people have mentioned Spencer Breiner to me in the last couple weeks. I don’t know his work; it sounds like it’s past time to learn about it.

      I’m curious about the idea of blending logic and time in statements of the sort you mentioned. At first I thought we were dealing with a topos of sheaves over the real line—this would be a way to study ‘time-dependent sets’ and ‘time-dependent truth’. Of course one might not want the full apparatus of set theory in one’s logic, so a topos could be overkill. But more importantly, it’s possible that one wants to restrict to statements involving relative rather than absolute times: e.g., “event a takes place at least 10 minutes after event b” but not “event a takes place at least 2 days after January 3, 2014”. The latter could be important for some applications, but there might be a large class of systems where only the former show up. I’ll have to look at various people’s software, or theories of software, to see the common attitudes about this.
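      Here is a first guess at how to make that distinction precise. A constraint on the times of events is ‘relative’ if it is invariant under translating all times by the same amount: “a takes place at least 10 minutes after b” says $latex t_a \ge t_b + 10 $, and this still holds if we replace $latex t_a $ and $latex t_b $ by $latex t_a + s $ and $latex t_b + s $. A constraint mentioning January 3, 2014 pins down a particular time $latex T_0 $ and breaks this invariance. So the class of systems I have in mind would be those whose logic is translation-invariant in this sense.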

      • @whut says:

        These are of course not new ideas, and computer scientists have long been experimenting with how to represent temporal logic in terms of something that is executable. All of robotics and digital logic design has this goal.

        The one language that has time integrated into its syntax is Ada for software (and VHDL amongst others for hardware design). Ada in particular has two built-in expressions for delaying execution, one for absolute timing and one for relative timing.

        A delay_statement is used to block further execution until a specified expiration time is reached. The expiration time can be specified either as a particular point in time (in a delay_until_statement), or in seconds from the current time (in a delay_relative_statement). The language-defined package Calendar provides definitions for a type Time and associated operations, including a function Clock that returns the current time.

        BNF syntax:

        delay_statement ::= delay_until_statement | delay_relative_statement

        delay_until_statement ::= delay until delay_expression;

        delay_relative_statement ::= delay delay_expression;

        This is not really interesting until the concept of threads is introduced, whereby Petri Net control flow can then be modeled. The foundational reference for the theory is by Hoare:

        C. A. R. Hoare, Communicating Sequential Processes, Prentice-Hall, Englewood Cliffs, 1985.

        Any Petri Net diagram can be transcribed into an Ada program using the built-in syntax.

        The one caveat in this is that Ada is meant for writing real-time software. For the timer, it uses what is known as the “wall clock” which ticks away in actual time. But there is also a way to replace the wall clock with a simulated clock and thus use it in a way that is a pure executable architecture.
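        To sketch that last point in executable form (here in Python rather than Ada, simply because it is shorter; the names are my own), a delay against a simulated clock just schedules a wake-up in virtual time, so the whole model runs as fast as the machine allows:

        import heapq

        class SimClock:
            # A toy simulated clock: delays schedule wake-ups in virtual time,
            # and run() pops events in time order instead of waiting in real time.
            def __init__(self):
                self.now = 0.0
                self._queue = []
                self._order = 0          # tie-breaker so actions are never compared
            def delay(self, seconds, action):            # relative delay
                self.delay_until(self.now + seconds, action)
            def delay_until(self, wake_time, action):    # absolute delay
                heapq.heappush(self._queue, (wake_time, self._order, action))
                self._order += 1
            def run(self):
                while self._queue:
                    self.now, _, action = heapq.heappop(self._queue)
                    action()

        clock = SimClock()
        clock.delay(600.0,  lambda: print('B occurs at t =', clock.now))
        clock.delay(1200.0, lambda: print('A occurs at t =', clock.now))
        clock.run()   # both events fire immediately, in virtual-time order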

        This is a low-level nuts-and-bolts simulation concept that has been around for a while. And it’s more building-block than the ExAMS software from what I can tell. Look into VHDL and your head will spin in terms of what can be constructed. A VHDL model of a logic design is by definition an “executable architecture”, and only when it is synthesized onto a chip do you get to see it operate in the real world. All of the fortune in Silicon Valley is built on top of languages such as VHDL, Verilog, and others classified as proprietary intellectual property.

        • John Baez says:

          Thanks for the quick intro! I’ll look into this stuff. I’m getting ready to teach now, but I’ll have a lot more to say later.

          What does this mean, exactly?

          And it’s more building-block than the ExAMS software from what I can tell.

        • @whut says:

          “What does this mean, exactly?”

          Language designers provide the necessary primitives to be able to create all of the known synchronization constructs. I consider those the building blocks. Examples include guards, time-outs, semaphores, entries, mutexes, and threads. The hardware design languages introduce signals, which are even more primitive.

          I haven’t seen the ExAMS tool but I doubt that they have the fine control that I would want. I could be wrong though.

  2. arch1 says:

    John, re: the hierarchical aspects, there may be quite a lot on structured/formal approaches to this in VLSI design / VLSI CAD / chip design / whatever-it’s-called-these-days literature.

    I base this on distant memories of personal-computer-based VLSI & electronic CAD tools which already 30+ years ago had rich functionality along the lines of your “zooming in” paragraph (e.g. n-level hierarchical design support, global and selective multilevel push/edit/pop, and yes, buses exploding into wires).

  3. I’m reminded of my interpretation of PERT project scheduling in terms of enriched categories and functors, where the homs between items are the minimum elapsed time.

  4. Eugene says:

    There are papers linking temporal logic and category theory. For example, “Towards a Common Categorical Semantics for Linear-Time Temporal Logic and Functional Reactive Programming” by Wolfgang Jeltsch (a copy is here). Jeltsch has more papers along these lines.

  5. domenico says:

    Is there an analogy between Complex Adaptive System design and music scores?
    Each musician performs an ordered musical sequence in time, that is, an established procedure (boxes for presses, pinches, musical chords, etc.), so that it could be possible to create new music in a more precise way (wires for ties), like a numerical function.
    Controlling an orchestra does not seem different from controlling a complex system using complex commands.

  6. John Baez says:

    There are also interesting comments on G+. A number express skepticism regarding the use of UML (Unified Modeling Language) for programming. This makes me wonder 1) how does Metron’s ExAMS software differ from UML, and 2) how does simulating networks of boats, planes and satellites differ from other sorts of programming?

    I have some ideas on these questions, but anyway, here is part of the conversation on G+:

    Carsten Führmann wrote:

    I must join the ranks of the UML sceptics here. Actually, my scepticism covers a whole range of diagrammatic techniques in software development. As you know, John, I once dabbled in theories of diagrams (for logics), and had I stayed in academia, I’d probably have pursued this diagrammatic approach to logics and programming languages, premonoidal categories and such. But then I changed careers and became a software developer. And I found to my own consternation that diagrammatic approaches to software, UML and others, are not beneficial to a software engineer’s productivity. I thought a lot about the reasons. And I think I figured them out:

    Firstly, software systems quickly get so large that visualisations just look like the wires on a super-complex mainboard, or processor die. By contrast, a programmer’s mind seems to work locally: they put their attention inside one diagram node (= function/class/module) and must forget about the rest. Then they take it from there and shift their focus to neighboring nodes. Navigation often takes place via something like project-wide full-text search, but never by eye-balling an overview graph. One almost never “zooms out”, except, in the role of a software architect, to such a birds-eye perspective that the presentation (graphical vs. text) isn’t all that important. The understanding of the whole often works via verbalizing pervasive concepts rather than visualization.

    Secondly, I have yet to find a graphical language where the tool support for local change, called “refactoring” by programmers, is anywhere near as good as that for textual representations of code. I use semi-automatic refactoring tools daily that seem nearly impossible to port to diagrams.

    Then again, graphs do seem to be crucial in software engineering, but typically in algorithms executed by machines: intermediate output of a compiler represented as a graph, a graph of the heap used by a garbage collector, and so on.

    I am super-careful though to not overextend my conclusions. Just because software visualization doesn’t help me and my colleagues doesn’t mean it doesn’t help others. And there are of course areas outside of software engineering, like electronics, control theory, biology, and physics, where the situation may be very different. And I know of course that you have tons of great material to show that.

    Matt McIrvin wrote:

    I remember obsessively graphing the modular structure of the software I was working on when I was first starting out–but, in hindsight, to a large extent it was just because I didn’t have anything like a modern IDE, and the screens on which I did my text editing were tiny. My way of getting a slightly bigger-picture view of the code was to refer to massive paper printouts, and the pictures were a way of getting my mental model of those printouts under control.

    Carsten wrote:

    I’ve had a vaguely similar experience: I was trying to visualize the architecture of my company’s software. I have a very modern tool for this, which produces a UML-like presentation, and even lays it out automatically. The resulting graph, when printed at human-readable magnification, has maybe the size of a football field and is a mess of edges crossing each other. I then applied simplifications and different levels of granularity and zooming. But the handling of the mess is so difficult, it’s much faster to just look at the code.

    John Baez wrote:

    Carsten Führmann wrote:

    Firstly, software systems quickly get so large that visualisations just look like the wires on a super-complex mainboard, or processor die.

    Could the reason be that the code is not organized in a hierarchical way that lets you see and work with useful “coarse-grained views” – views in which the millions of boxes and wires are grouped into fewer, larger units?

    Metron’s ExAMS software lets you simulate complex search-and-rescue operations involving many platforms—ships, aircraft, and satellites—communicating with each other and moving around. If you want, you can see down into fine-grained details of each electronic device on each platform. But you usually don’t want to. You certainly never want to see all that detail for every device on every platform all at once. What would be the point?

    It would be very interesting if ExAMS were better than SysML or UML. Unfortunately I don’t have technical documentation for ExAMS, so I’m starting by reading about those other systems.

    It would also be interesting if graphical methods are inherently better for the applications that Metron is concerned with—simulating operations involving ships, planes and satellites—than for what you were doing—programming.

    Carsten wrote:

    Okay, let me reveal my daily work in its full mundaneness :) My company makes software for business, a rather big and versatile software that’s seen thousands of “man-years”. It’s written on the basis of Microsoft’s .Net framework, but it might as well be Java or something like that. At a technical level, the coarsest granularity is the “unit of deployment”, that is, an “assembly” file that may or may not be deployed to the customer. Each assembly contains some “namespaces”. Each namespace contains lots of “classes”. Each class lots of “methods”.

    We have maybe a thousand assemblies. They have a dependency hierarchy, in that stuff inside assembly A may need stuff inside assembly B. There are actually different types of dependency edges. The most important ones form a directed, acyclic graph. Other kinds of edges form graphs that are also directed, but need not be acyclic. Then we can zoom inside a typical assembly. There will only be a small number of namespaces per assembly; let’s ignore those.

    Each assembly has a few hundred classes and interfaces on average. These classes and interfaces again have lots of dependencies among each other, and they have fewer, but still many, dependencies to classes and interfaces in other assemblies.

    Sometimes, but rarely, one looks at the entirety of the software, at assembly level. Then the UML-style visualization looks a bit like the map of a major city, with bundles of edges flowing between assemblies. If one then represents each bundle of edges by a single edge, one obtains a graph that’s presentable—if you have a monitor the size of a wall and very sharp eyes. But funnily enough, at that level, the visualization is hardly more informative or useful than the structure of named, and sometimes nested, file-system folders containing the assemblies.

    In daily programming, one typically focusses on changing the contents of a single assembly or a bunch of assemblies. So you look inside it, at the classes and interfaces. Now you can visualize those. Again, you get something like the map of a major city, except the road planners were on Speed and LSD combined. It’s actually much easier to just look at the (possibly nested) collection of named files, one per class, and then read them in the text editor. When one needs to follow a dependency, the IDE (integrated development environment) provides ways to click a piece of text and teleport you to the dependency.

    So, in practice, our software engineers rarely look at (architectural UML) visualizations.

    It’s not hard to imagine, though, that in areas other than software engineering, the relative sizes of the entities one deals with, and the abstraction layers, are better suited to visualization.

    I think the following paragraph in your previous comment is crucial: “It would also be interesting if graphical methods are inherently better for the applications that Metron is concerned with”. Indeed, it would be interesting! I have no idea, it might well be great! So we have this interesting question: What exactly is it that makes visualizations so useful in some domains, and so unproductive in others?
