Yesterday Blake Pollard and I drove to Metron’s branch in San Diego. For the first time, I met four of the main project participants: John Foley (math), Thy Tran (programming), Tom Mifflin and Chris Boner (two higher-ups involved in the project). Jeff Monroe and Tiffany Change gave us a briefing on Metron’s ExAMS software. This lets you design complex systems and view them in various ways.
The most fundamental view is the ‘activity trace’, which consists of a bunch of parallel rows, one for each ‘performer’. Each row has a bunch of boxes which represent ‘activities’ that the performer can do. Two boxes are connected by a wire when one box’s activity causes another to occur. In general, time goes from left to right. Thus, if B can only occur after A, the box for B is drawn to the right of the box for A.
The wires can also merge via logic gates. For example, suppose activity D occurs whenever A and B but not C have occurred. Then wires coming out of the A, B, and C boxes merge in a logic gate and go into the D box. However, these gates are a bit more general than ordinary Boolean logic gates. They may also involve ‘delays’, e.g. we can say that A occurs 10 minutes after B occurs.
I would like to understand the mathematics of just these logic gates, for starters. Ignoring delays for a minute (get the pun?), they seem to be giving a generalization of Petri nets. In a Petri net we only get to use the logical connective ‘and’. In other words, an activity can occur when all of some other activities have occurred. People have considered various generalizations of Petri nets, and I think some of them allow more general logical connectives, but I’m forgetting where I saw this done. Do you know?
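To make the generalization concrete, here is a minimal sketch (my own, not Metron’s implementation) of such a gate as a Boolean function of which upstream activities have occurred. An ordinary Petri-net transition only uses ‘and’; the richer gates allow negation and other connectives. The activity names follow the example above; the extra activity E is hypothetical.

```python
# Hypothetical sketch: an 'activity gate' is a Boolean function of the
# set of activities that have already occurred.

def petri_transition(inputs):
    """Classic Petri-net rule: fire only when ALL input activities
    have occurred (the 'and' connective)."""
    return lambda done: set(inputs) <= done

def gate_D(done):
    """The example from the text: D occurs whenever A and B,
    but not C, have occurred."""
    return "A" in done and "B" in done and "C" not in done

fire_E = petri_transition(["A", "B", "C"])  # hypothetical activity E

print(gate_D({"A", "B"}))        # True: D may occur
print(gate_D({"A", "B", "C"}))   # False: C blocks D
print(fire_E({"A", "B", "C"}))   # True: all inputs present
```

The point is just that a Petri-net transition is the special case where the Boolean function is a conjunction of its inputs.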
In the full-fledged activity traces, the ‘activity’ boxes also compute functions, whose values flow along the wires and serve as inputs to other boxes. That is, when an activity occurs, it produces an output, which depends on the inputs entering the box along input wires. The output then appears on the wires coming out of that box.
I forget if each activity box can have multiple inputs and multiple outputs, but that’s certainly a natural thing.
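Here is a toy sketch of that picture: boxes compute functions, and outputs flow along wires into the input ports of later boxes. The box names, port names, and wiring are invented for illustration; each box here has multiple inputs and outputs as dicts keyed by port.

```python
# Toy dataflow sketch of a 'full-fledged' activity trace: each box is a
# function from a dict of input-port values to a dict of output-port values.

def run_trace(boxes, wires, sources):
    """boxes: list of (name, function), in causal (left-to-right) order.
    wires: (from_box, out_port) -> (to_box, in_port).
    sources: initial values keyed by (box, port)."""
    values = dict(sources)                      # (box, port) -> value
    for name, fn in boxes:
        inputs = {p: v for (b, p), v in values.items() if b == name}
        for port, out in fn(inputs).items():
            dest = wires.get((name, port))
            if dest:
                values[dest] = out              # value flows along the wire
            values[(name, port)] = out
    return values

# Two hypothetical boxes: 'scale' doubles its input; 'add' sums two inputs.
boxes = [
    ("scale", lambda ins: {"out": 2 * ins["x"]}),
    ("add",   lambda ins: {"out": ins["a"] + ins["b"]}),
]
wires = {("scale", "out"): ("add", "a")}
result = run_trace(boxes, wires, {("scale", "x"): 3, ("add", "b"): 4})
print(result[("add", "out")])   # 10
```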
The fun part is that one can zoom in on any activity trace, seeing more fine-grained descriptions of the activities. In this more fine-grained description each box turns into a number of boxes connected by wires. And perhaps each wire becomes a number of parallel wires? That would be mathematically natural.
Activity traces give the so-called ‘logical’ description of the complex system being described. There is also a much more complicated ‘physical’ description, saying the exact mechanical functioning of all the parts. These parts are described using ‘plugins’ which need to be carefully described ahead of time—but can then simply be used when assembling a complex system.
Our little team is supposed to be designing our own complex systems using operads, but we want to take advantage of the fact that Metron already has this working system, ExAMS. Thus, one thing I’d like to do is understand ExAMS in terms of operads and figure out how to do something exciting and new using this understanding. I was very happy when Tom Mifflin embraced this goal.
Unfortunately there’s no manual for ExAMS: the US government was willing to pay for the creation of this system, but not willing to pay for documentation. Luckily it seems fairly simple, at least the part that I care about. (There are a lot of other views derived from the activity trace, but I don’t need to worry about these.) Also, ExAMS uses some DoDAF standards which I can read about. Furthermore, in some ways it resembles UML and SysML, or more precisely, certain parts of these languages.
In particular, the ‘activity diagrams’ in UML are a lot like the activity traces in ExAMS. There’s an activity diagram at the top of this page, and another below, in which time proceeds down the page.
So, I plan to put some time into understanding the underlying math of these diagrams! If you know people who have studied them using ideas from category theory, please tell me.
The whole series of posts:
• Part 1. CASCADE: the Complex Adaptive System Composition and Design Environment.
• Part 2. Metron’s software for system design.
• Part 3. Operads: the basic idea.
• Part 4. Network operads: an easy example.
• Part 5. Algebras of network operads: some easy examples.
• Part 6. Network models.
• Part 7. Step-by-step compositional design and tasking using commitment networks.
• Part 8. Compositional tasking using category-valued network models.
• Part 9. Network models from Petri nets with catalysts.
• Part 10. Two papers reviewing the whole project.
This sounds similar to things Spencer Breiner at NIST is working on. See, for example, his NIST webpage and this Applied Category Theory page which links to slides from a talk he gave. In particular, I recall that he embeds logical relationships in the diagrams, which can encode statements such as “event a takes place at least 10 minutes after event b”.
Thanks! I think three different people have mentioned Spencer Breiner to me in the last couple weeks. I don’t know his work; it sounds like it’s past time to learn about it.
I’m curious about the idea of blending logic and time in statements of the sort you mentioned. At first I thought we were dealing with a topos of sheaves over the real line—this would be a way to study ‘time-dependent sets’ and ‘time-dependent truth’. Of course one might not want the full apparatus of set theory in one’s logic, so a topos could be overkill. But more importantly it’s possible that one wants to restrict to statements involving relative rather than absolute times: e.g., “event a takes place at least 10 minutes after event b” but not “event a takes place at least 2 days after January 3, 2014”. The latter could be important for some applications, but there might be a large class of systems where only the former show up. I’ll have to look at various people’s software, or theories of software, to see the common attitudes about this.
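The mathematical signature of a ‘relative’ statement is that it only mentions differences between event times, so its truth value is invariant under shifting every time by the same amount. A tiny sketch (event names and times made up):

```python
# Relative-time constraints mention only differences between event times,
# never absolute dates, so they are invariant under a global time shift.

def at_least_after(events, a, b, delta):
    """'event a takes place at least delta after event b'."""
    return events[a] - events[b] >= delta

events = {"b": 100.0, "a": 112.0}              # times in minutes (made up)
print(at_least_after(events, "a", "b", 10))    # True

# Shift all times by 1000 minutes: the relative statement is unchanged.
shifted = {k: t + 1000.0 for k, t in events.items()}
print(at_least_after(shifted, "a", "b", 10))   # True, same answer
```

An absolute statement like “a takes place after January 3, 2014” would compare `events[a]` to a fixed constant and so fail this invariance.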
These are of course not new ideas, and computer scientists have long been experimenting with how to represent temporal logic in terms of something that is executable. All of robotics and digital logic design has this goal.
The one language that has time integrated into its syntax is Ada for software (and VHDL amongst others for hardware design). Ada in particular has two built-in expressions for delaying execution, one for absolute timing and one for relative timing.
A delay_statement is used to block further execution until a specified expiration time is reached. The expiration time can be specified either as a particular point in time (in a delay_until_statement), or in seconds from the current time (in a delay_relative_statement). The language-defined package Calendar provides definitions for a type Time and associated operations, including a function Clock that returns the current time.
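For readers without Ada at hand, the two delay forms can be mimicked in Python. This is a rough analogy rather than Ada semantics; the function names are my own.

```python
# Python analogy (not Ada) of the two delay forms described above:
# a relative delay ('seconds from now') and an absolute delay
# ('until a particular point in time').

import time

def delay_relative(seconds):
    """Like Ada's  delay 0.1;  -- wait for a duration."""
    time.sleep(seconds)

def delay_until(expiration):
    """Like Ada's  delay until T;  -- wait until a specific clock time."""
    remaining = expiration - time.time()
    if remaining > 0:
        time.sleep(remaining)

start = time.time()
delay_relative(0.05)               # relative: 0.05 s from now
delay_until(start + 0.1)           # absolute: deadline 0.1 s after start
print(round(time.time() - start, 1))   # roughly 0.1
```

Note that `delay_until` is the form that composes well: two threads given the same expiration time wake at (approximately) the same moment, whereas stacked relative delays accumulate drift.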
This is not really interesting until the concept of threads is introduced, whereby Petri net control flow can then be modeled. The foundational reference for the theory is Hoare’s Communicating Sequential Processes.
Any Petri Net diagram can be transcribed into an Ada program using the built-in syntax.
The one caveat in this is that Ada is meant for writing real-time software. For the timer, it uses what is known as the “wall clock” which ticks away in actual time. But there is also a way to replace the wall clock with a simulated clock and thus use it in a way that is a pure executable architecture.
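The simulated-clock idea can be sketched in a few lines: a discrete-event scheduler advances virtual time instead of sleeping on the wall clock, so the same delay logic runs instantly as a pure executable architecture. This is my own minimal sketch, not any particular tool’s API; the 10-minutes-after-B example from earlier in the post is reused.

```python
# A minimal simulated clock: events are scheduled at virtual times and a
# scheduler loop advances 'now' from event to event, never sleeping.

import heapq

class SimClock:
    def __init__(self):
        self.now = 0.0
        self._queue = []            # (time, sequence, action)
        self._seq = 0               # tie-breaker so actions never compare

    def delay(self, dt, action):
        """Schedule an action dt virtual time units from now."""
        heapq.heappush(self._queue, (self.now + dt, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

log = []
clock = SimClock()
clock.delay(10, lambda: log.append(("B", clock.now)))
clock.delay(20, lambda: log.append(("A", clock.now)))  # A occurs 10 after B
clock.run()
print(log)   # [('B', 10.0), ('A', 20.0)]
```

Swapping the wall clock for something like this is exactly what turns a real-time program into a simulation of itself.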
This is a low-level nuts-and-bolts simulation concept that has been around for a while. And it’s more building-block than the ExAMS software, from what I can tell. Look into VHDL and your head will spin in terms of what can be constructed. A VHDL model of a logic design is by definition an “executable architecture”, and only when it is synthesized onto a chip do you get to see it operate in the real world. All of the fortune in Silicon Valley is built on top of languages such as VHDL, Verilog, and others classified as proprietary intellectual property.
Thanks for the quick intro! I’ll look into this stuff. I’m getting ready to teach now, but I’ll have a lot more to say later.
What does this mean, exactly?
“What does this mean, exactly?”
Language designers provide the necessary primitives to be able to create all of the known synchronization constructs. I consider those the building blocks. Examples include guards, time-outs, semaphores, entries, mutexes, threads, The hardware design languages introduce signals, which are even more primitive.
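As a concrete instance of these building blocks, Python’s `threading` module provides events (signal-like guards), locks (mutexes), and semaphores; a guard with a time-out can be built from an event. A small sketch:

```python
# Illustration of common synchronization primitives: an Event used as a
# guard with a time-out, and a Lock (mutex) protecting shared state.

import threading

ready = threading.Event()        # a signal-like guard
results = []
lock = threading.Lock()          # a mutex protecting the shared list

def worker():
    # Guard with time-out: block until signalled, or give up after 1 s.
    if ready.wait(timeout=1.0):
        with lock:
            results.append("ran")

t = threading.Thread(target=worker)
t.start()
ready.set()                      # raise the signal
t.join()
print(results)                   # ['ran']
```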
I haven’t seen the ExAMS tool but I doubt that they have the fine control that I would want. I could be wrong though.
John, re: the hierarchical aspects, there may be quite a lot on structured/formal approaches to this in VLSI design / VLSI CAD / chip design / whatever-it’s-called-these-days literature.
I base this on distant memories of personal computer based VLSI & electronic CAD tools which already 30+ years ago had rich functionality along the lines of your “zooming in” paragraph (e.g. n-level hierarchical design support, global and selective multilevel push/edit/pop, and yes buses exploding into wires).
I’m reminded of my interpretation of PERT project scheduling in terms of enriched categories and functors, where the homs between items are the minimum elapsed time.
https://golem.ph.utexas.edu/category/2013/03/project_planning_parallel_proc.html
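In that picture, hom(a, b) is the minimum time that must elapse between starting task a and starting task b, which is the longest path through the precedence graph; the composition rule hom(a, b) + hom(b, c) ≤ hom(a, c) is the enriched-category triangle inequality in (max, +) form. A small sketch with made-up tasks:

```python
# PERT as an enriched category, in miniature: the earliest start of a task
# is the max over its prerequisites p of (earliest start of p) + duration(p),
# i.e. the longest path through the precedence graph.

def min_elapsed(durations, deps, task):
    """Minimum time that must elapse before `task` can start."""
    return max((min_elapsed(durations, deps, p) + durations[p]
                for p in deps.get(task, [])), default=0)

durations = {"dig": 3, "pour": 2, "frame": 5}
deps = {"pour": ["dig"], "frame": ["pour"], "roof": ["frame"]}
print(min_elapsed(durations, deps, "roof"))   # 10 = 3 + 2 + 5
```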
There are papers linking temporal logic and category theory. For example, “Towards a Common Categorical Semantics for Linear-Time Temporal Logic and Functional Reactive Programming” by Wolfgang Jeltsch (a copy is here). Jeltsch has more papers along these lines.
Jeltsch cites a couple of papers by Jeffery on LTL and functional reactive programming (FRP). An older paper along these lines is Interaction Categories and the Foundations of Typed Concurrent Programming (Abramsky, Gay, Nagarajan 1996). There’s also ongoing research into session types and related type systems for concurrent systems with provable “deadlock freedom”.
Is there an analogy between Complex Adaptive System design and music scores?
Each musician performs an ordered musical sequence in time, that is, an established procedure (boxes for presses, pinches, chords, etc.), so it could be possible to create new music by a more precise method (wires for ties), like a numerical function.
Controlling an orchestra does not seem different from controlling a complex system using complex commands.
There are also interesting comments on G+. A number express skepticism regarding the use of UML (Unified Modeling Language) for programming. This makes me wonder 1) how does Metron’s ExAMS software differ from UML, and 2) how does simulating networks of boats, planes and satellites differ from other sorts of programming?
I have some ideas on these questions, but anyway, here is part of the conversation on G+:
Carsten Führmann wrote:
Matt McIrvin wrote:
Carsten wrote:
John Baez wrote:
Carsten wrote:
Okay, now I’m sorta finding this interesting! I’m not a software developer but I do have a bit of experience with PLCs, and Sauer-Danfoss has a special type of function block diagram editor called GUIDE (Graphical User Interface Design Environment?) which is considered a high-level language by IEEE, ISO, and IEC:
http://powersolutions.danfoss.com/products/plus-1-software/#/
Of course it’s part of their PLUS 1 program so geared towards specific control situations but it could provide insight.
Also, you might be interested in the hypergraphs that Ben Goertzel and Company use with their Novamente and OpenCog projects:
Textbook: http://www.springer.com/us/book/9789462390263
Pre-print (free): http://lesswrong.com/lw/kq4/link_engineering_general_intelligence_the/
Of course their hypergraphs are not being used to study complex systems of systems, rather, they are what evolves in their directed evolution approach to AGI. They also have a probabilistic inference engine called PLN which they utilize to make inferences from hypergraphs; something you might find helpful.
From an analysis perspective, a couple of years ago I was reading a book of Humanity + interviews that Goertzel had put together, “Between Ape and Artilect:”
Click to access BetweenApeAndArtilect.pdf
and in one of the interviews Goertzel was discussing the problems he and his team were having mathematically modelling what they call cognitive synergy. From “The Hidden Pattern:”
Click to access HiddenPattern_march_4_06.pdf
“[…] Clustering/reasoning synergy is one among many dozens of examples of such synergies that we have discussed in the previous chapters. And the high-level mind patterns discussed in Chapter one – the dual network and the self, for example – are envisioned as emergent system patterns arising through cooperativity of all the system’s AI modules. The collection of AI modules in the system is intended to be a close-to-minimal set capable of leading to cooperative emergence of these high-level emergent structures. […]”
According to Goertzel, they were trying to use GeoMetrodynamics to model the synergy where the synergy is represented by curvature. This put the idea of synergy as curvature in my head and I came up with what I believe to be a novel idea, which I emailed to Goertzel and Duane Kouba (Dr. Kouba is an award-winning professor at UC Davis who specializes in Diffy Qs).
Basically, you start with a set of initial processes and synergies between those processes. So my thought was to encode the initial synergies in a Riemannian metric and then use Ricci Flow analogs to model each process with its relevant synergies leading to a system of Ricci Flow analogs which acts to transform the manifold on which it lives. This should tell one how the initial system evolves with time, from the perspective of synergy. To me, in designing a system of systems, the top level goal should be efficiency and it seems to me that high efficiency could be achieved via the maximization of synergy, the idea being to find a “close-to-minimal set capable of leading to cooperative emergence of” whatever one’s goal may be. If you find this idea compelling, you might want to email Goertzel and/or Kouba an inquiry as to whether or not they have done anything with it.
Now, with regards to your ethical considerations, in a sense, you can think of synergy as a type of mutually beneficial symbiosis, correct? Just recently, I posted a short comment on Wolfram’s blog suggesting the maximization of mutually beneficial symbiosis as the overall guiding principle for his AI Constitution. Constitutions are generally developed to guide relations between distinct entities and are most often deployed during conflict resolution. So imagine if your study of systems of systems, funded by DARPA or whoever – the military – led to a better and fuller understanding of general synergistic dynamics, conflict emergence, and conflict resolution which could be utilized on a global scale, eventually leading to a drastic reduction in the necessity for militaristic deployment? Now that would be the Karmic Kitty’s meow! Of course I always take the long view . . .
You might find it useful to look at IBM’s Rational XDE https://www-01.ibm.com/software/awdtools/suite/technical/features/ which connects UML and code for round-trip engineering (see Wikipedia): the UML generates code skeletons, and filled-in code can be reverse-engineered into diagrams. I found its Java precursor in 2001 to be the only IDE apart from emacs and IntelliJ that I could stand to use; I believe it now supports multiple languages. I can’t imagine that this ExAMS s/w is more powerful. Why did Metron not use it? Possibly because Rational products did and probably do cost an awful lot per seat.
It’s been a long time since I’ve blogged about the Complex Adaptive System Composition and Design Environment or CASCADE project run by John Paschkewitz. For a reminder, read these:
• Complex adaptive system design (part 1), Azimuth, 2 October 2016.
• Complex adaptive system design (part 2), Azimuth, 18 October 2016.
A lot has happened since then, and I want to explain it.