Circuits, Bond Graphs, and Signal-Flow Diagrams


My student Brandon Coya finished his thesis, and successfully defended it last Tuesday!

• Brandon Coya, Circuits, Bond Graphs, and Signal-Flow Diagrams: A Categorical Perspective, Ph.D. thesis, U. C. Riverside, 2018.

It’s about networks in engineering. He uses category theory to study the diagrams engineers like to draw, and functors to understand how these diagrams are interpreted.

His thesis raises some really interesting pure mathematical questions about the category of corelations and a ‘weak bimonoid’ that can be found in this category. Weak bimonoids were invented by Pastro and Street in their study of ‘quantum categories’, a generalization of quantum groups. So, it’s fascinating to see a weak bimonoid that plays an important role in electrical engineering!

However, in what follows I’ll stick to less fancy stuff: I’ll just explain the basic idea of Brandon’s thesis, say a bit about circuits and ‘bond graphs’, and outline his main results. What follows is heavily based on the introduction of his thesis, but I’ve baezified it a little.

The basic idea

People, and especially scientists and engineers, are naturally inclined to draw diagrams and pictures when they want to better understand a problem. One example is when Feynman introduced his famous diagrams in 1949; particle physicists have been using them ever since. But some other diagrams introduced by engineers are far more important to the functioning of the modern world and its technology. It’s outrageous, but sociologically understandable, that mathematicians have figured out more about Feynman diagrams than these other kinds: circuit diagrams, bond graphs and signal-flow diagrams. This is the problem Brandon aims to fix.

I’ve been unable to track down the early history of circuit diagrams, so if you know about that please tell me! But in the 1940s, Harry Olson pointed out analogies in electrical, mechanical, thermodynamic, hydraulic, and chemical systems, which allowed circuit diagrams to be applied to a wide variety of fields. On April 24, 1959, Henry Paynter woke up and invented the diagrammatic language of bond graphs to study generalized versions of voltage and current, called ‘effort’ and ‘flow,’ which are implicit in the analogies found by Olson. Bond graphs are now widely used in engineering. On the other hand, control theorists use diagrams of a different kind, called ‘signal-flow diagrams’, to study linear open dynamical systems.

Although category theory predates some of these diagrams, it was not until the 1980s that Joyal and Street showed that string diagrams can be used to reason about morphisms in any symmetric monoidal category. This motivates Brandon’s first goal: viewing electrical circuits, signal-flow diagrams, and bond graphs as string diagrams for morphisms in symmetric monoidal categories.

This lets us study networks from a compositional perspective. That is, we can study a big network by describing how it is composed of smaller pieces. Treating networks as morphisms in a symmetric monoidal category lets us build larger ones from smaller ones by composing and tensoring them: this makes the compositional perspective into precise mathematics. To study a network in this way we must first define a notion of ‘input’ and ‘output’ for the network diagram. Then gluing diagrams together, so long as the outputs of one match the inputs of the other, defines the composition for a category.

Network diagrams are typically assigned data, such as the potential and current associated to a wire in an electrical circuit. Since the relation between the data tells us how a network behaves, we call this relation the ‘behavior’ of a network. The way in which we assign behavior to a network comes from first treating a network as a ‘black box’, which is a system with inputs and outputs whose internal mechanisms are unknown or ignored. A simple example is the lock on a doorknob: one can insert a key and try to turn it; it either opens the door or not, and it fulfills this function without us needing to know its inner workings. We can treat a system as a black box through the process called ‘black-boxing’, which forgets its inner workings and records only the relation it imposes between its inputs and outputs.

Since systems with inputs and outputs can be seen as morphisms in a category we expect black-boxing to be a functor out of a category of this sort. Assigning each diagram its behavior in a functorial way is formalized by functorial semantics, first introduced in Lawvere’s thesis in 1963. This consists of using categories with specific extra structure as ‘theories’ whose ‘models’ are structure-preserving functors into other such categories. We then think of the diagrams as a syntax, while the behaviors are the semantics. Thus black-boxing is actually an example of functorial semantics. This leads us to another goal: to study the functorial semantics, i.e. black-boxing functors, for electrical circuits, signal-flow diagrams, and bond graphs.

Brendan Fong and I began this type of work by showing how to describe circuits made of wires, resistors, capacitors, and inductors as morphisms in a category using ‘decorated cospans’. Jason Erbele and I, and separately Bonchi, Sobociński and Zanasi, studied signal flow diagrams as morphisms in a category. In other work Brendan Fong, Blake Pollard and I looked at Markov processes, while Blake and I studied chemical reaction networks using decorated cospans. In all of these cases, we also studied the functorial semantics of these diagram languages.

Brandon’s main tool is the framework of ‘props’, also called ‘PROPs’, introduced by Mac Lane in 1965. The acronym stands for “products and permutations”, and these operations roughly describe what a prop can do. More precisely, a prop is a strict symmetric monoidal category equipped with a distinguished object X such that every object is a tensor power X^{\otimes n}. Props arise because very often we think of a network as going between some set of input nodes and some set of output nodes, where the nodes are indistinguishable from each other. Thus we typically think of a network as simply having some natural number as an input and some natural number as an output, so that the network is actually a morphism in a prop.
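
To make the notion of a prop concrete, here is a toy Python sketch (my own illustration, not code from the thesis; the names Rel, then and tensor are mine) of one of the props that appears below: \textrm{FinRel}, whose morphisms from m to n are relations between the finite sets \{0,\dots,m-1\} and \{0,\dots,n-1\}. Composition is relational composition, and the tensor product just places two relations side by side, so every object is a tensor power of the object 1:

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Rel:
    """A morphism m -> n in the prop FinRel: a relation between
    the finite sets {0,...,m-1} and {0,...,n-1}."""
    dom: int
    cod: int
    pairs: FrozenSet[Tuple[int, int]]

    def then(self, other: "Rel") -> "Rel":
        # Composition: relational composition, defined only when
        # the codomain of the first matches the domain of the second.
        assert self.cod == other.dom
        pairs = frozenset((i, k)
                          for (i, j) in self.pairs
                          for (j2, k) in other.pairs if j == j2)
        return Rel(self.dom, other.cod, pairs)

    def tensor(self, other: "Rel") -> "Rel":
        # Monoidal product: place the two relations side by side,
        # shifting the second relation's indices.
        shifted = {(i + self.dom, j + self.cod) for (i, j) in other.pairs}
        return Rel(self.dom + other.dom, self.cod + other.cod,
                   frozenset(self.pairs) | frozenset(shifted))
```

For example, Rel(2, 1, frozenset({(0, 0), (1, 0)})) is a morphism from 2 to 1 relating both elements of the source to the single element of the target.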

Circuits and bond graphs

Now let’s take a quick tour of circuits and bond graphs. Much more detail can be found in Brandon’s thesis, but this may help you know what to picture when you hear terminology from electrical engineering.

Here is an electrical circuit made of only perfectly conductive wires:

This is just a graph, consisting of a set N of nodes, a set E of edges, and maps s,t\colon E\to N sending each edge to its source and target node. We refer to the edges as perfectly conductive wires and say that wires go between nodes. Then associated to each perfectly conductive wire in an electrical circuit is a pair of real numbers called ‘potential’, \phi, and ‘current’, I.

Typically each node gets a potential, but in the above case the potential at either end of a wire would be the same, so we may as well associate the potential to the wire. Current and potential in circuits like these obey two laws due to Kirchhoff. First, at any node, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node. The other law states that any connected wires must have the same potential.
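
For example, if wires carrying currents I_1 and I_2 point into a node and a wire carrying current I_3 points out of it, these two laws say

I_1 + I_2 = I_3 \qquad \textrm{and} \qquad \phi_1 = \phi_2 = \phi_3

where \phi_i is the potential on the ith wire.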

We say that the above circuit is closed as opposed to being open because it does not have any inputs or outputs. In order to talk about open circuits and thereby bring the ‘compositional perspective’ into play we need a notion of inputs and outputs for a circuit. We do this using two maps i\colon X\to N and o\colon Y \to N that specify the inputs and outputs of a circuit. Here is an example:

We call the sets X, Y, and the disjoint union X + Y the inputs, outputs, and terminals of the circuit, respectively. To each terminal we associate a potential and current. In total this gives a space of allowed potentials and currents on the terminals, and we call this space the ‘behavior’ of the circuit. Since we make this assignment without knowing the potentials and currents inside the rest of the circuit, we call this process ‘black-boxing’ the circuit. This process hides the internal workings of the circuit and just tells us the relation it imposes between inputs and outputs. In fact this assignment is functorial, but to understand the functoriality we first need to say how to compose these kinds of circuits. We save this for later.
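
For the simplest example, black-boxing a single perfectly conductive wire from one input terminal to one output terminal gives the behavior

\{ (\phi_X, I_X, \phi_Y, I_Y) \;|\; \phi_X = \phi_Y, \ I_X = I_Y \}

where the current at each terminal is oriented to flow through the wire from input to output. (I’m glossing over sign conventions here; the thesis pins them down.)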

There are also electrical circuits that have ‘components’ such as resistors, inductors, voltage sources, and current sources. These are graphs as above, but with edges now labelled by elements in some set L. Here is one for example:

We call this an L-circuit. We may also black-box an L-circuit to get a space of allowed potentials and currents, i.e. the behavior of the L-circuit, and this process is functorial as well. The components in a circuit determine the possible potential and current pairs because they impose additional relationships. For example, a resistor between two nodes has a resistance R and is drawn as:

In an L-circuit this would be an edge labelled by some positive real number R. For a resistor like this Kirchhoff’s current law says I_1=I_2 and Ohm’s Law says \phi_2-\phi_1 =I_1R. This tells us how to construct the black-boxing functor that extracts the right behavior.
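
So, black-boxing a single resistor gives the behavior

\{ (\phi_1, I_1, \phi_2, I_2) \;|\; I_1 = I_2, \ \phi_2 - \phi_1 = I_1 R \}

a 2-dimensional subspace of the 4-dimensional space of potentials and currents at its two terminals.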

Engineers often work with wires that come in pairs where the current on one wire is the negative of the current on the other wire. In such a case engineers care about the difference in potential more than each individual potential. For such pairs of perfectly conductive wires:

we call V=\phi_2-\phi_1 the ‘voltage’ and I=I_1=-I_2 the ‘current’. Note that the word ‘current’ is used for two different, yet related, concepts. We call a pair of wires like this a ‘bond’ and a pair of nodes like this a ‘port’. To summarize we say that bonds go between ports, and in a ‘bond graph’ we draw a bond as follows:

Note that engineers do not explicitly draw ports at the ends of bonds; we follow this notation and simply draw a bond as a thickened edge. Engineers who work with bond graphs often use the terms ‘effort’ and ‘flow’ instead of voltage and current. Thus a bond between two ports in a bond graph is drawn equipped with an effort and flow, rather than a voltage and current, as follows:

A bond graph consists of bonds connected together using ‘1-junctions’ and ‘0-junctions’. These two types of junctions impose equations between the efforts and flows on the attached bonds. The flows on bonds connected together with a 1-junction are all equal, while the efforts sum to zero, after sprinkling in some signs depending on how we orient the bonds. For 0-junctions it works the other way: the efforts are all equal while the flows sum to zero! The duality here is well-known to engineers but perhaps less so to mathematicians. This is one topic Brandon’s thesis explores.
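
For example, for a junction with one bond coming in and two going out, carrying efforts e_1, e_2, e_3 and flows f_1, f_2, f_3, one choice of orientations gives

1\textrm{-junction:} \quad f_1 = f_2 = f_3, \qquad e_1 = e_2 + e_3

0\textrm{-junction:} \quad e_1 = e_2 = e_3, \qquad f_1 = f_2 + f_3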

Brandon explains bond graphs in more detail in Chapter 5 of his thesis, but here is an example:

The arrow at the end of a bond indicates which direction of current flow counts as positive, while the bar is called the ‘causal stroke’. These are unnecessary for Brandon’s work, so he adopts a simplified notation without the arrow or bar. In engineering it’s also important to attach general circuit components, but Brandon doesn’t consider these.

Outline

In Chapter 2 of his thesis, Brandon provides the necessary background for studying four categories as props:

• the category of finite sets and spans: \textrm{FinSpan}

• the category of finite sets and relations: \textrm{FinRel}

• the category of finite sets and cospans: \textrm{FinCospan}

• the category of finite sets and corelations: \textrm{FinCorel}.

In particular, \textrm{FinCospan} and \textrm{FinCorel} are crucial to the study of networks.

In Corollary 2.3.4 he notes that any prop has a presentation in terms of generators and equations. Then he recalls the known presentations for \textrm{FinSpan}, \textrm{FinCospan}, and \textrm{FinRel}. Proposition 2.3.7 lets us build props as quotients of other props.

He begins Chapter 3 by showing that \mathrm{FinCorel} is ‘the prop for extraspecial commutative Frobenius monoids’, based on a paper he wrote with Brendan Fong. This result also gives a presentation for \mathrm{FinCorel}.

Then he defines an “L-circuit” as a graph with specified inputs and outputs, together with a labelling of its edges by elements of a set L. L-circuits are morphisms in the prop \textrm{Circ}_L. In Proposition 3.2.8 he uses a result of Rosebrugh, Sabadini and Walters to show that \textrm{Circ}_L can be viewed as the coproduct of \textrm{FinCospan} and the free prop on the set L of labels.

Brandon then defines \textrm{Circ} to be the prop \textrm{Circ}_L where L consists of a single element. This example is important, because \textrm{Circ} can be seen as the category whose morphisms are circuits made of only perfectly conductive wires! From any morphism in \textrm{Circ} he extracts a cospan of finite sets and then turns the cospan into a corelation. These two processes are functorial, so he gets a method for sending a circuit made of only perfectly conductive wires to a corelation:

\textrm{Circ} \stackrel{H'}{\longrightarrow} \textrm{FinCospan} \stackrel{H}{\longrightarrow} \textrm{FinCorel}

There is also a functor

K\colon \textrm{FinCorel} \to \textrm{FinRel}_k

where \textrm{FinRel}_k is the category whose objects are finite-dimensional vector spaces over k and whose morphisms R\colon U\to V are linear relations, that is, linear subspaces R\subseteq U \oplus V. By composing with the above functors H' and H he associates a linear relation R to any circuit made of perfectly conductive wires. On the other hand, he gets a subspace for any such circuit by first assigning potential and current to each terminal, and then subjecting these variables to the appropriate physical laws.
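
Composition in \textrm{FinRel}_k is composition of relations: given R \subseteq U \oplus V and S \subseteq V \oplus W, the composite consists of all pairs (u,w) such that (u,v) \in R and (v,w) \in S for some v \in V. Here is a small numerical sketch of this, just an illustration and not code from the thesis (the helpers null_space, compose and resistor are my own names), representing a linear relation by a matrix whose columns span the subspace:

```python
import numpy as np

def null_space(A, tol=1e-10):
    # Orthonormal basis for the null space of A, computed via the SVD.
    _, s, vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vt[rank:].T

def compose(R, S, dim_u, dim_v, dim_w):
    """Compose linear relations R <= U (+) V and S <= V (+) W.

    R is a (dim_u + dim_v) x r matrix whose columns span the subspace,
    S is a (dim_v + dim_w) x s matrix likewise.  The result is a matrix
    whose columns span the composite relation inside U (+) W."""
    R_U, R_V = R[:dim_u], R[dim_u:]
    S_V, S_W = S[:dim_v], S[dim_v:]
    # Find all parameter pairs (a, b) whose middle legs agree: R_V a = S_V b.
    N = null_space(np.hstack([R_V, -S_V]))
    a, b = N[:R.shape[1]], N[R.shape[1]:]
    return np.vstack([R_U @ a, S_W @ b])

def resistor(r):
    # Behavior of a resistor with resistance r, as a relation from
    # (phi_1, I_1) to (phi_2, I_2): I_1 = I_2 and phi_2 - phi_1 = I_1 * r.
    return np.array([[1.0, 0.0],   # phi_1
                     [0.0, 1.0],   # I_1
                     [1.0, r],     # phi_2 = phi_1 + I_1 * r
                     [0.0, 1.0]])  # I_2 = I_1

# Two resistors in series behave like one resistor with the sum of the
# resistances: the columns of compose(resistor(2.0), resistor(3.0), 2, 2, 2)
# span the same subspace as the columns of resistor(5.0).
```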

It turns out that these two ways of assigning a subspace to a morphism in \textrm{Circ} are the same. So, he calls the linear relation associated to a circuit using the composite KHH' the “behavior” of the circuit and defines the “black-boxing” functor

\blacksquare \colon \textrm{Circ}\to \textrm{FinRel}_k

to be the composite of these:

\textrm{Circ} \stackrel{H'}{\longrightarrow} \textrm{FinCospan} \stackrel{H}{\longrightarrow} \textrm{FinCorel} \stackrel{K}{\longrightarrow} \textrm{FinRel}_k

Note that the underlying corelation of a circuit made of perfectly conductive wires completely determines the behavior of the circuit via the functor K.

In Chapter 4 he reinterprets the black-boxing functor \blacksquare as a morphism of props. He does this by introducing the category \textrm{LagRel}_k, whose objects are “symplectic” vector spaces and whose morphisms are “Lagrangian” relations. In Proposition 4.1.6 he proves that the functor K\colon \textrm{FinCorel} \to \textrm{FinRel}_k actually picks out a Lagrangian relation for any corelation and thus determines a morphism of props. So, he redefines K to be this morphism

K\colon \mathrm{FinCorel} \to \mathrm{LagRel}_k

and reinterprets black-boxing as the composite

\mathrm{Circ} \stackrel{H'}{\longrightarrow} \mathrm{FinCospan} \stackrel{H}{\longrightarrow} \mathrm{FinCorel} \stackrel{K}{\longrightarrow} \mathrm{LagRel}_k
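
Here ‘Lagrangian’ means the following: the space of potentials and currents at a circuit’s terminals carries a symplectic form, roughly \omega = \sum_i d\phi_i \wedge dI_i but with a sign twist between inputs and outputs so that composition works out, and a Lagrangian relation is a linear relation that is a maximal subspace on which this form vanishes. For example, the behavior of the resistor written down earlier is 2-dimensional and the twisted form d\phi_1 \wedge dI_1 - d\phi_2 \wedge dI_2 vanishes on it, so it is Lagrangian. (I’m glossing over the exact sign conventions; see Chapter 4 of the thesis.)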

After doing all this hard work for circuits made of perfectly conductive wires—a warmup exercise that engineers might scoff at—Brandon shows the power of his results by easily extending the black-boxing functor to circuits with arbitrary label sets in Theorem 4.2.1. He applies this result to a prop whose morphisms are circuits made of resistors, inductors, and capacitors. Then he considers a more general and mathematically more natural approach to linear circuits using the prop \textrm{Circ}_k. The morphisms here are open circuits with wires labelled by elements of some chosen field k. In Theorem 4.2.4 he proves the existence of a morphism of props

\blacksquare \colon \textrm{Circ}_k \to \textrm{LagRel}_k

that describes the black-boxing of circuits built from arbitrary linear components.

Brandon then picks up where Jason Erbele’s thesis left off, and recalls how control theorists use “signal-flow diagrams” to draw linear relations. These diagrams make up the category \textrm{SigFlow}_k, which is the free prop generated by the same generators as \textrm{FinRel}_k. Similarly he defines the prop \widetilde{\mathrm{Circ}}_k as the free prop generated by the same generators as \textrm{Circ}_k. Then there is a strict symmetric monoidal functor T\colon \widetilde{\mathrm{Circ}}_k \to \textrm{SigFlow}_k giving a commutative square:

Of course, circuits made of perfectly conductive wires are a special case of linear circuits. We can express this fact using another commutative square:

Combining the diagrams so far, Brandon gets a commutative diagram summarizing the relationship between linear circuits, cospans, corelations, and signal-flow diagrams:

Brandon concludes Chapter 4 by extending his work to circuits with voltage and current sources. Such circuits define affine relations rather than linear relations. The prop framework lets Brandon extend black-boxing to them by showing that affine Lagrangian relations are the morphisms of a prop \textrm{AffLagRel}_k. This leads to Theorem 4.4.5, which says that for any field k and label set L there is a unique morphism of props

\blacksquare \colon \textrm{Circ}_L \to \textrm{AffLagRel}_k

extending the other black-boxing functor and sending each element of L to an arbitrarily chosen affine Lagrangian relation between potentials and currents.
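
For example, with the same conventions used for the resistor earlier, an ideal voltage source maintaining a fixed voltage V_0 across its terminals has behavior

\{ (\phi_1, I_1, \phi_2, I_2) \;|\; I_1 = I_2, \ \phi_2 - \phi_1 = V_0 \}

which is an affine subspace rather than a linear one whenever V_0 \neq 0.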

In Chapter 5, Brandon studies bond graphs as morphisms in a category. His goal is to define a category \textrm{BondGraph}, whose morphisms are bond graphs, and then assign a space of efforts and flows as behavior to any bond graph using a functor. He also constructs a functor that assigns a space of potentials and currents to any bond graph, which agrees with the way that potential and current relate to effort and flow.

The subtle way he defines \textrm{BondGraph} comes from two different approaches to studying bond graphs, and the problems inherent in each approach. The first approach leads him to a subcategory \textrm{FinCorel}^\circ of \textrm{FinCorel}, while the second leads him to a subcategory \textrm{LagRel}_k^\circ of \textrm{LagRel}_k. There isn’t a commutative square relating these four categories, but Brandon obtains a pentagon that commutes up to a natural transformation by inventing a new category \textrm{BondGraph}:

This category is a way of formalizing Paynter’s idea of bond graphs.

In his first approach, Brandon views a bond graph as an electrical circuit. He takes advantage of his earlier work on circuits and corelations by taking \textrm{FinCorel} to be the category whose morphisms are circuits made of perfectly conductive wires. In this approach a terminal is the object 1 and a wire is the identity corelation from 1 to 1, while a circuit from m terminals to n terminals is a corelation from m to n.

In this approach Brandon thinks of a port as the object 2, since a port is a pair of nodes. Then he thinks of a bond as a pair of wires and hence the identity corelation from 2 to 2. Lastly, the two junctions are two different ways of connecting ports together, and thus specific corelations from 2m to 2n. It turns out that by following these ideas he can equip the object 2 with two different Frobenius monoid structures, which behave very much like 1-junctions and 0-junctions in bond graphs!

It would be great if the morphisms built from these two Frobenius monoids corresponded perfectly to bond graphs. Unfortunately there are some equations which hold between morphisms made from these Frobenius monoids that do not hold for corresponding bond graphs. So, Brandon defines a category \textrm{FinCorel}^\circ using the morphisms that come from these two Frobenius monoids and moves on to a second attempt at defining \textrm{BondGraph}.

Since bond graphs impose Lagrangian relations between effort and flow, this second approach starts by looking back at \textrm{LagRel}_k. The relations associated to a 1-junction make k\oplus k into yet another Frobenius monoid, while the relations associated to a 0-junction make k\oplus k into a different Frobenius monoid. These two Frobenius monoid structures interact to form a bimonoid! Unfortunately, a bimonoid has some equations between morphisms that do not correspond to equations between bond graphs, so this approach also does not result in morphisms that are bond graphs. Nonetheless, Brandon defines a category \textrm{LagRel}_k^\circ using the two Frobenius monoid structures on k\oplus k.
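
Concretely, thinking of a junction with two bonds on one side and one on the other as a multiplication m \colon (k \oplus k) \oplus (k \oplus k) \to k \oplus k, one expects relations of roughly this shape (I’m choosing signs here; the precise conventions are in Chapter 5 of the thesis):

\{((e, f_1), (e, f_2), (e, f_1 + f_2))\} \quad \textrm{for the 0-junction}

\{((e_1, f), (e_2, f), (e_1 + e_2, f))\} \quad \textrm{for the 1-junction}

so one structure copies efforts and adds flows, while the other copies flows and adds efforts.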

Since it turns out that \textrm{FinCorel}^\circ and \textrm{LagRel}_k^\circ have corresponding generators, Brandon defines \textrm{BondGraph} as a prop that also has corresponding generators, but with only the equations found in both \textrm{FinCorel}^\circ and \textrm{LagRel}_k^\circ. By defining \textrm{BondGraph} in this way he automatically gets two functors

F\colon \textrm{BondGraph} \to \textrm{LagRel}_k^\circ

and

G\colon \textrm{BondGraph} \to \textrm{FinCorel}^\circ

The functor F associates effort and flow to a bond graph, while the functor G lets us associate potential and current to a bond graph using the previous work done on \textrm{FinCorel}. Then the Lagrangian subspace relating effort, flow, potential, and current:

\{(V,I,\phi_1,I_1,\phi_2,I_2) | V = \phi_2-\phi_1, I = I_1 = -I_2\}

defines a natural transformation in the following diagram:

Putting this together with the diagram we saw before, Brandon gets a giant diagram which encompasses the relationships between circuits, signal-flow diagrams, bond graphs, and their behaviors in category theoretic terms:

This diagram is a nice quick road map of his thesis. Of course, you need to understand all the categories in this diagram, all the functors, and also their applications to engineering, to fully appreciate what he has accomplished! But his thesis explains that.

To learn more

Coya’s thesis has lots of references, but if you want to see diagrams at work in actual engineering, here are some good textbooks on bond graphs:

• D. C. Karnopp, D. L. Margolis and R. C. Rosenberg, System Dynamics: A Unified Approach, Wiley, New York, 1990.

• F. T. Brown, Engineering System Dynamics: A Unified Graph-Centered Approach, Taylor and Francis, New York, 2007.

and here’s a good one on signal-flow diagrams:

• B. Friedland, Control System Design: An Introduction to State-Space Methods, S. W. Director (ed.), McGraw–Hill Higher Education, 1985.

6 Responses to Circuits, Bond Graphs, and Signal-Flow Diagrams

  1. Very nice post but wayyy above my pay grade. I think I got an idea of what PROP means—‘products and permutations’.

    In set theory you just have products (multiplication) and addition. In combinatorics you have permutations and combinations.
    You can get a lot of mathematical logic and statistical mechanics from those concepts.

    I once met someone who studied under Lawvere (he gave me one of his papers on category theory, which I could barely understand – that was at a punk rock show). Lawvere also had interesting political views which got him in trouble. Mathematical biologists like Rashevsky and Rosen were also influenced by him.

    Lawvere also got into trouble for teaching ‘the history of mathematics’ without permission. V. Arnold (of the KAM theorem) had similar issues.

    Some cite Gabriel Kron, from the 1940s, as a kind of original source of bond-graph-type ideas. (He wrote down the Schrödinger equation as an electrical circuit.)
    From googling I see there is a website by F. E. Cellier with a lot of stuff on bond graphs.
    Mayer in physical chemistry used ‘cluster integrals’ or expansions which are similar to Feynman diagrams – except they are used for chemicals, not quanta.

    • Todd Trimble says:

      That Lawvere “got in trouble” by espousing his political views is basically right and well-known, but the part about Lawvere teaching the history of mathematics “without permission” sounded weird to me; I’d never heard about that. I can see that something to that effect is mentioned in the Wikipedia article on Lawvere (not a very good one btw — it doesn’t give a proper idea of his immense influence), but I wasn’t able to follow up on this because the crucial page in the book of Waite cited in the article is hidden from me. Can someone shed some light on the story there?

  2. “the diagrams engineers like to draw”

    For electrical circuit diagrams (or schematics), it’s not really a question as to whether engineers like to draw these, as they are an exact representation of the physical circuit, with all the wires represented by lines connecting the elements.

    If no one drew the schematics (or currently, a netlist), nothing would be built. The equivalent for an architect would be a building blueprint.

    On the other hand, bond graphs are a transformation of the physical circuit, and many electrical engineers will never see one of these during their college coursework. Of course, the underlying equations for a circuit diagram and the equivalent bond graph are exactly the same. Some say the bond graph is more of a direct representation of the underlying equations.

    I have heard that mechanical engineers may work with bond graphs more often, especially for hybrid electro-mechanical systems.

    It would be interesting to see whether bond graphs ever become popular in the circuit world.

    • John Baez says:

      I have the feeling bond graphs attract engineers who are seeking a broad universal formalism, more than engineers who focus on electrical systems. Just as some physicists seek “grand unified theories” or “theories of everything” while others focus on specific, humbler, but perhaps more practical problems, some engineers seem drawn to grand syntheses—and those seem to be the ones who like bond graphs! The titles of the textbooks I recommended hint at this:

      • D. C. Karnopp, D. L. Margolis and R. C. Rosenberg, System Dynamics: A Unified Approach, Wiley, New York, 1990.

      • F. T. Brown, Engineering System Dynamics: A Unified Graph-Centered Approach, Taylor and Francis, New York, 2007.

      These are very good and very fun to read, but I’m not sure they could convince an opponent that bond graphs are “necessary”.

      That’s okay. I bumped into bond graphs when I was starting to attempt my own grand synthesis. What I mainly like about them now is the process of “doubled doubling” that takes us from real numbers to pairs of real numbers (e.g. the potential and current on a wire) to pairs of pairs of real numbers (e.g. the potentials and currents on two wires).

      The first doubling takes us into symplectic geometry, which is heavily discussed in Brandon’s thesis and also Brendan’s thesis. Symplectic geometry arose in physics when it was realized that degrees of freedom come in pairs, e.g. position and momentum or potential and current. There are lots of nice textbooks on symplectic geometry, and it sits near the heart of mathematical physics.

      The second doubling is more mysterious and thus more interesting to me! It shows up in electrical engineering in the fact that most apparatus has wires coming in pairs, each of which in turn carries a pair of variables: potential and current. However, only the difference of potentials matters physically—this is the “voltage”—and the currents are constrained to be equal and opposite in most cases. So, we get back down to two variables: this is called “symplectic reduction”.

      There is much more to say about this, but I’ll urge people to start by looking at Brandon’s thesis!

  3. Keith Harbaugh says:

    JB, something I’ve long been curious about:
    To what extent have your circles (you certainly have a wide range of acquaintances and knowledge!) considered the 1987 article
    “Hierarchical evolutive systems: A mathematical model for complex systems” by Andrée Ehresmann and J.P. Vanbremeersch
    https://doi.org/10.1007/BF02459958
    and its many successors, see
    http://ehres.pagesperso-orange.fr/ and click on “Publications”.
    As I understand it, Andrée basically models complex systems as a hierarchy of colimits, which evolve over time.
    Of course, in our systems both the requirements/specifications and the implementation evolve (hopefully in sync!).
    Perhaps her papers might be of some relevance.
    If all this is answered somewhere, would you mind pointing me to it? Thanks very much.

    • John Baez says:

      I got to know Andrée Ehresmann at a conference called Categories at the Crossroads, which I helped organize in 2014. I subsequently met her at IRCAM, the electronic music laboratory in Paris—she’s working with some people I know applying categories to music theory. But it’s mainly conversations at that conference that helped me understand her theories. As you say, she wants to build up descriptions of hierarchical structures as iterated colimits.

