*guest post by Cameron Smith*

Consider these quotes:

My thesis has been that one path to the construction of a non-trivial theory of complex systems is by way of a theory of hierarchy. Empirically, a large proportion of the complex systems we observe in nature exhibit hierarchic structure. On theoretical grounds we could expect complex systems to be hierarchies in a world in which complexity had to evolve from simplicity. – Herbert Simon, 1962

(Many of the concepts that) have dominated scientific thinking for three hundred years, are based upon the understanding that at smaller and smaller scales—both in space and in time—physical systems become simple, smooth and without detail. A more careful articulation of these ideas would note that the fine scale structure of planets, materials and atoms is not without detail. However, for many problems, such detail becomes irrelevant at the larger scale. Since the details (become) irrelevant (at such larger scales), formulating theories in a way that assumes that the detail does not exist yields the same results as (theories that do not make this assumption). – Yaneer Bar-Yam

Thoughts like these lead me to believe that, as a whole, we humans need to reassess some of our approaches to understanding. I’m not opposed to reductionism, but I think it would be useful to try to characterize those situations that might require something more than an exclusively reductionist approach. One way to do that is to break down some barriers that we’ve constructed between *disciplines*. So I’m here on Azimuth trying to help out this process.

Indeed, Azimuth is just one of many endeavors people are beginning to work on that might just lead to the unification of humanity into a superorganism. Regardless of the external reality, a fear of climate change could have a unifying effect. And, if we humans are simply a set of constituents of the superorganism that is Earth’s biosphere, it appears we are its only candidate germ line. So, assuming we’d like our descendants to have a chance at existence in the universe, we need to figure out either how to keep this superorganism alive or help it reproduce.

We each have to recognize our own individual limitations of time, commitment, and brainpower. So, I’m trying to limit my work to the study of biological evolution rather than conjuring up a ‘pet theory of everything’. However, I’m also trying not to let those disciplinary and institutional barriers limit the tools I find valuable, or the people I interact with.

So, the more I’ve thought about the complexity of evolution (for now let’s just say ‘complexity’ = ‘anything humans don’t yet understand’), the more I’ve been driven to search for new languages. And in that search, I’ve been driven toward pure mathematics, where there are many exciting languages lurking around. Perhaps one of these languages has already obviated the need to invent new ideas to understand biological evolution… or perhaps an altogether new language needs to be constructed.

The prospects of a general theory of evolution point to the same intellectual challenge that we see in the quote above from Bar-Yam: assuming we’d like to be able to consistently manipulate the universe, when can we neglect *details* and when can’t we?

Consider the *level of organization* concept. Since different details of a system can be effectively ignored at different scales, our scientific theories have themselves become ‘stratified’:

• G. L. Farre, The energetic structure of observation: a philosophical disquisition, *American Behavioral Scientist* **40** (May 1997), 717-728.

In other words, science tends to be organized in ‘layers’. These layers have come to be conceived of as levels of organization, and each scientific theory tends to address only one of these levels.

It might be useful to work explicitly on connecting theories that tell us about particular levels of organization in order to attempt to develop some theories that *transcend* levels of organization. One type of insight that could be gained from this approach is an understanding of the mutual development of bottom-up *ostensibly mechanistic* models of simple systems and top-down *initially phenomenological* models of complex ones.

Simon has written an interesting discussion of the quasi-continuum that ranges from simple systems to complex ones:

• H. A. Simon, The architecture of complexity, *Proceedings of the American Philosophical Society* **106** (1962), 467–482.

But if we take an ideological perspective on science that says “let’s unify everything!” (scientific monism), a significant challenge is the development of a language able to unify our descriptions of simple and complex systems. Such a language might help communication among scientists who work with complex systems that apparently involve multiple levels of organization. Something like category theory may provide the nucleus of the framework necessary to formally address this challenge. But, in order to head in that direction, I’ll try out a few examples in a series of posts, albeit from the somewhat limited perspective of a biologist, from which some patterns might begin to surface.

In this introductory post, I’ll try to set a basis for thinking about this tension between simple and complex systems without wading through any treatises on ‘complexity’. It will be remarkably imprecise, but I’ll try to describe the ways in which I think it provides a useful metaphor for thinking about how we humans have dealt with this simple ↔ complex tension in science.

Another tack that I think could accomplish a similar goal, perhaps in a clearer way, would be to discuss fractals, power laws and maybe even renormalization. I might try that out in a later post if I get a little help from my new Azimuth friends, but I don’t think I’m qualified yet to do it alone.

#### Simple and complex systems

What is the organizational structure of the products of evolutionary processes? Herbert Simon provides a perspective that I find intuitive in his parable of two watchmakers.

He argues that systems containing modules that don’t instantaneously fall apart (‘stable intermediates’) and can be assembled hierarchically take less time to evolve complexity than systems that lack stable intermediates. Given a particular set of internal and environmental constraints that can only be satisfied by some relatively complex system, a hierarchically organized one will be capable of meeting those constraints with the fewest resources and in the least time (i.e. most efficiently). The constraints any system is subject to determine the types of structures that can form. If *hierarchical* organization is an unavoidable outcome of evolutionary processes, it should be possible to characterize the causes that lead to its emergence.

Simon describes a property that some complex systems have in common, which he refers to as ‘near decomposability’:

• H. A. Simon, Near decomposability and the speed of evolution, *Industrial and Corporate Change* **11** (June 2002), 587-599.

A system is **nearly decomposable** if it’s made of parts that interact rather weakly with each other; these parts in turn being made of smaller parts with the same property.

For example, suppose we have a system modelled by a first-order linear differential equation. To be concrete, consider the fictitious building imagined by Simon: the Mellon Institute, with 12 rooms. Suppose the temperature of the *i*th room at time *t* is $T_i(t)$. Of course most real systems seem to be nonlinear, but for the sake of this metaphor we can imagine that the temperatures of these rooms interact in a linear way, like this:

$$\frac{d T_i}{d t} = \sum_{j=1}^{12} a_{ij} T_j$$

where the $a_{ij}$ are some numbers. Suppose also that the matrix $A = (a_{ij})$ looks like this:

$$A = \begin{pmatrix} A_1 & E_1 & E_2 & E_2 \\ E_1 & A_2 & E_2 & E_2 \\ E_2 & E_2 & A_3 & E_1 \\ E_2 & E_2 & E_1 & A_4 \end{pmatrix}$$

where each $A_k$ is a $3 \times 3$ block whose entries have size roughly $a$, and $E_1$ and $E_2$ stand for $3 \times 3$ blocks whose entries have size roughly $\epsilon_1$ and $\epsilon_2$, respectively. For the sake of the metaphor I’m trudging through here, let’s also assume

$$a \gg \epsilon_1 \gg \epsilon_2 .$$

Then our system is nearly decomposable. Why? It has three ‘layers’, with two cells at the top level, each divided into two subcells, and each of these subdivided into three sub-subcells. The numbers of the rows and columns designate the cells: cells 1–6 and 7–12 constitute the two top-level subsystems, and cells 1–3, 4–6, 7–9 and 10–12 the four second-level subsystems. The interactions within the latter subsystems have intensity $a$, those within the former two subsystems intensity $\epsilon_1$, and those between components of the largest subsystems intensity $\epsilon_2$. This is why Simon states that this matrix is in **near-diagonal form**. Another, probably more common, terminology for this would be **near block diagonal form**. This terminology is a bit sloppy, but it basically means that we have a square matrix whose diagonal entries are square matrices and all other elements are *approximately* zero. That ‘approximately’ is what differentiates *near block diagonal matrices* from honest block diagonal matrices, whose off-diagonal matrix elements are precisely zero.
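To make the structure tangible, here is a minimal NumPy sketch that assembles a 12×12 matrix of this shape. The numerical values ($a = 1.0$, $\epsilon_1 = 0.1$, $\epsilon_2 = 0.01$) are made up purely for illustration:

```python
import numpy as np

def mellon_matrix(a=1.0, eps1=0.1, eps2=0.01, n=12):
    """Near block diagonal matrix: 3x3 blocks of strength a, nested in
    6x6 blocks of strength eps1, with the two halves coupled by eps2."""
    A = np.full((n, n), eps2)        # weakest couplings everywhere, then...
    for i in range(0, n, 6):
        A[i:i+6, i:i+6] = eps1       # ...overwrite the two 6x6 subsystems
    for i in range(0, n, 3):
        A[i:i+3, i:i+3] = a          # ...and the four 3x3 sub-subsystems
    return A

A = mellon_matrix()
print(A[0, 1], A[0, 4], A[0, 7])     # within-block, mid-level, weakest
```

Reading off a single row shows the three intensities at once: room 1 couples strongly to room 2, weakly to room 5, and barely at all to room 8.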

This is a trivial system, but it illustrates that the near decomposability of the coefficient matrix allows these equations to be solved in a *near* hierarchical fashion. As an approximation, rather than simulating all the equations at once (e.g. all twelve in this example), one can take a recursive approach: solve the four systems of three equations (each of the blocks containing $a$’s), average the results to produce initial conditions for two systems of two equations with coefficients of order $\epsilon_1$, and then average those results to produce initial conditions for a single system of two equations with coefficients of order $\epsilon_2$.

This example of simplification indicates that the study of a nearly decomposable system can be reduced to a series of smaller modules, which can be simulated in less computational time, if the error introduced in this approximation is tolerable. The degree to which this method saves time depends on the relationship between the size of the whole system and the size and number of hierarchical levels. However, as an example, given that the time complexity for matrix inversion (i.e. solving a system of linear equations) is $O(n^3)$, the hierarchical decomposition leads to an algorithm with time complexity roughly

$$O\left( \sum_{l=1}^{L} n_l \, m_l^3 \right),$$

where *L* is the number of levels in the decomposition and, at level $l$, one solves $n_l$ subsystems of size $m_l$. (For example, *L* = 4 in the Mellon Institute, assuming the individual rooms are the lowest level.)
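Plugging the Mellon Institute numbers into this cost model makes the savings concrete; the quick check below just assumes a dense $k \times k$ solve costs about $k^3$ operations:

```python
# Direct solve of the full 12-room system, at ~k^3 cost for a k x k solve:
n = 12
direct_cost = n ** 3                                     # 1728

# Hierarchical solve: four 3x3 systems, then two 2x2, then one 2x2.
levels = [(4, 3), (2, 2), (1, 2)]                        # (count, size)
hierarchical_cost = sum(c * s ** 3 for c, s in levels)   # 132

print(direct_cost, "vs", hierarchical_cost)
```

Even at this toy size the recursive scheme does about a tenth of the work; the gap widens rapidly as the system grows.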

All of this deserves to be made much more precise. However, there are some potential metaphorical consequences for the evolution of complex systems:

If we begin with a population of systems of comparable complexity, some of which are nearly decomposable and some of which are not, the nearly decomposable systems will, on average, increase their fitness through evolutionary processes much faster than the remaining systems, and will soon come to dominate the entire population. Notice that the claim is not that more complex systems will evolve more rapidly than less complex systems, but that, at any level of complexity, nearly decomposable systems will evolve much faster than systems of comparable complexity that are not nearly decomposable. – Herbert Simon, 2002

The point I’d like to make is that in this system, the idea of switching back and forth between simple and complex perspectives is made explicit: we get a sort of conceptual parallax.

In this simple case, the approximation that Simon suggests works well; however, for some other systems, it wouldn’t work at all. If we aren’t careful, we might even become victims of the Dunning-Kruger effect. In other words: if we don’t understand a system well from the start, we may overestimate how well we understand the limitations inherent to the simplifications we employ in studying it.

But if we at least recognize the potential of falling victim to the Dunning-Kruger effect, we can vigilantly guard against it in trying to understand, for example, the currently paradoxical tension between ‘groups’ and ‘individuals’ that lies at the heart of evolutionary theory… and probably also the caricatures of evolution that breed social controversy.

Keeping this in mind, my starting point in the next post in this series will be to provide some examples of hierarchical organization in biological systems. I’ll also set the stage for a discussion of evolution viewed as a dynamic process involving structural and functional transitions in hierarchical organization—or for the physicists out there, something like phase transitions!

Before I disabled comments on this post over at Google+, Rob Seaman wrote:

I replied:

Thank you for your comments Rob. Could you explain a bit more about what you mean with respect to Shannon saying “why” hierarchies are a preferred type of organizational structure? I am familiar with information theory, but that is not completely clear to me.

I can’t access the usual link right now to Shannon’s 1948 Bell Labs paper, so let me point to the web page (http://heasarc.nasa.gov/fitsio/fpack/) for an ongoing collaboration on astronomical data compression issues and tools. Recent progress in compression in astronomy has focused on hierarchical (“tiled”) data structures both for large image arrays and tabular column-major marshalling of databases.

The Shannon entropy and compression are two sides of the same coin, of course. Perhaps I was being a bit glib – it’s just as reasonable to have Shannon say “what” and Kolmogorov say “why”. The point being that compression is the same thing as efficient representation, and efficiency most definitely doesn’t mean coughing up a single monolithic lump like an owl pellet.

You are the second person to mention to me in the past week the potential connection between data compression and hierarchy theory. I will look into this in more detail than I have before.

Regarding a link between data compression and hierarchy theory, you might look at a couple of our recent papers (empirical not theoretical) related to astronomical data compression:

http://arxiv.org/pdf/0903.2140

http://arxiv.org/pdf/1007.1179

The first discusses the role of noise (see the appendix to relate this to entropy) and the second issues of quantization. I would think a robust hierarchical architecture will tend to have some fuzziness to it similar to lossy compression. (Subtractive dithering is a neat way to tame fuzziness.)

There will be theoretical limits, placed by entropy, on the efficiency that can be realized from any hierarchical scheme. There may be biases introduced as per quantization noise (aka “Sheppard’s corrections”).

Note that it is simple to design an algorithm to achieve very high compression ratios – simply sort the values first (and many sort algorithms are intrinsically hierarchical). The hard part, of course, is decompression :-) …unless you preserve information conveying the original ordering. I wonder if the clever Burrows-Wheeler transform might have a hierarchical use.
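As an aside, Rob’s sort-first trick is easy to demonstrate: sorted data compresses dramatically better, but undoing it requires the original ordering. A quick sketch with Python’s standard `zlib` (toy random data, nothing astronomical):

```python
import random
import zlib

random.seed(0)
values = [random.randrange(256) for _ in range(10_000)]

raw = bytes(values)
sorted_raw = bytes(sorted(values))

# Sorting creates long runs of equal bytes, which zlib loves:
small = len(zlib.compress(sorted_raw))
big = len(zlib.compress(raw))
print(small, "<", big)

# ...but decompression needs the original ordering. Recording the
# permutation costs roughly as much as the "savings":
svals = sorted(values)
order = sorted(range(len(values)), key=values.__getitem__)
restored = [0] * len(values)
for rank, idx in enumerate(order):
    restored[idx] = svals[rank]
assert restored == values
```

The permutation `order` is exactly the side information Rob alludes to: without it, the sorted stream is unrecoverable.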

Thanks for those links to your papers Rob. I’ll probably have some questions as I try to wade through them.

It seems there had been some confusion about my previous reply to this post, which contained just the sentence:

As John emailed me, people felt irritated by this reply and some even thought that my account may have been hacked. I don’t know whether my account was hacked, but the sentence “Gaddafi says hi” was from me. I also don’t know whether this is of interest or whether it makes things clearer (in fact John suggested watching this movie), but here is my email reply to John’s email:

I guess I started this. In my defense, I started it over on Google+. Not only does the real world have a way of knocking down even the most robust theoretical positions, but in this case it seems just a little – well – “enthusiastic” to assert that hierarchical systems are not only intrinsically more robust and evolve better, but they also freshen your breath.

My personal design heroes are doyens of simplicity such as Henry Petroski, who wrote revelatory books about “The Pencil” and “The Toothpick”. Hierarchical architectures aren’t intrinsically good, rather they benefit from the ability to enhance more basic desirable qualities such as simplicity and robustness. A hierarchy is a hypothetical imperative for a particular project (until the system engineer does a trade-off study), while robustness is a categorical imperative for every project.

The interested student of SF might compare and contrast the Watchmaker Moties of “The Mote in God’s Eye”, who construct artificial technology with no regard for hierarchical organization – with the depiction of Robert Hooke in the Baroque Cycle, revealing level after level of naturally evolved perfection with his microscope.

Rudyard gets the last word: http://www.kipling.org.uk/poems_serving.htm

Rob wrote:

I’m not sure about what’s meant by artificial technology since I don’t know that SF, but I find it also difficult to understand that hierarchical systems are automatically more robust.

The title “The Mote in God’s Eye” reminds me of a poem by the modern German poet Micky Zickelig; it’s quite silly and hard to translate well, but it’s about (last) words. Roughly, in English:

Zappeliga Rap (Fidgety Rap)

Sepp was already zapping, all fidgety,

around inside his head

He hunted for the words, quite rattled,

and found himself rather dumb

But the words just sat there, wobbly,

and waited themselves sore

until Sepp, far too jittery,

tossed a word out of his mouth

The word floated, tiny and teasing,

right into the other’s ear

There the brain went wild, it was ticklish;

for Sepp it was surely a goal.

I was similarly titillated by “hierarchy” – e.g. the societal interpretation, especially in conjunction with “robustness” – since armies are the epitome of social hierarchies and are ostensibly formed to avert wars (“si vis pacem, para bellum”), yet they regularly reveal less-than-robust control structures for that stated purpose (while simultaneously helping tyrannies).

The topic may well belong under the umbrella of “saving the planet”, but it is challenging, to say the least, to interface it properly to Cameron Smith’s discussion. Not the proper frame, imo.

…but maybe then just call for an explicit answer to the question: “what would most critically distinguish armies from the cases of hierarchical organization most relevant to Cameron Smith’s title?”

Short of teleology, not obvious what defines “better evolution”. Is natural selection, red in tooth and claw, at its “best” when more or fewer individuals die? Presumably the point here is some sort of efficiency argument – that more changes can happen per generation, thus more closely tracing the waveform of change in the environment? Or would it rather be better to accommodate environmental drift by evolving insensitivity to the external changes? Both evolutionary strategies seem equally “good”.

Jumping back to the software metaphor, it is certainly true that well separated (decomposable) interfaces permit independent development to occur simultaneously on different layers. One interesting aspect is that once the levels are separated different methodologies can be applied to different levels, e.g., kernel programmers and application developers may use different tools and engineering paradigms.

Hi Rob. I think you are correct. This is definitely an efficiency argument. I have stated this explicitly here (and so has Herbert Simon in the papers of his I referenced):

Rob wrote:

There’s been a lot of work on the evolution of evolvability—I’d hope that some of the more theoretical work along these lines would try to tackle that question.

Of the work I am aware of in this field, what I hope is that we will find out that the computational models that have been used to investigate evolvability and robustness are unnecessarily complicated. The reason I hope for this is that these models have not seemed to reveal, though many would disagree with me, clear, universal principles but are essentially as caveat-laden as most experiments in biology.

Neat post. I’m looking forward to the next one. For those who haven’t seen it, the American Journal of Physics just published a Resource Letter in the August issue on Complex Systems by [M. E. J. Newman](http://ajp.aapt.org/resource/1/ajpias/v79/i8/p800_s1?isAuthorized=no). Unfortunately, it’s behind a pay wall and there doesn’t seem to be an arXiv version I can find. But for those with library access it might be worth checking out.

Thank you for the link Dan. There were a few references there I hadn’t seen, and it’s nice to have this set organized in one place.

Newman’s paper is here.

It seems as though path is the salient feature: the particular path that emerges through the mutual accommodation of the Traveler and the Terrain. It is the dynamical integrity of the little knots in the path (‘stable intermediates’) that knits it into more complex and enduring, hierarchical structures.

What makes the equation particularly complex is that the Traveler is always part Terrain and the Terrain is always part Traveler.

The utility of this metaphoric view may not be readily apparent, but it is very much rooted in earthy example.

Thanks for your website.

Hi Don. I think that is a very nice way of stating the problem. One thing I am interested in is trying to understand how we can formalize the notion you have beautifully described. If we can do this, we may be able to learn how to reliably manipulate complex systems.

One of the more extensive dynamical “knots” affecting life on earth is the orbital mechanics that produces diurnal rhythms. It would be interesting to know how evolution would have been affected if one side of the earth always faced the sun.

One documented feature of evolution is that organisms can begin at very different places and come to occupy the same ecological niche (forget the name for that). Seems like this is also true of concepts.

It may be inelegant to ask, but clicking on my name here will bring up “Information: A Field Study”. Curious to know if it has any utility in this discussion. Regards.

Don wrote:

You might be talking about convergent evolution, which can lead distantly related organisms to adopt similar body forms:

Convergent evolution gives rise to traits that are analogous rather than truly homologous. For example, wings have evolved separately several times:

John, I confess that seeing your reply was like a second sunrise this morning, thank you. And yes convergent evolution is the term I was looking for, not so much as to similar internal body forms, but more in relation to the external ecological niche the organism occupies. The marsupial related sabre-toothed Thylacosmilus (“pouch sabre”) would be an example.

In any case, my point was that there are instances where the evolution of ideas can begin in very different places and come to similar conceptual terrains. The ultimate goal is always the utility of being able to say with some certainty that “this” is like “that”, that this notion/equation/iconograph is in some measure congruent with whatever slice of creation is presently under the microscope.

There is no escaping the truth that “one man’s ceiling is another man’s floor”. I don’t want to muddy the water here with spurious or sophomoric notions. My apology if that’s the case.

I am drawn to this discussion by a line of inquiry that began literally shovel in hand, not much of a recommendation given the marvel of mathematical sophistication evident here. (While I have only a Braille-like appreciation of the rough textures of mathematical argument, I am truly in awe of its accomplishment and potential)

Anyway the hand that held the shovel had also turned the pages of Howard T. Odum’s pioneering book on a general systems view of ecology, Environment, Power and Society, 1971 and one thing led to another. So now, knocking on the ceiling, I would like to offer some defense of my little sandbox treatise and make a case for its utility in this discussion:

1) It has at least some heuristic utility as a model/parable that argues from work-a-day phenomenon to a graphic representation of the elemental ‘stitch’ in the complex weave of living systems.

2) Whereas Herbert Simon uses the term ‘stable intermediates’ I have used the term ‘device’ (from the French root, ‘to divide’). This helps remind us that ‘stable intermediates’ have an integral energy regimen that is diverted and somewhat sequestered from larger energy dynamics and that, at least in biological systems, this diversion is accomplished by a particular material form or ‘device’.

3) It’s a view that is richly hierarchical, a complex nested network of energy pathways and the turning points thereof.

4) It has potential for the application of metrics and mathematical modeling.

Well that’s as good as it gets. Not sure how far the present moment of Azimuth’s inquiry has moved beyond these ideas, but thanks for a place to put them forward. Regards.

I’ve joined John Baez on his blog Azimuth. My first in a series of posts on hierarchical organization as an organizing principle of biological evolution showed up there today. My favorite part of this post is actually this quote from Carl Sagan linked to from the pale blue dot picture of the Earth …

I’d be interested if anyone knows of serious work on ‘hierarchical near block-diagonal linear differential equations’ like the one Cameron presented here:

$$\frac{d T}{d t} = A T,$$

where the matrix $A$ is nearly block diagonal, with each block being nearly block diagonal, etc.

Because they’re linear, these equations are easy to study. But because they’re examples of hierarchical structures, there could be interesting questions to ask. For example: how well can you approximate their solutions by ignoring the lower levels of the hierarchy? How does information flow up and down the hierarchy? And so on.
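One crude numerical probe of the first question: integrate such a system with and without its weakest couplings, and compare. The sketch below uses made-up coupling strengths and adds a decay term on the diagonal (my assumption, to keep the dynamics bounded), with plain forward Euler integration:

```python
import numpy as np

n = 12
A = np.full((n, n), 0.01)            # weakest couplings (between halves)
for i in range(0, n, 6):
    A[i:i+6, i:i+6] = 0.1            # mid-level couplings
for i in range(0, n, 3):
    A[i:i+3, i:i+3] = 1.0            # strong within-group couplings
np.fill_diagonal(A, -3.0)            # decay term keeps solutions bounded

A_trunc = np.where(A == 0.01, 0.0, A)   # ignore the weakest couplings

def integrate(M, x0, dt=1e-3, steps=2000):
    """Forward Euler for dx/dt = M x."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (M @ x)
    return x

rng = np.random.default_rng(1)
x0 = rng.uniform(0.0, 1.0, n)
err = np.linalg.norm(integrate(A, x0) - integrate(A_trunc, x0))
print("error from ignoring the lowest level:", err)
```

With these values the discrepancy stays small relative to the initial state, consistent with the intuition that the $\epsilon_2$-scale couplings only matter over long times.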

The particular equation Cameron discussed is a discretized version of the heat equation. Heat flows from room to room, but the rooms are organized in groups, which are organized in larger groups. So, the rooms are actually leaves of a tree: a tree that describes the hierarchical structure of the problem.

I can imagine having studied this kind of equation already.

John,

I think this is related to the study of local computation. The computer science guys are interested in making the computations as efficient as possible and use ideas such as join trees and semi-ring algebra structures to group the valuations together. This is not approximation-based though.

In addition to WebHubTel’s comment, here’s one that may be too specific. For equations like the heat equation, one has the Fast Gauss Transform. The idea is that the key component is calculating the sum of the Gaussian interactions between a group of points and a different point. Because the Gaussian function decays so quickly, one can divide the space into rectangular boxes and need only sum up “close enough” boxes to get very, very accurate results. The other very clever trick is to note that one can approximate a Gaussian for small arguments with a few terms of its series expansion; one can then “shift” the series to a common variable, getting a series again, and add them all together to get one function that is a very, very good approximation to the sum of Gaussians. This function can be evaluated for many nearby points, and overall this is a significant savings in computational effort. This generalises into what are called “fast multipole methods” for certain types of function (e.g., another example that comes to mind is calculating gravitational motions of many, many bodies).

However, this is based on exploiting full details of the interaction function in order to do “almost full accuracy” simulation, whereas I think Cameron’s problem is looking for more about high-level features of solutions.
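The “close enough boxes” part of the idea is easy to see even in one dimension: since $e^{-r^2}$ decays so fast, discarding distant sources barely perturbs the sum. A toy sketch (cutoff radius chosen arbitrarily; this is not a real fast Gauss transform):

```python
import math
import random

random.seed(2)
sources = [random.uniform(0.0, 10.0) for _ in range(2000)]
target = 5.0
cutoff = 3.0   # arbitrary: only sources within this distance contribute

# Exact sum of Gaussian interactions at the target point...
exact = sum(math.exp(-(target - s) ** 2) for s in sources)

# ...versus the sum over "close enough" sources only.
near = sum(math.exp(-(target - s) ** 2)
           for s in sources if abs(target - s) <= cutoff)

rel_err = abs(exact - near) / exact
print("relative error from dropping far sources:", rel_err)
```

Every dropped term is bounded by $e^{-9} \approx 1.2 \times 10^{-4}$, so the relative error is tiny even though nearly half the sources are ignored.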

Beyond hierarchy, this form suggests (to me) some form of locality: the fact that subsystems mostly interact only with neighboring subsystems. But I guess that hierarchy must imply some kind of locality.

Also, I am seeing a growing interest in finding numerically efficient ways to solve differential equations in which the system matrices are sparse. Intuitively, sparsity could make it easier to distribute the simulation on parallel machines. But this is really not my area, so I am not sure.

That example with rooms reminded me of asymptotic methods used to solve differential equations. As an introduction I can only suggest the Wikipedia article.

This method is widespread in hydrodynamics and, as far as I know, in kinetic theory. It yields approximate analytical solutions, often with great accuracy. In hydrodynamics it frequently takes the form of matched expansions over inner and outer regions, and there are not necessarily only two such regions.

Despite my irrational and weird dislike for differential equations in physics, I really was stunned by the power of the method of matched asymptotic expansions.

You should take a look at this paper by McInerney and Farmer from Santa Fe:

“The Role of Design Complexity in Technology Improvement”

They are basically extending the ideas of Muth and Simon in interesting ways. I am trying to use these in my real job, because it is all about how to do a job faster and more efficiently. The basic tool is the Design Structure Matrix, which is the same thing as the rooms example.

Another comment that may be too specific: again, the key to numerically solving the exact “constant hierarchical block structured matrix” equation that Cameron posted is evaluating the product of such a matrix with a vector, i.e., evaluating a finite vector under a finite linear mapping. Now obviously a “coordinate-based” way of looking at this is that each component of the input vector results in a linear combination of the output basis vectors.

So if we apply a 2-D discrete wavelet transform to the matrix, in this particular case the 2-D discrete Haar transform, “all” we’re doing is choosing a different (but common) basis for both the input space and the output space (which in a differential equation happen to actually be the same). But in this “in–out” basis combination such a matrix will be very sparse (most of the entries will be 0). So if one knows anything useful about inferring phenomena in a set of sparse differential equations, this can be transferred to this “constant hierarchical block structured matrix” system.

This doesn’t directly address information flow up and down “detail scales”, but I bet people in the wavelet community have thought about it! (Ironically I didn’t initially think of this because with wavelets one often thinks about the boundaries between different regions as being important, whilst the description was in terms of the constantness being important apart from at some block boundaries.)
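The sparsification claim is easy to check numerically. The sketch below builds a 16×16 cousin of Cameron’s matrix (the discrete Haar transform wants a power-of-two size; the coupling values are made up) and counts the entries that survive conjugation by the Haar matrix:

```python
import numpy as np

def haar(n):
    """Orthonormal discrete Haar transform matrix; n must be a power of 2."""
    if n == 1:
        return np.array([[1.0]])
    h = haar(n // 2)
    averages = np.kron(h, [1.0, 1.0])                # coarser scales
    details = np.kron(np.eye(n // 2), [1.0, -1.0])   # finest-scale differences
    return np.vstack([averages, details]) / np.sqrt(2.0)

# A 16x16 cousin of Cameron's matrix: 4x4 blocks of strength a nested in
# 8x8 blocks of strength eps1, with the two halves coupled by eps2.
n, a, eps1, eps2 = 16, 1.0, 0.1, 0.01
A = np.full((n, n), eps2)
for i in range(0, n, 8):
    A[i:i+8, i:i+8] = eps1
for i in range(0, n, 4):
    A[i:i+4, i:i+4] = a

H = haar(n)
B = H @ A @ H.T                      # the 2-D Haar transform of A
nonzeros = int(np.sum(np.abs(B) > 1e-12))
print(nonzeros, "of", n * n, "entries survive")
```

Because $A$ is constant on dyadic blocks, only a handful of coarse-scale coefficients survive out of 256 entries, and since $H$ is orthogonal the transform is exactly invertible.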

One of the interesting discussions I am having covers the residence time of CO2 in the atmosphere. This could be just a definitional disagreement but the arguments range from a residence time of a few years with a thin-tail, to numbers ranging from 100’s to 1000’s of years with a fat power-law tail.

The complexity arises from considerations of what exactly prevents the CO2 from settling back into a steady state. Is it the rarity of permanent sequestration sites? Or does the earth just adapt quickly through buffer zones?

In case people are interested, I have been testing some of my ideas out on the ClimateEtc blog. It’s not for the weak-hearted though because the attacks come from all angles.

The latest approach I have been trying is a sparse matrix of coupled rate equations which models slab layers as a compartment or box model.

This is the last chart I offered up:

CO2 residence time discussion thread

“It might be useful to work explicitly on connecting theories that tell us about particular levels of organization in order to attempt to develop some theories that transcend levels of organization.”

I am looking forward to your next parts.

Thank you Uwe. If it is even possible, this is an enormous task that I do not pretend to have the capacity to accomplish alone. Some might argue that statistical physics is an example where effective theories have been formulated that transcend multiple scales. I think this is true; however, it requires that the objects at each scale be treated as identical, even if distinguishable. Though objects can access different states within a set, they cannot access orthogonal state sets. It seems in biology we have an analog of phase transitions where some of the objects that coalesce to form an ensemble can access different state sets. I would like anyone with more expertise in statistical physics than I have to criticize this statement as I might be a Dunning-Kruger victim here.

To what extent are hierarchies in biology defined by their energy spectrum? Must there be energy gaps between levels, where few interactions occur, for a hierarchy to exist? Or is this stretching the physics analogy too far?

The watchmaker parable seems to argue that it’s more efficient to evolve a factory that mass-produces watch parts than to repeatedly evolve watch parts. For example, both watchmakers bought metal for the gears; they didn’t make it themselves. This seems true in biology; I wonder what the corresponding argument in physics is!

It’s interesting that you ask this Jon:

I had stated it as such in an earlier version of this post that didn’t make the cut. The Farre paper I cite gives some insight on this, but is not focused on biology. My suspicion is that it is indeed “stretching the physics analogy too far.” One problem I see at hand is to figure out, if that is so, precisely why it is.

If you want to read a philosopher on complex systems and robustness, then William Wimsatt is one of your best bets. In 2007 he brought out Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality, Cambridge: Harvard University Press. You can read chap. 10 here.

Thank you very much David for this reference. I have heard of Wimsatt, but have not read his work. I would usually be vehemently opposed to such, but that book might be worth buying just for the cover! This reference also looks quite relevant to me:

Wimsatt, W.C. Aggregate, composed, and evolved systems: Reductionistic heuristics as means to more holistic theories. Biology & Philosophy 21, 667–702 (2007).

I am reminded to quote Leibniz via Chaitin in my attitude toward this type of work:

…where I would expand mathematics to include science

This continues to be an interesting thread that I’ve been returning to roughly daily. Speaking of diurnal effects, someone was musing about evolutionary trends on tidally locked planets. Readers may be interested to learn that a historic proposal to disconnect clocks worldwide (UTC) from Earth’s rotation will be voted on in Geneva in January 2012. A meeting has been organized (http://futureofutc.org/) to discuss the wide-ranging implications.

Regarding connections between convergent evolution and hierarchical theory, it seems to me that the pictures provided by John likely represent two distinct paradigms of convergence. Wings are certainly analogous structures – just add insect wings to the pictures of the varying wing structures of the tetrapod reptiles, birds and mammals to see this. (Although the word “convergent” per se may be restricted to structures similar in appearance, not function.)

But convergence as in succulents may reflect deep similarities in genetic structuring, as with the homeobox genes of Evo Devo. (Or if not succulents, perhaps other examples.) My knowledge here is that of a programmer reading Sean B. Carroll’s excellent books, but the examples in those books of homologous functions and structures across widely separated species align well with similar computer science design issues. A change in the kernel can effect changes perceptible to users.

The Arizona-Sonora Desert Museum has a wonderful garden of convergent cacti and euphorbia. The ultimate top-level hierarchy on planet Earth over the past few hundred million years has been continental drift. Cacti are from the New World; euphorbias are primarily from Africa. I recommend this TED talk for perspective, both large and small:

The nanoscale can create emergent similarities on the macroscale. The petascale will in turn influence the evolution of the nanoscale – given enough time.

There is a very interesting-looking thesis that your posts keep reminding me of, Rob.

Mills, R. How micro-evolution can guide macro-evolution: multi-scale search via evolved modular variation. (2010).

I would love to read it in detail if I can just make the time!

Hi,

I realize I am heavily necroing this post, but I wanted to share a recent thought. Since reading this post, I have been thinking about it in terms of categories as interfaces. You know, the thing about interfaces is that you can always cover an interface with another interface. Here is a thought about the simplest of all interfaces:

An apparatus has knobs, buttons and a readout. Those things comprise an interface. Furthermore, this interface is itself a category in the sense that turning a knob is a structure preserving map of the interface. (Very technically, we start with a monoidal category and produce the category of internal comonoids and then, humorously, categories themselves are monads in this category…eek!).

Turning the knob on your apparatus is a really complicated morphism. We generally liken it to a point in a 1-d manifold that is also a field, i.e. the real numbers. That’s way too much structure. Furthermore, there was a time in the history of our species when we had no apparatus at all. My feeling is that the apparatus has evolved over time and increased in complexity. At first, however, the apparatus was really simple.

How simple? How about a category with just one morphism. How is that for simple?!

So how do we interpret the category with just one morphism? Well, in the category of categories, the one morphism category is useful because any functor to it is a tool for saying “Hey! I have a transformation! Eh! There’s a transformation here!”. If we interpret the transformation as an event in our universe like the decoupling of EM radiation from the early cosmic soup, then our little one-morphism category is not very expressive. It only tells us that something happened. It does not tell us what happened.
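To make the one-morphism category concrete, here is a small sketch in code. The `FiniteCategory` class and all names in it are my own illustration, not any standard library; the one-morphism category is the terminal category: a single object with only its identity arrow.

```python
# A minimal encoding of a finite category as plain data:
# named objects, named arrows with sources and targets,
# and an explicit composition table.

class FiniteCategory:
    def __init__(self, objects, arrows, table):
        # arrows: name -> (source, target); table: (g, f) -> "g after f"
        self.objects = objects
        self.arrows = arrows
        self.table = table

    def compose(self, g, f):
        # g ∘ f is defined only when the target of f is the source of g
        assert self.arrows[f][1] == self.arrows[g][0], "arrows not composable"
        return self.table[(g, f)]

# The terminal category: one object "*" and one arrow, its identity.
# Any functor into it forgets everything except "a transformation is here".
one = FiniteCategory(
    objects={"*"},
    arrows={"id": ("*", "*")},
    table={("id", "id"): "id"},
)

print(one.compose("id", "id"))  # the only possible composite
```

The “click” category with “noise on” and “noise off” would be the same data with more arrows and a larger composition table, which is exactly where the bookkeeping (and the expressiveness) starts to grow.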

Anton Zeilinger gave a talk once in which he expressed respect for the succinctness of a finite physics based on detector clicks. Simply put, a click is an event. But a click is not quite an event in a one-morphism category. A click is more like a noise that is at one time there and then not there. It is more like a category with two morphisms: “Noise on” and “Noise off”. These are transformations of the same object and they are also inverses of one another.

So, what about our little one-morphism category? It seems somewhat more fundamental to the physics of causality. This category is like a pregnancy test. If the stick changes colour, then something happened. Otherwise, no baby. If the test changes colour, you throw it in the garbage and consider what to tell your husband.

Now that you know what the one morphism category is used for, you can start to think of richer structure….more morphisms!

For those of you who are really keen, you might consider the role of those categories that have no finite set of axioms, otherwise called locally finitely presentable.

Thank you for sharing your thoughts here Bem. I don’t mean to be rude, but if you’d tell me who you are I’d like very much to continue this conversation perhaps outside of this comment thread. “Bem” doesn’t give me much to go on. You should be able to find my e-mail address if you click my name or if you are involved in the Azimuth forum we could discuss it there.

It might not matter, but I’m not sure how you would define an interface. It seems to me that in the example you give you’re already looking at an interface that lies on top of a lot of other interfaces. That could just be due to something idiosyncratic in the manner that I conceptualize interfaces. If we dig down to what is physically being used to represent information in the device you have conceived, then I think we have to build a lot of interfaces on interfaces to get up to the point of knobs, buttons, and a readout.

In any case, I think there are at least a few different ways to imagine representing functional hierarchies using categories. You could think of hierarchical levels as being limits or colimits within a particular category. This seems to be the approach of Ehresmann (Andrée, not Charles).

Another option I can imagine is to think about how higher-order morphisms (e.g. functors or natural transformations) could emerge from lower-order ones. The latter may involve defining a process operating on a category. Even if you start with something like a monoidal category, where you can internally represent the interplay between serial and parallel processes, it seems to also be necessary to define a meta-process that describes how those processes themselves evolve in time. If we define a meta-process a priori, however, then we cannot claim that it emerged from an implicit representation embedded in lower-level processes. To me this sounds impossible on the surface, but without it we don’t have a very good intuitive match to our current understanding of biological evolution.

Hey Cameron,

My name is Ben Sprott, and I have included my home page with this post. I am posting a few ideas at the nForum. Is there a bio or complexity forum you like? I am interested in seeing what other bio people might be doing, and we could start a thread there.

Thank you for providing your coordinates Ben! I started a thread on the forum for us and anyone else who is interested to continue the discussion. I would continue here, but I prefer the convenient syntax over on the forum.

Since you’re both already on the Azimuth Forum, I encourage you to talk about your ideas there! Or here – that’s fine too. Anyway, I’d like to listen in.

Michael Polanyi, Life’s Irreducible Structure (“Live mechanisms and information in DNA are boundary conditions with a sequence of boundaries above them”), Science, New Series, 160 (3834), June 21, 1968, 1308–1312.

http://www.compilerpress.ca/Competitiveness/Anno/Anno%20Polanyi%20Lifes%20Irreducible%20Structure%20Acience%201968.htm

This article presents some ideas that are relevant to your topic.

Thanks, I’ll take a look!