This book could be interesting. If you read it, could you tell us what you think?

• Gordon Woo, *Calculating Catastrophe*, World Scientific Press, Singapore, 2011.

Apparently Dr. Gordon Woo was trained in mathematical physics at Cambridge, MIT and Harvard, and has made his career as a ‘calculator of catastrophes’. He has consulted for the IAEA on the seismic safety of nuclear plants and for BP on offshore oil well drilling—it’ll be fun to see what he has to say about his triumphant success in preventing disasters in *both* those areas. He now works at a company called Risk Management Solutions, where he models catastrophes for insurance purposes, and has designed a model for terrorism risk.

According to the blurb I got:

This book has been written to explain, to a general readership, the underlying philosophical ideas and scientific principles that govern catastrophic events, both natural and man-made. Knowledge of the broad range of catastrophes deepens understanding of individual modes of disaster. This book will be of interest to anyone aspiring to understand catastrophes better, but will be of particular value to those engaged in public and corporate policy, and the financial markets.

The table of contents lists: Natural Hazards; Societal Hazards; A Sense of Scale; A Measure of Uncertainty; A Matter of Time; Catastrophe Complexity; Terrorism; Forecasting; Disaster Warning; Disaster Scenarios; Catastrophe Cover; Catastrophe Risk Securitization; Risk Horizons.

Maybe you know other good books on the same subject?

For a taste of his thinking, you can try this:

• Gordon Woo, Terrorism risk.

Terrorism sounds like a particularly difficult risk to model, since it involves intelligent agents who try to do unexpected things. But maybe there are still some guiding principles. Woo writes:

It turns out that the number of operatives involved in planning and preparing attacks has a tipping point in respect of the ease with which the dots might be joined by counter-terrorism forces. The opportunity for surveillance experts to spot a community of terrorists, and gather sufficient evidence for courtroom convictions, increases nonlinearly with the number of operatives – above a critical number, the opportunity improves dramatically. This nonlinearity emerges from analytical studies of networks, using modern graph theory methods (Derenyi et al. [21]). Below the tipping point, the pattern of terrorist links may not necessarily betray much of a signature to the counter-terrorism services. However, above the tipping point, a far more obvious signature may become apparent in the guise of a large connected network cluster of dots, which reveals the presence of a form of community. The most ambitious terrorist plans, involving numerous operatives, are thus liable to be thwarted. As exemplified by the audacious attempted replay in 2006 of the Bojinka spectacular, too many terrorists spoil the plot (Woo, [22]).

Intelligence surveillance and eavesdropping of terrorist networks thus constrain the pipeline of planned attacks that logistically might otherwise seem almost boundless. Indeed, such is the capability of the Western forces of counterterrorism, that most planned attacks, as many as 80% to 90%, are interdicted. For example, in the three years before the 7/7/05 London attack, eight plots were interdicted. Yet any non-interdicted planned attack is construed as a significant intelligence failure. The public expectation of flawless security is termed the ‘90-10 paradox.’ Even if 90% of plots are foiled, it is by the 10% which succeed that the security services are ultimately remembered.

Of course the reference to “modern graph theory methods” will be less intimidating or impressive to many readers here than to the average, quite possibly innumerate reader of this document. But here’s the actual reference, in case you’re curious:

• I. Derenyi, G. Palla and T. Vicsek, Clique percolation in random networks, *Phys. Rev. Lett.* **94** (2005), 160202.

Just for fun, let me summarize the main result, so you can think about how relevant it might be to terrorist networks.

A graph is roughly a bunch of dots connected by edges. A **clique** in a graph is some subset of dots each of which is connected to every other. So, if dots are people and we draw an edge when two people are friends, a clique is a bunch of people who are all friends with each other—hence the name ‘clique’. But we might also use a clique to represent a bunch of people who are all engaged in the same activity, like a terrorist plot.
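To make the definition concrete, here’s a toy check in Python. The people and friendships are made up for illustration:

```python
# A clique is a set of vertices every pair of which is connected.
# Here vertices are people and edges are friendships (invented data).
from itertools import combinations

friends = {("alice", "bob"), ("alice", "carol"),
           ("bob", "carol"), ("carol", "dave")}

def is_clique(people):
    """True if every pair in `people` is connected by a friendship edge."""
    return all((a, b) in friends or (b, a) in friends
               for a, b in combinations(people, 2))

print(is_clique(["alice", "bob", "carol"]))  # True: all three pairs are friends
print(is_clique(["alice", "bob", "dave"]))   # False: alice and dave aren't friends
```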

We’ve talked here before about Erdős–Rényi random graphs. These are graphs formed by taking a bunch of dots and randomly connecting each pair by an edge with some fixed probability p. In the paper above, the authors argue that for an Erdős–Rényi random graph with N vertices, the chance that most of the cliques with k elements all touch each other and form one big fat ‘giant component’ shoots up suddenly when

p ≈ [(k − 1)N]^(−1/(k−1))
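If you’re curious, you can probe this threshold numerically. Here’s a rough sketch using Python’s networkx library, whose `k_clique_communities` function implements the clique percolation method of these same authors. The threshold formula p_c(k) = [(k − 1)N]^(−1/(k−1)) is my reading of Derényi et al., and the graph size and trial count are arbitrary choices:

```python
# Sketch: measure the size of the largest k-clique community in G(N, p)
# below, at, and above the predicted clique percolation threshold.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

N, k = 200, 3
p_c = ((k - 1) * N) ** (-1.0 / (k - 1))  # predicted threshold from Derenyi et al.

def largest_fraction(p, trials=5):
    """Average fraction of vertices in the largest k-clique community of G(N, p)."""
    total = 0.0
    for seed in range(trials):
        G = nx.gnp_random_graph(N, p, seed=seed)
        comms = list(k_clique_communities(G, k))
        total += max((len(c) for c in comms), default=0) / N
    return total / trials

for p in (0.5 * p_c, p_c, 2 * p_c):
    print(f"p = {p:.3f}: largest community covers {largest_fraction(p):.2f} of vertices")
```

Well below p_c the 3-clique communities stay tiny; well above it one community suddenly swallows most of the graph—the ‘far more obvious signature’ Woo describes.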

This sort of effect is familiar in many different contexts: it’s called a ‘percolation threshold’. I can guess the implications for terrorist networks that Gordon Woo is alluding to. However, I doubt the details of the math are very important here, since social networks are *not* well modeled by Erdős–Rényi random graphs.

In the real world, if you and I have a mutual friend, that will increase the chance that we’ll be friends. Similarly, if we share a conspirator, that increases the chance that we’re in the same conspiracy. But in a world where friendship was described by an Erdős–Rényi random graph, that would not be the case!
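One way to see this difference concretely is to compare average clustering coefficients—the chance that two neighbors of a vertex are themselves connected. In G(N, p) edge choices are independent, so clustering is just p, while models built to mimic social networks cluster far more. A sketch with networkx (the sizes and parameters are arbitrary):

```python
# Sketch: Erdos-Renyi graphs have clustering ~ p; a Watts-Strogatz
# small-world graph with comparable average degree clusters far more.
import networkx as nx

N, p = 1000, 0.01
G = nx.gnp_random_graph(N, p, seed=0)          # Erdos-Renyi: edges independent
H = nx.watts_strogatz_graph(N, 10, 0.1, seed=0)  # small-world: friends of friends

print(nx.average_clustering(G))  # close to p = 0.01
print(nx.average_clustering(H))  # much larger
```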

So, while I agree that large terrorist networks are easier to catch than small ones, I don’t think the math of Erdős–Rényi random graphs gives any *quantitative* insight into *how much* easier it is.

This post reminded me of an article in the *New York Times* from August 2007: “In Nature’s Casino.”

Maybe I should mention that one central point in the German debate for and against nuclear power was that the companies operating the power plants were unable to get full insurance coverage for the plants, even for foreseeable incidents like a plane crash.

This put politicians who had to explain why the plants are perfectly safe in an awkward position, since no private insurance company was willing to sign such a contract. This was especially aggravating for some people after the big financial crisis: “Once again the profits go to the big companies while the taxpayer has to cover all the risks!”

Understanding distributions like this is much easier from the point of view of simple dispersion following a growth function, such as a diminishing-return learning-curve model. For scientific citation links the agreement is remarkable:

http://img807.imageshack.us/img807/9771/scicitations.gif

It also works pretty well for web links

http://img812.imageshack.us/img812/7305/weblinks.gif

A learning curve model is essentially the growth model:

dg/dt = k/g

which says that growth is very easy to come by initially but slows down in inverse proportion to the cumulative growth. Maximum entropy dispersion is then applied to the growth rate proportionality constant and also to the overall effort expended. This is so simple to explain, yet I haven’t seen this behavior referenced anywhere.
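For what it’s worth, this growth model can be solved in closed form: dg/dt = k/g integrates to g(t) = sqrt(g0² + 2kt), i.e. square-root growth that slows as g accumulates. A quick numerical check, with arbitrary constants:

```python
# Sketch: forward-Euler integration of dg/dt = k/g, compared against the
# exact solution g(t) = sqrt(g0**2 + 2*k*t). Constants are illustrative.
from math import sqrt

k, g0, dt, T = 2.0, 1.0, 1e-4, 5.0
g, t = g0, 0.0
while t < T:
    g += dt * k / g   # dg = (k/g) dt
    t += dt

print(g, sqrt(g0**2 + 2 * k * T))  # numerical vs. closed-form value
```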

I agree, John, with your final conclusion. But you can also do this in Sage:

g = graphs.RandomGNP(10, 0.2)  # 10 is the number of vertices, 0.2 is the fixed probability p

# this returns the clique number (order of the largest clique) and the list of cliques
g.clique_number(), g.cliques()

one test run:

(4, [[6, 0, 5, 3], [6, 0, 7, 3], [6, 0, 7, 4], [6, 0, 7, 8], [6, 1, 2], [6, 2, 4], [6, 2, 5], [6, 2, 8], [9, 0, 3], [9, 0, 4], [9, 0, 8], [9, 1, 2], [9, 2, 4], [9, 2, 8]])

the test run above was done with p = 0.6 and 10 vertices

Here’s an example of a terrorist plot that got caught. It illustrates what seems to be a very basic point that doesn’t require ‘Erdős–Rényi random graphs’ or ‘percolation thresholds’ to understand:

the bigger a conspiracy, the more likely it is to get caught. The more people are involved, the greater the chance that a small mistake will occur.

• Carrie Johnson, Hearing to examine terrorist recruitment in prisons, *Morning Edition*, National Public Radio, 15 June 2011.

Of course it’s obviously dumb to use bank robberies to fund a terrorist attack, unless there’s no other way.