Here you can see the slides of a talk I’m giving:

• The dodecahedron, the icosahedron and E_{8}, Annual General Meeting of the Hong Kong Mathematical Society, Hong Kong University of Science and Technology.

It’ll take place at 10:50 am on Saturday May 20th in Lecture Theatre G. You can see the program for the whole meeting here.

The slides are in the form of webpages, and you can see references and some other information tucked away at the bottom of each page.

In preparing this talk I learned more about the geometric McKay correspondence, which is a correspondence between the simply-laced Dynkin diagrams (also known as ADE Dynkin diagrams) and the finite subgroups of SU(2).

There are different ways to get your hands on this correspondence, but the *geometric* way is to resolve the singularity in ℂ²/Γ, where Γ ⊂ SU(2) is such a finite subgroup. The variety ℂ²/Γ has a singularity at the origin–or more precisely, at the point coming from the origin in ℂ². To make singularities go away, we ‘resolve’ them. And when you take the ‘minimal resolution’ of this variety (a concept I explain here), you get a smooth variety S with a map

π : S → ℂ²/Γ

which is one-to-one except at the origin. The points that map to the origin lie on a bunch of Riemann spheres. There’s one of these spheres for each dot in some Dynkin diagram—and two of these spheres intersect iff their two dots are connected by an edge!

In particular, if Γ is the double cover of the rotational symmetry group of the dodecahedron, the Dynkin diagram we get this way is E_{8}:

The basic reason E_{8} is connected to the icosahedron is that the icosahedral group is generated by rotations of orders 2, 3 and 5, while the E_{8} Dynkin diagram has ‘legs’ of lengths 2, 3 and 5 if you count right:

In general, whenever you have a triple of natural numbers p, q, r obeying

1/p + 1/q + 1/r > 1

you get a finite subgroup of SO(3) that contains rotations of orders p, q and r, and a simply-laced Dynkin diagram with legs of lengths p, q and r. The three most exciting cases are:

• (p, q, r) = (2, 3, 3): the tetrahedron, and E_{6}.

• (p, q, r) = (2, 3, 4): the octahedron, and E_{7}.

• (p, q, r) = (2, 3, 5): the icosahedron, and E_{8}.
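These three cases can be checked by brute force. Here's a quick sketch (restricting to p ≥ 2, and cutting the search off at r ≤ 30, since the dihedral family (2, 2, r) goes on forever):

```python
from fractions import Fraction

# Enumerate triples p <= q <= r of natural numbers (all at least 2)
# with 1/p + 1/q + 1/r > 1, using exact rational arithmetic.
triples = [(p, q, r)
           for p in range(2, 31)
           for q in range(p, 31)
           for r in range(q, 31)
           if Fraction(1, p) + Fraction(1, q) + Fraction(1, r) > 1]

# Besides the dihedral family (2, 2, r), only three cases survive:
exceptional = [t for t in triples if t[:2] != (2, 2)]
print(exceptional)  # → [(2, 3, 3), (2, 3, 4), (2, 3, 5)]
```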

But the puzzle is this: why does resolving the singular variety ℂ²/Γ give a smooth variety with a bunch of copies of the Riemann sphere sitting over the singular point at the origin, with these copies intersecting in a pattern given by a Dynkin diagram?

It turns out the best explanation is in here:

• Klaus Lamotke, *Regular Solids and Isolated Singularities*, Vieweg & Sohn, Braunschweig, 1986.

In a nutshell, you need to start by blowing up ℂ² at the origin, getting a space containing a copy of ℂP¹ on which Γ acts. The quotient of this space by Γ has further singularities, coming from the rotations of orders p, q and r in Γ. When you resolve these, you get more copies of ℂP¹, which intersect in the pattern given by a Dynkin diagram with legs of length p, q and r.

I would like to understand this better, and more vividly. I want a really clear understanding of the minimal resolution of ℂ²/Γ. For this I should keep rereading Lamotke’s book, and doing more calculations.

I do, however, have a nice vivid picture of the singular space ℂ²/Γ itself. For that, read my talk! I’m hoping this will lead, someday, to an equally appealing picture of its minimal resolution.


and there are about two dozen allotropes of sulfur, with a phase diagram like this:

So I should have guessed that sulfur and phosphorus combine to make many different compounds. But I never thought about this until yesterday!

I’m a great fan of diamonds, not for their monetary value but for the math of their crystal structure:

In a diamond the carbon atoms do not form a lattice in the strict mathematical sense (which is more restrictive than the sense of this word in crystallography). The reason is that there aren’t translational symmetries carrying any atom to any other. Instead, there are two lattices of atoms, shown as red and blue in this picture by Greg Egan. Each atom has 4 nearest neighbors arranged at the vertices of a regular tetrahedron; the tetrahedra centered at the blue atoms are ‘right-side up’, while those centered at the red atoms are ‘upside down’.

Having thought about this a lot, I was happy to read about adamantane. It’s a compound with 10 carbons and 16 hydrogens. There are 4 carbons at the vertices of a regular tetrahedron, and 6 along the edges—but the edges bend out in such a way that the carbons form a tiny piece of a diamond crystal:

or more abstractly, focusing on the carbons and their bonds:

Yesterday I learned that phosphorus decasulfide, P_{4}S_{10}, follows the same pattern:

The angles deviate slightly from the tetrahedral angle

arccos(−1/3) ≈ 109.47°

that we’d have in a fragment of a mathematically ideal diamond crystal, but that’s to be expected.

It turns out there are lots of other phosphorus sulfides! Here are some of them:

**Puzzle 1.** Why does each of these compounds have exactly 4 phosphorus atoms?

I don’t know the answer! I can’t believe it’s impossible to form phosphorus–sulfur compounds with some other number of phosphorus atoms, but the Wikipedia article containing this chart says

All known molecular phosphorus sulfides contain a tetrahedral array of four phosphorus atoms. P_{4}S_{2} is also known but is unstable above −30 °C.

All these phosphorus sulfides contain at most 10 sulfur atoms. If we remove one sulfur from phosphorus decasulfide we can get this:

This is the ‘alpha form’ of P_{4}S_{9}. There’s also a beta form, shown in the chart above.

Some of the phosphorus sulfides have pleasing symmetries, like the

alpha form of P_{4}S_{4}:

or the epsilon form of P_{4}S_{6}:

Others look awkward. The alpha form of P_{4}S_{5} is an ungainly beast:

They all seem to have a few things in common:

• There are 4 phosphorus atoms.

• Each phosphorus atom is connected to 3 or 4 atoms, at most one of which is phosphorus.

• Each sulfur atom is connected to 1 or 2 atoms, which must all be phosphorus.

The pictures seem pretty consistent about showing a ‘double bond’ when a sulfur atom is connected to just 1 phosphorus. However, they don’t show a double bond when a phosphorus atom is connected to just 3 sulfurs.

**Puzzle 2.** Can you draw molecules obeying the 3 rules listed above that aren’t on the chart?

Of all the phosphorus sulfides, P_{4}S_{10} is not only the biggest and most symmetrical, it’s also the most widely used. Humans make thousands of tons of the stuff! It’s used for producing organic sulfur compounds.

People also make P_{4}S_{3}: it’s used in strike-anywhere matches. This molecule is not on the chart I showed you, and it also violates one of the rules I made up:

Somewhat confusingly, P_{4}S_{10} is not only called phosphorus decasulfide: it’s also called phosphorus pentasulfide. Similarly, P_{4}S_{3} is called phosphorus sesquisulfide. Since the prefix ‘sesqui-’ means ‘one and a half’, there seems to be some kind of division by 2 going on here.


I have a new favorite molecule: adamantane. As you probably know, someone is said to be ‘adamant’ if they are unshakeable, immovable, inflexible, unwavering, uncompromising, resolute, resolved, determined, firm, rigid, or steadfast. But ‘adamant’ is also a legendary mineral, and the etymology is the same as that for ‘diamond’.

The molecule adamantane, shown above, features 10 carbon atoms arranged just like a small portion of a diamond crystal! It’s a bit easier to see this if you ignore the 16 hydrogen atoms and focus on the carbon atoms and bonds between those:

It’s a somewhat strange shape.

**Puzzle 1.** Give a clear, elegant description of this shape.

**Puzzle 2.** What is its symmetry group? This is really two questions: I’m asking about the symmetry group of this shape as an abstract graph, but also the symmetry group of this graph as embedded in 3d Euclidean space, counting both rotations and reflections.

**Puzzle 3.** How many ‘kinds’ of carbon atoms does adamantane have? In other words, when we let the symmetry group of this graph act on the set of vertices, how many orbits are there? (Again this is really two questions, depending on which symmetry group we use.)

**Puzzle 4.** How many kinds of bonds between carbon atoms does adamantane have? In other words, when we let the symmetry group of this graph act on the set of edges, how many orbits are there? (Again, this is really two questions.)
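If you want to check your answers to Puzzles 2–4 by computer, here's a brute-force sketch. It encodes the carbon skeleton abstractly: 4 ‘branch’ carbons, with one ‘bridge’ carbon joining each of the 6 pairs of branch carbons (that is, the complete graph K_{4} with every edge subdivided once). It counts graph automorphisms, which settles the abstract-graph half of Puzzle 2; the answer is printed rather than spoiled here.

```python
from itertools import combinations, permutations

# Carbon skeleton of adamantane as an abstract graph: vertices 0-3 are
# the 'branch' carbons (degree 3); each pair of them is joined through
# one 'bridge' carbon (degree 2), i.e. K_4 with every edge subdivided.
branch = range(4)
bridge = {frozenset(pair): v
          for v, pair in enumerate(combinations(branch, 2), start=4)}

edges = {frozenset((b, v)) for pair, v in bridge.items() for b in pair}

# Any automorphism must send branch carbons to branch carbons (the
# degrees differ), and then the image of each bridge carbon is forced.
count = 0
for sigma in permutations(branch):
    perm = dict(enumerate(sigma))
    for pair, v in bridge.items():
        a, b = tuple(pair)
        perm[v] = bridge[frozenset((perm[a], perm[b]))]
    if {frozenset((perm[x], perm[y])) for x, y in map(tuple, edges)} == edges:
        count += 1

print("automorphisms of the adamantane carbon graph:", count)
```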

You can see the relation between adamantane and a diamond if you look carefully at a diamond crystal, as shown in this image by H. K. D. H. Bhadeshia:

or this one by Greg Egan:

Even with these pictures at hand, I find it a bit tough to see the adamantane pattern lurking in the diamond! Look again:

Adamantane has an interesting history. The possibility of its existence was first suggested by a chemist named Decker at a conference in 1924. Decker called this molecule ‘decaterpene’, and registered surprise that nobody had made it yet. After some failed attempts, it was first synthesized by the Croatian-Swiss chemist Vladimir Prelog in 1941. He later won the Nobel prize for his work on stereochemistry.

However, long before it was synthesized, adamantane was isolated from petroleum by the Czech chemists Landa, Machacek and Mzourek! They did it in 1932. They only managed to make a few milligrams of the stuff, but we now know that petroleum naturally contains between 0.0001% and 0.03% adamantane!

Adamantane can be crystallized:

but ironically, the crystals are rather soft. It’s all that hydrogen. It’s also amusing that adamantane has an *odor*: supposedly it smells like camphor!

Adamantane is just the simplest of the molecules called diamondoids.

These are a few:

1 is **adamantane**.

2 is called **diamantane**.

3 is called **triamantane**.

4 is called **isotetramantane**, and it comes in two mirror-image forms.

Here are some better pictures of diamantane:

People have done lots of chemical reactions with diamondoids. Here are some things they’ve done with the next one, **pentamantane**:

Many different diamondoids occur naturally in petroleum. Though the carbon in diamonds is not biological in origin, the carbon in diamondoids found in petroleum is. This was shown by studying ratios of carbon isotopes.

Eric Drexler has proposed using diamondoids for nanotechnology, but he’s talking about larger molecules than those shown here.

For more fun along these lines, try:

• Diamonds and triamonds, *Azimuth*, 11 April 2016.


You can see slides here:

• Biology as information dynamics.

**Abstract.** If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’ — a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clearer, more general formulation of Fisher’s fundamental theorem of natural selection.

I’d given a version of this talk earlier this year at a workshop on Quantifying biological complexity, but I’m glad this second try got videotaped and not the first, because I was a lot happier about my talk this time. And as you’ll see at the end, there were a lot of interesting questions.


• David S. Wilson and Alan Kirman, editors, *Complexity and Evolution: Toward a New Synthesis for Economics*, MIT Press, Cambridge Mass., 2016.

You can get some chapters for free here. I’ve only looked carefully at this one:

• Joshua M. Epstein and Julia Chelen, Advancing Agent_Zero.

**Agent_Zero** is a simple toy model of an agent that’s not the idealized rational actor often studied in economics: rather, it has emotional, deliberative, and social modules which interact with each other to make decisions. Epstein and Chelen simulate collections of such agents and see what they do:

**Abstract.** Agent_Zero is a mathematical and computational individual that can generate important, but insufficiently understood, social dynamics from the bottom up. First published by Epstein (2013), this new theoretical entity possesses emotional, deliberative, and social modules, each grounded in contemporary neuroscience. Agent_Zero’s observable behavior results from the interaction of these internal modules. When multiple Agent_Zeros interact with one another, a wide range of important, even disturbing, collective dynamics emerge. These dynamics are not straightforwardly generated using the canonical rational actor which has dominated mathematical social science since the 1940s. Following a concise exposition of the Agent_Zero model, this chapter offers a range of fertile research directions, including the use of realistic geographies and population levels, the exploration of new internal modules and new interactions among them, the development of formal axioms for modular agents, empirical testing, the replication of historical episodes, and practical applications. These may all serve to advance the Agent_Zero research program.

It sounds like a fun and productive project as long as one keeps one’s wits about one. It’s hard to draw conclusions about *human* behavior from such simplified agents. One can argue about this, and of course economists will. But regardless of this, one *can* draw conclusions about which kinds of simplified agents will engage in which kinds of collective behavior under which conditions.

Basically, one can start mapping out a small simple corner of the huge ‘phase space’ of possible societies. And that’s bound to lead to interesting new ideas that one wouldn’t get from either 1) empirical research on human and animal societies or 2) pure theoretical pondering without the help of simulations.

Here’s an article whose title, at least, takes a vastly more sanguine attitude toward benefits of such work:

• Kate Douglas, Orthodox economics is broken: how evolution, ecology, and collective behavior can help us avoid catastrophe, *Evonomics*, 22 July 2016.

I’ll quote just a bit:

For simplicity’s sake, orthodox economics assumes that Homo economicus, when making a fundamental decision such as whether to buy or sell something, has access to all relevant information. And because our made-up economic cousins are so rational and self-interested, when the price of an asset is too high, say, they wouldn’t buy—so the price falls. This leads to the notion that economies self-organise into an equilibrium state, where supply and demand are equal.

Real humans—be they Wall Street traders or customers in Walmart—don’t always have accurate information to hand, nor do they act rationally. And they certainly don’t act in isolation. We learn from each other, and what we value, buy and invest in is strongly influenced by our beliefs and cultural norms, which themselves change over time and space.

“Many preferences are dynamic, especially as individuals move between groups, and completely new preferences may arise through the mixing of peoples as they create new identities,” says anthropologist Adrian Bell at the University of Utah in Salt Lake City. “Economists need to take cultural evolution more seriously,” he says, because it would help them understand who or what drives shifts in behaviour.

Using a mathematical model of price fluctuations, for example, Bell has shown that prestige bias—our tendency to copy successful or prestigious individuals—influences pricing and investor behaviour in a way that creates or exacerbates market bubbles.

We also adapt our decisions according to the situation, which in turn changes the situations faced by others, and so on. The stability or otherwise of financial markets, for instance, depends to a great extent on traders, whose strategies vary according to what they expect to be most profitable at any one time. “The economy should be considered as a complex adaptive system in which the agents constantly react to, influence and are influenced by the other individuals in the economy,” says Kirman.

This is where biologists might help. Some researchers are used to exploring the nature and functions of complex interactions between networks of individuals as part of their attempts to understand swarms of locusts, termite colonies or entire ecosystems. Their work has provided insights into how information spreads within groups and how that influences consensus decision-making, says Iain Couzin from the Max Planck Institute for Ornithology in Konstanz, Germany—insights that could potentially improve our understanding of financial markets.

Take the popular notion of the “wisdom of the crowd”—the belief that large groups of people can make smart decisions even when poorly informed, because individual errors of judgement based on imperfect information tend to cancel out. In orthodox economics, the wisdom of the crowd helps to determine the prices of assets and ensure that markets function efficiently. “This is often misplaced,” says Couzin, who studies collective behaviour in animals from locusts to fish and baboons.

By creating a computer model based on how these animals make consensus decisions, Couzin and his colleagues showed last year that the wisdom of the crowd works only under certain conditions—and that contrary to popular belief, small groups with access to many sources of information tend to make the best decisions.

That’s because the individual decisions that make up the consensus are based on two types of environmental cue: those to which the entire group are exposed—known as high-correlation cues—and those that only some individuals see, or low-correlation cues. Couzin found that in larger groups, the information known by all members drowns out that which only a few individuals noticed. So if the widely known information is unreliable, larger groups make poor decisions. Smaller groups, on the other hand, still make good decisions because they rely on a greater diversity of information.

So when it comes to organising large businesses or financial institutions, “we need to think about leaders, hierarchies and who has what information”, says Couzin. Decision-making structures based on groups of between eight and 12 individuals, rather than larger boards of directors, might prevent over-reliance on highly correlated information, which can compromise collective intelligence. Operating in a series of smaller groups may help prevent decision-makers from indulging their natural tendency to follow the pack, says Kirman.

Taking into account such effects requires economists to abandon one-size-fits-all mathematical formulae in favour of “agent-based” modelling—computer programs that give virtual economic agents differing characteristics that in turn determine interactions. That’s easier said than done: just like economists, biologists usually model relatively simple agents with simple rules of interaction. How do you model a human?

It’s a nut we’re beginning to crack. One attendee at the forum was Joshua Epstein, director of the Center for Advanced Modelling at Johns Hopkins University in Baltimore, Maryland. He and his colleagues have come up with Agent_Zero, an open-source software template for a more human-like actor influenced by emotion, reason and social pressures. Collections of Agent_Zeros think, feel and deliberate. They have more human-like relationships with other agents and groups, and their interactions lead to social conflict, violence and financial panic. Agent_Zero offers economists a way to explore a range of scenarios and see which best matches what is going on in the real world. This kind of sophistication means they could potentially create scenarios approaching the complexity of real life.

Orthodox economics likes to portray economies as stately ships proceeding forwards on an even keel, occasionally buffeted by unforeseen storms. Kirman prefers a different metaphor, one borrowed from biology: economies are like slime moulds, collections of single-celled organisms that move as a single body, constantly reorganising themselves to slide in directions that are neither understood nor necessarily desired by their component parts.

For Kirman, viewing economies as complex adaptive systems might help us understand how they evolve over time—and perhaps even suggest ways to make them more robust and adaptable. He’s not alone. Drawing analogies between financial and biological networks, the Bank of England’s research chief Andrew Haldane and University of Oxford ecologist Robert May have together argued that we should be less concerned with the robustness of individual banks than the contagious effects of one bank’s problems on others to which it is connected. Approaches like this might help markets to avoid failures that come from within the system itself, Kirman says.

To put this view of macroeconomics into practice, however, might mean making it more like weather forecasting, which has improved its accuracy by feeding enormous amounts of real-time data into computer simulation models that are tested against each other. That’s not going to be easy.


Here’s the idea. Everyone likes to say that biology is all about information. There’s something true about this—just think about DNA. But what does this insight actually do for us, quantitatively speaking? To figure this out, we need to do some work.

Biology is also about things that make copies of themselves. So it makes sense to figure out how information theory is connected to the replicator equation—a simple model of population dynamics for self-replicating entities.

To see the connection, we need to use ‘relative information’: the information of one probability distribution *relative to another*, also known as the Kullback–Leibler divergence. Then everything pops into sharp focus.
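Here's a tiny sketch of the flavor of result involved (not the content of the talk itself): take the crude discrete-time version of the replicator equation with constant, made-up fitness values, and let q be the distribution concentrated on the fittest type. The relative information D(q‖p) then acts like a monotonically decreasing ‘free energy’:

```python
from math import log

# Toy discrete-time replicator dynamics with constant (made-up)
# fitnesses. q = distribution concentrated on the fittest type, so
# D(q||p) = -log p[fittest]; it should shrink as p evolves.
fitness = [1.0, 1.5, 2.0, 3.0]            # hypothetical fitness values
p = [0.25, 0.25, 0.25, 0.25]              # initial population fractions
best = max(range(len(fitness)), key=fitness.__getitem__)

kl = []
for step in range(50):
    kl.append(-log(p[best]))              # relative information D(q||p)
    mean = sum(pi * fi for pi, fi in zip(p, fitness))
    p = [pi * fi / mean for pi, fi in zip(p, fitness)]

assert all(a > b for a, b in zip(kl, kl[1:]))   # strictly decreasing
print(round(kl[0], 3), round(kl[-1], 9))
```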

It turns out that free energy—energy in forms that can actually be *used*, not just waste heat—is a special case of relative information. Since the decrease of free energy is what drives chemical reactions, biochemistry is founded on relative information.

But there’s a lot more to it than this! Using relative information we can also see evolution as a learning process, fix the problems with Fisher’s fundamental theorem of natural selection, and more.

So this is what I’ll talk about! You can see my slides here:

• John Baez, Biology as information dynamics.

but my talk will be videotaped, and it’ll eventually be put here:

• Stanford complexity group, YouTube.

You can already see lots of cool talks at this location!


A peptide is basically a small protein: a chain made of fewer than 50 amino acids. If you plot the number of peptides of different masses found in various organisms, you see peculiar oscillations:

These oscillations have a period of about 14 daltons, where a ‘dalton’ is roughly the mass of a hydrogen atom—or more precisely, 1/12 the mass of a carbon atom.

Biologists had noticed these oscillations in databases of peptide masses. But they didn’t understand them.

Can you figure out what causes these oscillations?

It’s a math puzzle, actually.

Next I’ll give you the answer, so stop looking if you want to think about it first.

Almost all peptides are made of 20 different amino acids, which have different masses, which are almost integers. So, to a reasonably good approximation, the puzzle amounts to this: if you have 20 natural numbers m_{1}, …, m_{20}, how many ways can you write any natural number N as a finite ordered sum of these numbers? Call it F(N) and graph it. It oscillates! Why?

(We count *ordered* sums because the amino acids are stuck together in a linear way to form a protein.)

There’s a well-known way to write down a formula for F(N). It obeys a linear recurrence:

F(N) = F(N − m_{1}) + ⋯ + F(N − m_{20})

and we can solve this using the ansatz

F(N) = x^{N}

Then the recurrence relation will hold if

x^{N} = x^{N − m_{1}} + ⋯ + x^{N − m_{20}}

for all N. But this is fairly easy to achieve! If m_{20} is the biggest mass, we just need this polynomial equation to hold:

x^{m_{20}} = x^{m_{20} − m_{1}} + ⋯ + x^{m_{20} − m_{19}} + 1

There will be a bunch of solutions, about m_{20} of them. (If there are repeated roots things get a bit more subtle, but let’s not worry about that.) To get the actual formula for F(N) we need to find the right linear combination of functions x^{N}, where x ranges over all the roots. That takes some work. Craciun and his collaborator Shane Hubler did that work.

But we can get a pretty good understanding with a lot less work. In particular, the root with the largest magnitude will make x^{N} grow the fastest.
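Before bringing in the roots, it's worth computing F(N) directly from the recurrence. Here's a little dynamic-programming sketch, using the 20 amino-acid masses listed near the end of this post, and the convention F(0) = 1 for the empty sum:

```python
# F(n) = number of ways to write n as an ordered sum of the 20
# amino-acid masses; F(0) = 1 counts the empty sum.
masses = [57, 71, 87, 97, 99, 101, 103, 113, 113, 114, 115,
          128, 128, 129, 131, 137, 147, 156, 163, 186]

N_MAX = 2000
F = [1] + [0] * N_MAX
for n in range(1, N_MAX + 1):
    F[n] = sum(F[n - m] for m in masses if m <= n)

# 57 = 57; 114 = 57+57 or 114; 171 = 57+57+57, 57+114 or 114+57.
print(F[57], F[114], F[171])  # → 1 2 3
```

Dividing F(N) by its overall exponential growth and plotting the rest is one way to make the 14-dalton oscillations visible.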

If you haven’t thought about this sort of recurrence relation, it’s good to look at the simplest case, where we just have two masses, m_{1} = 1 and m_{2} = 2. Then the numbers F(N) are the Fibonacci numbers. I hope you know this: the Nth Fibonacci number is the number of ways to write N as the sum of an ordered list of 1’s and 2’s!

1

1+1, 2

1+1+1, 1+2, 2+1

1+1+1+1, 1+1+2, 1+2+1, 2+1+1, 2+2

If I drew edges between these sums in the right way, forming a ‘family tree’, you’d see the connection to Fibonacci’s original rabbit puzzle.

In this example the recurrence gives the polynomial equation

x² = x + 1

and the root with largest magnitude is the golden ratio:

Φ = (1 + √5)/2 ≈ 1.618

The other root is

1 − Φ = (1 − √5)/2 ≈ −0.618

With a little more work you get an explicit formula for the Fibonacci numbers in terms of the golden ratio:

F(N) = (Φ^{N+1} − (1 − Φ)^{N+1}) / √5
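It's easy to check the recurrence against this Binet-style closed form numerically. A quick sketch, again with the convention F(0) = 1 for the empty sum:

```python
from math import sqrt

# Two masses: m_1 = 1, m_2 = 2. F(N) counts ordered sums of 1's and
# 2's adding up to N, with F(0) = 1 for the empty sum.
F = [1, 1]
for n in range(2, 30):
    F.append(F[n - 1] + F[n - 2])

phi = (1 + sqrt(5)) / 2        # golden ratio, root of x^2 = x + 1
psi = (1 - sqrt(5)) / 2        # the other root

# Closed form: F(N) = (phi^(N+1) - psi^(N+1)) / sqrt(5)
for n in range(30):
    assert round((phi ** (n + 1) - psi ** (n + 1)) / sqrt(5)) == F[n]

print(F[1:5])  # → [1, 2, 3, 5], matching the four lists above
```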

But right now I’m more interested in the qualitative aspects! In this example both roots are real. The example from biology is different.

**Puzzle 1.** For which lists of natural numbers m_{1} ≤ ⋯ ≤ m_{k} are all the roots of

x^{m_{k}} = x^{m_{k} − m_{1}} + ⋯ + x^{m_{k} − m_{k−1}} + 1

real?

I don’t know the answer. But apparently this kind of polynomial equation always has one root with the largest possible magnitude, which is real and has multiplicity one. I think it turns out that F(N) is asymptotically proportional to x^{N}, where x is this root.
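You can see why there's a unique real root bigger than 1: dividing the polynomial equation by x^{m_{20}} turns it into Σ_{i} x^{−m_{i}} = 1, and for x > 1 the left side strictly decreases from 20 toward 0, so it crosses 1 exactly once. Here's a sketch that finds that root by bisection, for the 20 amino-acid masses listed near the end of this post:

```python
# Dominant real root x0: the unique solution of sum_i x^(-m_i) = 1
# with x > 1 (the polynomial equation in the post, divided through
# by x^(m_20)).
masses = [57, 71, 87, 97, 99, 101, 103, 113, 113, 114, 115,
          128, 128, 129, 131, 137, 147, 156, 163, 186]

def g(x):
    return sum(x ** (-m) for m in masses) - 1.0

lo, hi = 1.000001, 2.0        # g(lo) is about 19 > 0, g(hi) < 0
for _ in range(100):          # bisect down to machine precision
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
x0 = 0.5 * (lo + hi)
print(x0)   # slightly bigger than 1; F(N) should grow roughly like x0^N
```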

But in the case that’s relevant to biology, there’s also a pair of roots with the *second* largest magnitude, which are *not* real: they’re complex conjugates of each other. And these give rise to the oscillations!

For the masses of the 20 amino acids most common in life, the roots look like this:

The aqua root at right has the largest magnitude and gives the dominant contribution to the exponential growth of F(N). The red roots have the second largest magnitude. These give the main oscillations in F(N), which have period 14.28.

For the full story, read this:

• Shane Hubler and Gheorghe Craciun, Periodic patterns in distributions of peptide masses, *BioSystems* **109** (2012), 179–185.

Most of the pictures here are from this paper.

My main question is this:

**Puzzle 2.** Suppose we take many lists of natural numbers m_{1} ≤ ⋯ ≤ m_{k} and draw all the roots of the equations

x^{m_{k}} = x^{m_{k} − m_{1}} + ⋯ + x^{m_{k} − m_{k−1}} + 1

What pattern do we get in the complex plane?

I suspect that this picture is an approximation to the answer you’d get to Puzzle 2:

If you stare carefully at this picture, you’ll see some patterns, and I’m guessing those are hints of something very beautiful.

Earlier on this blog we looked at roots of polynomials whose coefficients are all 1 or -1:

The pattern is very nice, and it repays deep mathematical study. Here it is, drawn by Sam Derbyshire:

But now we’re looking at polynomials where the leading coefficient is 1 and all the rest are -1 or 0. How does that change things? A lot, it seems!

By the way, the 20 amino acids we commonly see in biology have masses ranging between 57 and 186. It’s not really true that all their masses are different. Here are their masses:

57, 71, 87, 97, 99, 101, 103, 113, 113, 114, 115, 128, 128, 129, 131, 137, 147, 156, 163, 186

I pretended that none of the masses are equal in Puzzle 2, and I left out the fact that only about 1/9th of the coefficients of our polynomial are nonzero. This may affect the picture you get!


The goal is to start a conversation about applications of category theory, not within pure math or fundamental physics, but to other branches of science and engineering—especially those where the use of category theory is not already well-established! For example, my students and I have been applying category theory to chemistry, electrical engineering, control theory and Markov processes.

Alas, we have no funds for travel and lodging. If you’re interested in giving a talk, please submit an abstract here:

• General information about abstracts, American Mathematical Society.

More precisely, please read the information there and then click on the link on that page to submit an abstract. It should then magically fly through cyberspace to me! Abstracts are due September 12th, but the sooner you submit one, the greater the chance that we’ll have space.

For the program of the whole conference, go here:

• Fall Western Sectional Meeting, U. C. Riverside, Riverside, California, 4–5 November 2017.

We’ll be having some interesting plenary talks:

• Paul Balmer, UCLA, An invitation to tensor-triangular geometry.

• Pavel Etingof, MIT, Double affine Hecke algebras and their applications.

• Monica Vazirani, U.C. Davis, Combinatorics, categorification, and crystals.


The positions are open to applicants from all research areas in mathematics who have a PhD or will have one by the beginning of the term. The teaching load is six courses per year (i.e., two per quarter). In addition to teaching, the applicants will be responsible for attending advanced seminars and working on research projects.

This is initially a one-year appointment, and with successful annual teaching reviews it is renewable for up to a total of three years.

For more details, including how to apply, go here:

https://www.mathjobs.org/jobs/jobs/10162


In 49 hours, the National Park Service will stop taking comments on an important issue: whether to reintroduce grizzly bears into the North Cascades near Seattle. If you leave a comment on their website before then, you can help make this happen! Follow the easy directions here:

http://theoatmeal.com/blog/grizzlies_north_cascades

Please go ahead! Then tell your friends to join in, and give them this link. This can be your good deed for the day.

But if you want more details:

Grizzly bears are traditionally the apex predator in the North Cascades. Without the apex predator, the whole ecosystem is thrown out of balance. I know this from my childhood in northern Virginia, where deer are stripping the forest of all low-hanging greenery with no wolves to control them. *With* the top predator, the whole ecosystem springs to life and starts humming like a well-tuned engine! For example, when wolves were reintroduced in Yellowstone National Park, it seems that even riverbeds were affected:

There are several plans to restore grizzlies to the North Cascades. On the link I recommended, Matthew Inman supports **Alternative C — Incremental Restoration**. I’m not an expert on this issue, so I went ahead and supported that. There are actually 4 alternatives on the table:

**Alternative A — No Action.** They’ll keep doing what they’re already doing. The few grizzlies already there would be protected from poaching, the local population would be advised on how to deal with grizzlies, and the bears would be monitored. All other alternatives will do these things and more.

**Alternative B — Ecosystem Evaluation Restoration.** Up to 10 grizzly bears will be captured from source populations in northwestern Montana and/or south-central British Columbia and released at a single remote site on Forest Service lands in the North Cascades. This will take 2 years, and then they’ll be monitored for 2 years before deciding what to do next.

**Alternative C — Incremental Restoration.** 5 to 7 grizzly bears will be captured and released into the North Cascades each year over roughly 5 to 10 years, with a goal of establishing an initial population of 25 grizzly bears. Bears would be released at multiple remote sites. They can be relocated or removed if they cause trouble. Alternative C is expected to reach the restoration goal of approximately 200 grizzly bears within 60 to 100 years.

**Alternative D — Expedited Restoration.** 5 to 7 grizzly bears will be captured and released into the North Cascades each year until the population reaches about 200, which is what the area can easily support.

So, pick your own alternative if you like!

By the way, the remaining grizzly bears in the western United States live within six recovery zones:

• the Greater Yellowstone Ecosystem (GYE) in Wyoming and southwest Montana,

• the Northern Continental Divide Ecosystem (NCDE) in northwest Montana,

• the Cabinet-Yaak Ecosystem (CYE) in extreme northwestern Montana and the northern Idaho panhandle,

• the Selkirk Ecosystem (SE) in northern Idaho and northeastern Washington,

• the Bitterroot Ecosystem (BE) in central Idaho and western Montana,

• and the North Cascades Ecosystem (NCE) in northwestern and north-central Washington.

The North Cascades Ecosystem consists of 24,800 square kilometers in Washington, with an additional 10,350 square kilometers in British Columbia. In the US, 90% of this ecosystem is managed by the US Forest Service, the US National Park Service, and the State of Washington, and approximately 41% falls within Forest Service wilderness or the North Cascades National Park Service Complex.

For more, read this:

• National Park Service, *Draft Grizzly Bear Restoration Plan / Environmental Impact Statement: North Cascades Ecosystem*.

The picture of grizzlies is from this article:

• Ron Judd, Why returning grizzlies to the North Cascades is the right thing to do, *Pacific NW Magazine*, 23 November 2015.

If you’re worried about reintroducing grizzly bears, read it!

The map is from here:

• Krista Langlois, Grizzlies gain ground, *High Country News*, 27 August 2014.

Here you’ll see the huge obstacles this project has overcome so far.
