The Dodecahedron, the Icosahedron and E8

16 May, 2017


Here you can see the slides of a talk I’m giving:

The dodecahedron, the icosahedron and E8, Annual General Meeting of the Hong Kong Mathematical Society, Hong Kong University of Science and Technology.

It’ll take place at 10:50 am on Saturday May 20th in Lecture Theatre G. You can see the program for the whole meeting here.

The slides are in the form of webpages, and you can see references and some other information tucked away at the bottom of each page.

In preparing this talk I learned more about the geometric McKay correspondence, which is a correspondence between the simply-laced Dynkin diagrams (also known as ADE Dynkin diagrams) and the finite subgroups of \mathrm{SU}(2).

There are different ways to get your hands on this correspondence, but the geometric way is to resolve the singularity in \mathbb{C}^2/\Gamma where \Gamma \subset \mathrm{SU}(2) is such a finite subgroup. The variety \mathbb{C}^2/\Gamma has a singularity at the origin, or more precisely at the point coming from the origin in \mathbb{C}^2. To make singularities go away, we ‘resolve’ them. And when you take the ‘minimal resolution’ of this variety (a concept I explain here), you get a smooth variety S with a map

\pi \colon S \to \mathbb{C}^2/\Gamma

which is one-to-one except at the origin. The points that map to the origin lie on a bunch of Riemann spheres. There’s one of these spheres for each dot in some Dynkin diagram—and two of these spheres intersect iff their two dots are connected by an edge!

In particular, if \Gamma is the double cover of the rotational symmetry group of the dodecahedron, the Dynkin diagram we get this way is E_8:

The basic reason \mathrm{E}_8 is connected to the icosahedron is that the icosahedral group is generated by rotations of orders 2, 3 and 5 while the \mathrm{E}_8 Dynkin diagram has ‘legs’ of length 2, 3, and 5 if you count right:

In general, whenever you have a triple of natural numbers a,b,c obeying

\displaystyle{ \frac{1}{a} + \frac{1}{b} + \frac{1}{c} > 1}

you get a finite subgroup of \mathrm{SU}(2) that contains rotations of orders a,b,c, and a simply-laced Dynkin diagram with legs of length a,b,c. (There’s a quick computational check of this classification after the list below.) The three most exciting cases are:

(a,b,c) = (2,3,3): the tetrahedron, and E_6,

(a,b,c) = (2,3,4): the octahedron, and E_7,

(a,b,c) = (2,3,5): the icosahedron, and E_8.
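Here’s the quick computational check promised above: a brute-force search in Python, with hand-picked bounds, restricted to triples whose entries are at least 2, since triples containing a 1 are degenerate. Besides the three exceptional cases just listed, the only solutions are the infinite family (2,2,n), corresponding to the D series:

from fractions import Fraction

# Search for triples 2 <= a <= b <= c with 1/a + 1/b + 1/c > 1.
# Exact rational arithmetic avoids any floating-point fuzziness.
for a in range(2, 5):
    for b in range(a, 7):
        for c in range(b, 30):
            if Fraction(1, a) + Fraction(1, b) + Fraction(1, c) > 1:
                print(a, b, c)
# Output: (2,2,n) for every n in range, plus (2,3,3), (2,3,4), (2,3,5).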

But the puzzle is this: why does resolving the singular variety \mathbb{C}^2/\Gamma give a smooth variety with a bunch of copies of the Riemann sphere \mathbb{C}\mathrm{P}^1 sitting over the singular point at the origin, with these copies intersecting in a pattern given by a Dynkin diagram?

It turns out the best explanation is in here:

• Klaus Lamotke, Regular Solids and Isolated Singularities, Vieweg & Sohn, Braunschweig, 1986.

In a nutshell, you need to start by blowing up \mathbb{C}^2 at the origin, getting a space X containing a copy of \mathbb{C}\mathrm{P}^1 on which \Gamma acts. The space X/\Gamma has further singularities coming from the rotations of orders a, b and c in \Gamma. When you resolve these, you get more copies of \mathbb{C}\mathrm{P}^1, which intersect in the pattern given by a Dynkin diagram with legs of length a,b and c.

I would like to understand this better, and more vividly. I want a really clear understanding of the minimal resolution S. For this I should keep rereading Lamotke’s book, and doing more calculations.

I do, however, have a nice vivid picture of the singular space \mathbb{C}^2/\Gamma. For that, read my talk! I’m hoping this will lead, someday, to an equally appealing picture of its minimal resolution.


Phosphorus Sulfides

5 May, 2017

I think of sulfur and phosphorus as clever chameleons of the periodic table: both come in many different forms, called allotropes. There’s white phosphorus, red phosphorus, violet phosphorus and black phosphorus:



and there are about two dozen allotropes of sulfur, with a phase diagram like this:



So I should have guessed that sulfur and phosphorus combine to make many different compounds. But I never thought about this until yesterday!

I’m a great fan of diamonds, not for their monetary value but for the math of their crystal structure:



In a diamond the carbon atoms do not form a lattice in the strict mathematical sense (which is more restrictive than the sense of this word in crystallography). The reason is that there aren’t translational symmetries carrying any atom to any other. Instead, there are two lattices of atoms, shown as red and blue in this picture by Greg Egan. Each atom has 4 nearest neighbors arranged at the vertices of a regular tetrahedron; the tetrahedra centered at the blue atoms are ‘right-side up’, while those centered at the red atoms are ‘upside down’.
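If you want to play with this structure numerically, here’s a little Python sketch. It uses the textbook crystallographic coordinates for diamond, namely two face-centered cubic lattices with the second offset by a quarter of the cube’s diagonal, and checks that each atom has 4 nearest neighbors in the other lattice:

import numpy as np
from itertools import product

# Two interpenetrating face-centered cubic lattices; the 'red' one is offset
# from the 'blue' one by (1/4, 1/4, 1/4) of the cubic cell.
fcc = [np.array(v) for v in [(0, 0, 0), (0, .5, .5), (.5, 0, .5), (.5, .5, 0)]]
blue, red = [], []
for cell in product(range(2), repeat=3):
    for v in fcc:
        blue.append(np.array(cell) + v)
        red.append(np.array(cell) + v + 0.25)

# A red atom should have exactly 4 nearest blue neighbors, at the vertices
# of a regular tetrahedron, at distance sqrt(3)/4 of the cell edge.
r = red[0]
dists = sorted(np.linalg.norm(r - b) for b in blue)
print(dists[:5])  # four values of 0.433... = sqrt(3)/4, then a jump to 0.829...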

Having thought about this a lot, I was happy to read about adamantane. It’s a compound with 10 carbons and 16 hydrogens. There are 4 carbons at the vertices of a regular tetrahedron, and 6 along the edges—but the edges bend out in such a way that the carbons form a tiny piece of a diamond crystal:

or more abstractly, focusing on the carbons and their bonds:


Yesterday I learned that phosphorus decasulfide, P4S10, follows the same pattern:



The angles deviate slightly from the value of

\arccos (-1/3) \approx 109.4712^\circ

that we’d have in a fragment of a mathematically ideal diamond crystal, but that’s to be expected.
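If you want to check that number, a couple of lines of Python do it:

import math
print(math.degrees(math.acos(-1/3)))  # 109.47122063449069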

It turns out there are lots of other phosphorus sulfides! Here are some of them:



Puzzle 1. Why does each of these compounds have exactly 4 phosphorus atoms?

I don’t know the answer! I can’t believe it’s impossible to form phosphorus–sulfur compounds with some other number of phosphorus atoms, but the Wikipedia article containing this chart says

All known molecular phosphorus sulfides contain a tetrahedral array of four phosphorus atoms. P4S2 is also known but is unstable above −30 °C.

All these phosphorus sulfides contain at most 10 sulfur atoms. If we remove one sulfur from phosphorus decasulfide we can get this:


This is the ‘alpha form’ of P4S9. There’s also a beta form, shown in the chart above.

Some of the phosphorus sulfides have pleasing symmetries, like the alpha form of P4S4:


or the epsilon form of P4S6:


Others look awkward. The alpha form of P4S5 is an ungainly beast:

They all seem to have a few things in common:

• There are 4 phosphorus atoms.

• Each phosphorus atom is connected to 3 or 4 atoms, at most one of which is phosphorus.

• Each sulfur atom is connected to 1 or 2 atoms, which must all be phosphorus.

The pictures seem pretty consistent about showing a ‘double bond’ when a sulfur atom is connected to just 1 phosphorus. However, they don’t show a double bond when a phosphorus atom is connected to just 3 sulfurs.

Puzzle 2. Can you draw molecules obeying the 3 rules listed above that aren’t on the chart?
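If you feel like attacking this puzzle by computer, here’s a minimal sketch of a checker for the three rules, with a molecule encoded as a labeled graph. A search program could then enumerate candidate graphs and keep the ones this function accepts. The encoding of P4S10 at the end is my own translation of the cage structure described above, so treat the whole thing as an illustration:

from itertools import combinations

def obeys_rules(elements, bonds):
    # elements maps atom index -> 'P' or 'S'; bonds is a list of pairs.
    neighbors = {atom: [] for atom in elements}
    for i, j in bonds:
        neighbors[i].append(j)
        neighbors[j].append(i)
    # Rule 1: exactly 4 phosphorus atoms.
    if sum(1 for e in elements.values() if e == 'P') != 4:
        return False
    for atom, elem in elements.items():
        kinds = [elements[n] for n in neighbors[atom]]
        if elem == 'P':
            # Rule 2: each P has 3 or 4 neighbors, at most one of them P.
            if len(kinds) not in (3, 4) or kinds.count('P') > 1:
                return False
        else:
            # Rule 3: each S has 1 or 2 neighbors, all of them P.
            if len(kinds) not in (1, 2) or kinds.count('S') > 0:
                return False
    return True

# P4S10: phosphorus atoms 0-3 at the vertices of a tetrahedron, a bridging
# sulfur (4-9) on each of the 6 edges, a terminal sulfur (10-13) on each P.
elements = {i: 'P' for i in range(4)}
elements.update({i: 'S' for i in range(4, 14)})
bonds = []
for s, (i, j) in enumerate(combinations(range(4), 2), start=4):
    bonds += [(s, i), (s, j)]
for p in range(4):
    bonds.append((10 + p, p))
print(obeys_rules(elements, bonds))  # True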

Of all the phosphorus sulfides, P4S10 is not only the biggest and most symmetrical, it’s also the most widely used. Humans make thousands of tons of the stuff! It’s used for producing organic sulfur compounds.

People also make P4S3: it’s used in strike-anywhere matches. This molecule is not on the chart I showed you, and it also violates one of the rules I made up:



Somewhat confusingly, P4S10 is not only called phosphorus decasulfide: it’s also called phosphorus pentasulfide. Similarly, P4S3 is called phosphorus sesquisulfide. Since the prefix ‘sesqui-’ means ‘one and a half’, there seems to be some kind of division by 2 going on here.


Diamondoids

2 May, 2017

I have a new favorite molecule: adamantane. As you probably know, someone is said to be ‘adamant’ if they are unshakeable, immovable, inflexible, unwavering, uncompromising, resolute, resolved, determined, firm, rigid, or steadfast. But ‘adamant’ is also a legendary mineral, and the etymology is the same as that for ‘diamond’.

The molecule adamantane, shown above, features 10 carbon atoms arranged just like a small portion of a diamond crystal! It’s a bit easier to see this if you ignore the 16 hydrogen atoms and focus on the carbon atoms and bonds between those:


It’s a somewhat strange shape.

Puzzle 1. Give a clear, elegant description of this shape.

Puzzle 2. What is its symmetry group? This is really two questions: I’m asking about the symmetry group of this shape as an abstract graph, but also the symmetry group of this graph as embedded in 3d Euclidean space, counting both rotations and reflections.

Puzzle 3. How many ‘kinds’ of carbon atoms does adamantane have? In other words, when we let the symmetry group of this graph act on the set of vertices, how many orbits are there? (Again this is really two questions, depending on which symmetry group we use.)

Puzzle 4. How many kinds of bonds between carbon atoms does adamantane have? In other words, when we let the symmetry group of this graph act on the set of edges, how many orbits are there? (Again, this is really two questions.)
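If you’d like to check your answers by computer, here’s a brute-force sketch for the abstract-graph versions of Puzzles 2, 3 and 4. It builds the carbon skeleton as described above, with carbons 0–3 at the vertices of a tetrahedron and carbons 4–9 along its six edges; the Euclidean versions of the puzzles you’ll still have to do by hand:

from itertools import combinations, permutations

# The carbon skeleton: vertices 0-3 each bond to three of the 'edge'
# carbons 4-9; edge carbon 4+k sits on the k-th pair of vertex carbons.
edges = set()
for c, (i, j) in enumerate(combinations(range(4), 2), start=4):
    edges |= {frozenset((c, i)), frozenset((c, j))}

# Automorphisms preserve degree, so the degree-3 carbons (0-3) and the
# degree-2 carbons (4-9) can only permute among themselves.
autos = []
for p3 in permutations(range(4)):
    for p2 in permutations(range(4, 10)):
        p = list(p3) + list(p2)
        if all(frozenset((p[a], p[b])) in edges for a, b in edges):
            autos.append(p)
print(len(autos))  # order of the graph's automorphism group (Puzzle 2)

# Orbits of vertices and of edges under the automorphism group.
vertex_orbits = {frozenset(p[v] for p in autos) for v in range(10)}
edge_orbits = {frozenset(frozenset((p[a], p[b])) for p in autos)
               for a, b in edges}
print(len(vertex_orbits), len(edge_orbits))  # Puzzles 3 and 4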

You can see the relation between adamantane and a diamond if you look carefully at a diamond crystal, as shown in this image by H. K. D. H. Bhadeshia:


or this one by Greg Egan:



Even with these pictures at hand, I find it a bit tough to see the adamantane pattern lurking in the diamond! Look again:


Adamantane has an interesting history. The possibility of its existence was first suggested by a chemist named Decker at a conference in 1924. Decker called this molecule ‘decaterpene’, and registered surprise that nobody had made it yet. After some failed attempts, it was first synthesized by the Croatian-Swiss chemist Vladimir Prelog in 1941. He later won the Nobel prize for his work on stereochemistry.

However, long before it was synthesized, adamantane was isolated from petroleum by the Czech chemists Landa, Machacek and Mzourek! They did it in 1932. They only managed to make a few milligrams of the stuff, but we now know that petroleum naturally contains between 0.0001% and 0.03% adamantane!

Adamantane can be crystallized:

but ironically, the crystals are rather soft. It’s all that hydrogen. It’s also amusing that adamantane has an odor: supposedly it smells like camphor!

Adamantane is just the simplest of the molecules called diamondoids. Here are a few:


1 is adamantane.

2 is called diamantane.

3 is called triamantane.

4 is called isotetramantane, and it comes in two mirror-image forms.

Here are some better pictures of diamantane:


People have done lots of chemical reactions with diamondoids. Here are some things they’ve done with the next one, pentamantane:



Many different diamondoids occur naturally in petroleum. Though the carbon in diamonds is not biological in origin, the carbon in diamondoids found in petroleum is. This was shown by studying ratios of carbon isotopes.

Eric Drexler has proposed using diamondoids for nanotechnology, but he’s talking about larger molecules than those shown here.

For more fun along these lines, try:

Diamonds and triamonds, Azimuth, 11 April 2016.


Biology as Information Dynamics (Part 2)

27 April, 2017

Here’s a video of the talk I gave at the Stanford Complexity Group:

You can see slides here:

Biology as information dynamics.

Abstract. If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’ — a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clearer, more general formulation of Fisher’s fundamental theorem of natural selection.

I’d given a version of this talk earlier this year at a workshop on Quantifying biological complexity, but I’m glad this second try got videotaped and not the first, because I was a lot happier about my talk this time. And as you’ll see at the end, there were a lot of interesting questions.


Complexity Theory and Evolution in Economics

24 April, 2017

This book looks interesting:

• David S. Wilson and Alan Kirman, editors, Complexity and Evolution: Toward a New Synthesis for Economics, MIT Press, Cambridge Mass., 2016.

You can get some chapters for free here. I’ve only looked carefully at this one:

• Joshua M. Epstein and Julia Chelen, Advancing Agent_Zero.

Agent_Zero is a simple toy model of an agent that’s not the idealized rational actor often studied in economics: rather, it has emotional, deliberative, and social modules which interact with each other to make decisions. Epstein and Chelen simulate collections of such agents and see what they do:

Abstract. Agent_Zero is a mathematical and computational individual that can generate important, but insufficiently understood, social dynamics from the bottom up. First published by Epstein (2013), this new theoretical entity possesses emotional, deliberative, and social modules, each grounded in contemporary neuroscience. Agent_Zero’s observable behavior results from the interaction of these internal modules. When multiple Agent_Zeros interact with one another, a wide range of important, even disturbing, collective dynamics emerge. These dynamics are not straightforwardly generated using the canonical rational actor which has dominated mathematical social science since the 1940s. Following a concise exposition of the Agent_Zero model, this chapter offers a range of fertile research directions, including the use of realistic geographies and population levels, the exploration of new internal modules and new interactions among them, the development of formal axioms for modular agents, empirical testing, the replication of historical episodes, and practical applications. These may all serve to advance the Agent_Zero research program.

It sounds like a fun and productive project as long as one keeps one’s wits about one. It’s hard to draw conclusions about human behavior from such simplified agents. One can argue about this, and of course economists will. But regardless of this, one can draw conclusions about which kinds of simplified agents will engage in which kinds of collective behavior under which conditions.

Basically, one can start mapping out a small simple corner of the huge ‘phase space’ of possible societies. And that’s bound to lead to interesting new ideas that one wouldn’t get from either 1) empirical research on human and animal societies or 2) pure theoretical pondering without the help of simulations.

Here’s an article whose title, at least, takes a vastly more sanguine attitude toward the benefits of such work:

• Kate Douglas, Orthodox economics is broken: how evolution, ecology, and collective behavior can help us avoid catastrophe, Evonomics, 22 July 2016.

I’ll quote just a bit:

For simplicity’s sake, orthodox economics assumes that Homo economicus, when making a fundamental decision such as whether to buy or sell something, has access to all relevant information. And because our made-up economic cousins are so rational and self-interested, when the price of an asset is too high, say, they wouldn’t buy—so the price falls. This leads to the notion that economies self-organise into an equilibrium state, where supply and demand are equal.

Real humans—be they Wall Street traders or customers in Walmart—don’t always have accurate information to hand, nor do they act rationally. And they certainly don’t act in isolation. We learn from each other, and what we value, buy and invest in is strongly influenced by our beliefs and cultural norms, which themselves change over time and space.

“Many preferences are dynamic, especially as individuals move between groups, and completely new preferences may arise through the mixing of peoples as they create new identities,” says anthropologist Adrian Bell at the University of Utah in Salt Lake City. “Economists need to take cultural evolution more seriously,” he says, because it would help them understand who or what drives shifts in behaviour.

Using a mathematical model of price fluctuations, for example, Bell has shown that prestige bias—our tendency to copy successful or prestigious individuals—influences pricing and investor behaviour in a way that creates or exacerbates market bubbles.

We also adapt our decisions according to the situation, which in turn changes the situations faced by others, and so on. The stability or otherwise of financial markets, for instance, depends to a great extent on traders, whose strategies vary according to what they expect to be most profitable at any one time. “The economy should be considered as a complex adaptive system in which the agents constantly react to, influence and are influenced by the other individuals in the economy,” says Kirman.

This is where biologists might help. Some researchers are used to exploring the nature and functions of complex interactions between networks of individuals as part of their attempts to understand swarms of locusts, termite colonies or entire ecosystems. Their work has provided insights into how information spreads within groups and how that influences consensus decision-making, says Iain Couzin from the Max Planck Institute for Ornithology in Konstanz, Germany—insights that could potentially improve our understanding of financial markets.

Take the popular notion of the “wisdom of the crowd”—the belief that large groups of people can make smart decisions even when poorly informed, because individual errors of judgement based on imperfect information tend to cancel out. In orthodox economics, the wisdom of the crowd helps to determine the prices of assets and ensure that markets function efficiently. “This is often misplaced,” says Couzin, who studies collective behaviour in animals from locusts to fish and baboons.

By creating a computer model based on how these animals make consensus decisions, Couzin and his colleagues showed last year that the wisdom of the crowd works only under certain conditions—and that contrary to popular belief, small groups with access to many sources of information tend to make the best decisions.

That’s because the individual decisions that make up the consensus are based on two types of environmental cue: those to which the entire group are exposed—known as high-correlation cues—and those that only some individuals see, or low-correlation cues. Couzin found that in larger groups, the information known by all members drowns out that which only a few individuals noticed. So if the widely known information is unreliable, larger groups make poor decisions. Smaller groups, on the other hand, still make good decisions because they rely on a greater diversity of information.

So when it comes to organising large businesses or financial institutions, “we need to think about leaders, hierarchies and who has what information”, says Couzin. Decision-making structures based on groups of between eight and 12 individuals, rather than larger boards of directors, might prevent over-reliance on highly correlated information, which can compromise collective intelligence. Operating in a series of smaller groups may help prevent decision-makers from indulging their natural tendency to follow the pack, says Kirman.

Taking into account such effects requires economists to abandon one-size-fits-all mathematical formulae in favour of “agent-based” modelling—computer programs that give virtual economic agents differing characteristics that in turn determine interactions. That’s easier said than done: just like economists, biologists usually model relatively simple agents with simple rules of interaction. How do you model a human?

It’s a nut we’re beginning to crack. One attendee at the forum was Joshua Epstein, director of the Center for Advanced Modelling at Johns Hopkins University in Baltimore, Maryland. He and his colleagues have come up with Agent_Zero, an open-source software template for a more human-like actor influenced by emotion, reason and social pressures. Collections of Agent_Zeros think, feel and deliberate. They have more human-like relationships with other agents and groups, and their interactions lead to social conflict, violence and financial panic. Agent_Zero offers economists a way to explore a range of scenarios and see which best matches what is going on in the real world. This kind of sophistication means they could potentially create scenarios approaching the complexity of real life.

Orthodox economics likes to portray economies as stately ships proceeding forwards on an even keel, occasionally buffeted by unforeseen storms. Kirman prefers a different metaphor, one borrowed from biology: economies are like slime moulds, collections of single-celled organisms that move as a single body, constantly reorganising themselves to slide in directions that are neither understood nor necessarily desired by their component parts.

For Kirman, viewing economies as complex adaptive systems might help us understand how they evolve over time—and perhaps even suggest ways to make them more robust and adaptable. He’s not alone. Drawing analogies between financial and biological networks, the Bank of England’s research chief Andrew Haldane and University of Oxford ecologist Robert May have together argued that we should be less concerned with the robustness of individual banks than the contagious effects of one bank’s problems on others to which it is connected. Approaches like this might help markets to avoid failures that come from within the system itself, Kirman says.

To put this view of macroeconomics into practice, however, might mean making it more like weather forecasting, which has improved its accuracy by feeding enormous amounts of real-time data into computer simulation models that are tested against each other. That’s not going to be easy.



Stanford Complexity Group

19 April, 2017

Aaron Goodman of the Stanford Complexity Group invited me to give a talk there on Thursday April 20th. If you’re nearby—like in Silicon Valley—please drop by! It will be in Clark S361 at 4:20 pm.

Here’s the idea. Everyone likes to say that biology is all about information. There’s something true about this—just think about DNA. But what does this insight actually do for us, quantitatively speaking? To figure this out, we need to do some work.

Biology is also about things that make copies of themselves. So it makes sense to figure out how information theory is connected to the replicator equation—a simple model of population dynamics for self-replicating entities.

To see the connection, we need to use ‘relative information’: the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Then everything pops into sharp focus.

It turns out that free energy—energy in forms that can actually be used, not just waste heat—is a special case of relative information. Since the decrease of free energy is what drives chemical reactions, biochemistry is founded on relative information.

But there’s a lot more to it than this! Using relative information we can also see evolution as a learning process, fix the problems with Fisher’s fundamental theorem of natural selection, and more.
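Here’s a toy simulation of the simplest version of this story, just a sketch under assumptions I’ve chosen for illustration: three types of self-replicators with constant fitnesses, evolving by the replicator equation. The relative information of the ‘winning’ distribution, concentrated on the fittest type, relative to the actual population distribution keeps decreasing:

import numpy as np

# Replicator equation with constant fitnesses: dp_i/dt = p_i (f_i - <f>).
f = np.array([1.0, 2.0, 3.0])      # fitnesses; type 2 is fittest
p = np.array([0.6, 0.3, 0.1])      # initial population distribution
dt = 0.01

for step in range(501):
    if step % 100 == 0:
        # D(q||p) with q concentrated on type 2 reduces to -log p[2].
        print(f"step {step}: D(q||p) = {-np.log(p[2]):.4f}")
    mean_f = p @ f
    p += dt * p * (f - mean_f)     # Euler step of the replicator equation
    p /= p.sum()                   # renormalize against round-off drift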

So this is what I’ll talk about! You can see my slides here:

• John Baez, Biology as information dynamics.

but my talk will be videotaped, and it’ll eventually be put here:

Stanford complexity group, YouTube.

You can already see lots of cool talks at this location!



Periodic Patterns in Peptide Masses

6 April, 2017

Gheorghe Craciun is a mathematician at the University of Wisconsin who recently proved the Global Attractor Conjecture, which had been the most famous conjecture in mathematical chemistry since 1974. This week he visited U. C. Riverside and gave a talk on this subject. But he also told me about something else—something quite remarkable.

The mystery

A peptide is basically a small protein: a chain made of fewer than 50 amino acids. If you plot the number of peptides of different masses found in various organisms, you see peculiar oscillations:

These oscillations have a period of about 14 daltons, where a ‘dalton’ is roughly the mass of a hydrogen atom—or more precisely, 1/12 the mass of a carbon atom.

Biologists had noticed these oscillations in databases of peptide masses. But they didn’t understand them.

Can you figure out what causes these oscillations?

It’s a math puzzle, actually.

Next I’ll give you the answer, so stop looking if you want to think about it first.

The solution

Almost all peptides are made of 20 different amino acids, which have different masses, and these masses are nearly integers. So, to a reasonably good approximation, the puzzle amounts to this: if you have 20 natural numbers m_1, ... , m_{20}, how many ways can you write a natural number N as a finite ordered sum of these numbers? Call it F(N) and graph it. It oscillates! Why?

(We count ordered sums because the amino acids are stuck together in a linear way to form a protein.)
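If you want to see the oscillations for yourself, you can compute F(N) by dynamic programming, using the recurrence described next. Here’s a sketch with the integer amino acid masses listed at the end of this post:

# The 20 masses, two pairs of which happen to coincide.
masses = [57, 71, 87, 97, 99, 101, 103, 113, 113, 114,
          115, 128, 128, 129, 131, 137, 147, 156, 163, 186]

N_max = 600
F = [0] * (N_max + 1)
F[0] = 1  # the empty sum
for N in range(1, N_max + 1):
    F[N] = sum(F[N - m] for m in masses if N >= m)

# Successive ratios hover around the dominant growth rate and wiggle with
# a period of roughly 14; the wiggles slowly shrink as N grows, since they
# come from subdominant roots, as explained below.
for N in range(450, 500):
    if F[N - 1] > 0:
        print(N, F[N] / F[N - 1])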

There’s a well-known way to write down a formula for F(N). It obeys a linear recurrence:

F(N) = F(N - m_1) + \cdots + F(N - m_{20})

and we can solve this using the ansatz

F(N) = x^N

Then the recurrence relation will hold if

x^N = x^{N - m_1} + x^{N - m_2} + \dots + x^{N - m_{20}}

for all N. But this is fairly easy to achieve! If m_{20} is the biggest mass, we just need this polynomial equation to hold:

x^{m_{20}} = x^{m_{20} - m_1} + x^{m_{20} - m_2} + \dots + 1

There will be a bunch of solutions, about m_{20} of them. (If there are repeated roots things get a bit more subtle, but let’s not worry about that.) To get the actual formula for F(N) we need to find the right linear combination of functions x^N where x ranges over all the roots. That takes some work. Craciun and his collaborator Shane Hubler did that work.

But we can get a pretty good understanding with a lot less work. In particular, the root x with the largest magnitude will make x^N grow the fastest.

If you haven’t thought about this sort of recurrence relation it’s good to look at the simplest case, where we just have two masses m_1 = 1, m_2 = 2. Then the numbers F(N) are the Fibonacci numbers. I hope you know this: the Nth Fibonacci number is the number of ways to write N as the sum of an ordered list of 1’s and 2’s!

1

1+1,   2

1+1+1,   1+2,   2+1

1+1+1+1,   1+1+2,   1+2+1,   2+1+1,   2+2

If I drew edges between these sums in the right way, forming a ‘family tree’, you’d see the connection to Fibonacci’s original rabbit puzzle.

In this example the recurrence gives the polynomial equation

x^2 = x + 1

and the root with largest magnitude is the golden ratio:

\Phi = 1.6180339...

The other root is

1 - \Phi = -0.6180339...

With a little more work you get an explicit formula for the Fibonacci numbers in terms of the golden ratio:

\displaystyle{ F(N) = \frac{1}{\sqrt{5}} \left( \Phi^{N+1} - (1-\Phi)^{N+1} \right) }
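You can check this formula against the little lists above:

import math

Phi = (1 + math.sqrt(5)) / 2
F = lambda N: round((Phi**(N + 1) - (1 - Phi)**(N + 1)) / math.sqrt(5))
print([F(N) for N in range(1, 5)])  # [1, 2, 3, 5], matching the lists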

But right now I’m more interested in the qualitative aspects! In this example both roots are real. The example from biology is different.

Puzzle 1. For which lists of natural numbers m_1 < \cdots < m_k are all the roots of

x^{m_k} = x^{m_k - m_1} + x^{m_k - m_2} + \cdots + 1

real?

I don’t know the answer. But apparently this kind of polynomial equation always has one root with the largest possible magnitude, which is real and has multiplicity one. I think it turns out that F(N) is asymptotically proportional to x^N where x is this root.

But in the case that’s relevant to biology, there’s also a pair of roots with the second largest magnitude, which are not real: they’re complex conjugates of each other. And these give rise to the oscillations!

For the masses of the 20 amino acids most common in life, the roots look like this:

The aqua root at right has the largest magnitude and gives the dominant contribution to the exponential growth of F(N). The red roots have the second largest magnitude. These give the main oscillations in F(N), which have period 14.28.
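Here’s a sketch of how you might reproduce these roots numerically. I’m leaning on numpy’s generic polynomial root finder, which seems to behave well enough at degree 186, though I haven’t checked its accuracy carefully:

import numpy as np

masses = [57, 71, 87, 97, 99, 101, 103, 113, 113, 114,
          115, 128, 128, 129, 131, 137, 147, 156, 163, 186]

# Coefficients of x^186 - x^(186 - m_1) - ... - x^(186 - m_20), listed from
# the highest power down, as numpy.roots expects. Repeated masses give -2.
coeffs = np.zeros(187)
coeffs[0] = 1
for m in masses:
    coeffs[m] -= 1

roots = np.roots(coeffs)
roots = roots[np.argsort(-np.abs(roots))]   # sort by decreasing magnitude
print(abs(roots[0]), roots[0].imag)         # dominant root: real
print(2 * np.pi / abs(np.angle(roots[1])))  # period from the next pair: ~14.28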

For the full story, read this:

• Shane Hubler and Gheorghe Craciun, Periodic patterns in distributions of peptide masses, BioSystems 109 (2012), 179–185.

Most of the pictures here are from this paper.

My main question is this:

Puzzle 2. Suppose we take many lists of natural numbers m_1 < \cdots < m_k and draw all the roots of the equations

x^{m_k} = x^{m_k - m_1} + x^{m_k - m_2} + \cdots + 1

What pattern do we get in the complex plane?
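One way to start experimenting: pick random mass lists, build the corresponding polynomials, and scatter-plot all their roots in the complex plane. A rough sketch, where the number of masses and the ranges are arbitrary choices of mine:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
points = []
for _ in range(300):
    ms = np.sort(rng.choice(np.arange(1, 30), size=5, replace=False))
    coeffs = np.zeros(ms[-1] + 1)   # x^(m_k) - x^(m_k - m_1) - ... - 1
    coeffs[0] = 1
    for m in ms:
        coeffs[m] -= 1
    points.extend(np.roots(coeffs))

points = np.array(points)
plt.scatter(points.real, points.imag, s=2)
plt.gca().set_aspect('equal')
plt.show()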

I suspect that this picture is an approximation to the answer you’d get to Puzzle 2:

If you stare carefully at this picture, you’ll see some patterns, and I’m guessing those are hints of something very beautiful.

Earlier on this blog we looked at roots of polynomials whose coefficients are all 1 or -1:

The beauty of roots.

The pattern is very nice, and it repays deep mathematical study. Here it is, drawn by Sam Derbyshire:


But now we’re looking at polynomials where the leading coefficient is 1 and all the rest are -1 or 0. How does that change things? A lot, it seems!

By the way, the 20 amino acids we commonly see in biology have masses ranging between 57 and 186. It’s not really true that all their masses are different. Here are their masses:

57, 71, 87, 97, 99, 101, 103, 113, 113, 114, 115, 128, 128, 129, 131, 137, 147, 156, 163, 186

I pretended that none of the masses m_i are equal in Puzzle 2, and I left out the fact that only about 1/9th of the coefficients of our polynomial are nonzero. This may affect the picture you get!