Chaitin’s Theorem and the Surprise Examination Paradox

6 October, 2011

If you followed the business about Edward Nelson’s claim to have proved the inconsistency of arithmetic, you might have heard people mention this paper:

• Shira Kritchman and Ran Raz, The surprise examination paradox and the second incompleteness theorem, AMS Notices 57 (December 2010), 1454–1458.

It’s got a great idea in it, which I’d like to explain in a really sloppy way so everyone can understand it. Logic is cool, but most people never get to the cool part because they can’t fight their way through the rather dry technicalities.

You all know the surprise examination paradox, right? The teacher says one day he’ll give a quiz and it will be a surprise. So the kids think “well, it can’t be on the last day then—we’d know it was coming.” And then they think “well, so it can’t be on the day before the last day, either!—we’d know it was coming.” And so on… and they convince themselves it can’t happen at all.

But then the teacher gives it the very next day, and they’re completely surprised.

People still argue a lot about how to settle this paradox. But anyway, Kritchman and Raz use a rigorous version of this paradox together with Chaitin’s incompleteness theorem to demonstrate Gödel’s second incompleteness theorem—which says, very roughly, that:

If math can prove itself consistent, it’s not.

If you’re a logician, I bet this sort of sloppy statement really annoys you. Yeah? Does it? Take a chill pill, dude: this post isn’t for you—it’s for ordinary mortals. I said I want to summarize Kritchman and Raz’s argument in a really sloppy way. If you want precision and details, read their paper.

Okay, here we go:

Chaitin’s theorem, which is already astounding, says there’s some length L such that you can’t prove any particular string of bits needs a program longer than L to print it out. At least, this is so if math is consistent. If it’s not consistent, you can prove anything!

On the other hand, there’s some finite number of programs of length ≤ L. So if you take a list of more numbers than that, say 1, 2, …, N, there’s got to be at least one that needs a program longer than L to print it out.

Okay: there’s got to be at least one. How many? Suppose just one. Then we can go through all programs of length ≤ L, find those that print all the other numbers on our list… and thus, by a process of elimination, find the culprit.

But that means we’ve proved that this culprit is a number that can only be computed by a program of length > L. But Chaitin’s theorem says that’s impossible! At least, if math is consistent.

So there can’t be just one. At least, not if math is consistent.

Okay: suppose there are just two. Well, we can pull the same trick and find out who they are! So there can’t be just two, either. At least, not if math is consistent.

We can keep playing this game until we rule out all the possibilities… and then we’re really stuck. We get a contradiction. At least, if math is consistent.

So if we could prove math is consistent, we’d know it’s not!


A Bet Concerning Neutrinos (Part 2)

5 October, 2011

We negotiated it, and now we’ve agreed:

This bet concerns whether neutrinos can go faster than light. John Baez bets they cannot. For the sake of the environment and out of scientific curiosity, Frederik De Roo bets that they can.

At any time before October 2021, either John or Frederik can claim they have won this bet. When that happens, they will try to agree whether it’s true beyond a reasonable doubt, false beyond a reasonable doubt, or uncertain that neutrinos can (under some conditions) go faster than light. If they cannot agree, the situation counts as uncertain.

If they decide it’s true, John is only allowed to take one round-trip airplane trip during one of the next 5 years. John is allowed to choose which year this is. He can make his choice at any time (before 4 years have passed).

If they decide it’s false, Frederik has to produce 10 decent Azimuth Library articles during one of the next 5 years—where ‘decent’ means ‘deserving of three thumbs up emoticons on the Azimuth Forum’. He is allowed to choose which year this is. He can make his choice at any time (before 4 years have passed).

If they decide it’s uncertain, they can renegotiate the bet (or just decide not to continue it).


Network Theory (Part 11)

4 October, 2011

jointly written with Brendan Fong

Noether proved lots of theorems, but when people talk about Noether’s theorem, they always seem to mean her result linking symmetries to conserved quantities. Her original result applied to classical mechanics, but today we’d like to present a version that applies to ‘stochastic mechanics’—or in other words, Markov processes.

What’s a Markov process? We’ll say more in a minute—but in plain English, it’s a physical system where something hops around randomly from state to state, where its probability of hopping anywhere depends only on where it is now, not its past history. Markov processes include, as a special case, the stochastic Petri nets we’ve been talking about.

Our stochastic version of Noether’s theorem is copied after a well-known quantum version. It’s yet another example of how we can exploit the analogy between stochastic mechanics and quantum mechanics. But for now we’ll just present the stochastic version. Next time we’ll compare it to the quantum one.

Markov processes

We should and probably will be more general, but let’s start by considering a finite set of states, say X. To describe a Markov process we then need a matrix of real numbers H = (H_{i j})_{i, j \in X}. The idea is this: suppose right now our system is in the state j. Then the probability of being in some state i changes as time goes by—and H_{i j} is defined to be the time derivative of this probability right now.

So, if \psi_i(t) is the probability of being in the state i at time t, we want the master equation to hold:

\displaystyle{ \frac{d}{d t} \psi_i(t) = \sum_{j \in X} H_{i j} \psi_j(t) }

This motivates the definition of ‘infinitesimal stochastic’, which we recall from Part 5:

Definition. Given a finite set X, a matrix of real numbers H = (H_{i j})_{i, j \in X} is infinitesimal stochastic if

i \ne j \implies H_{i j} \ge 0

and

\displaystyle{ \sum_{i \in X} H_{i j} = 0 }

for all j \in X.

The inequality says that if we start in the state j, the probability of being found in some other state i, which starts at 0, can’t go down, at least initially. The equation says that the probability of being somewhere or other doesn’t change. Together, these facts imply that:

H_{i i} \le 0

That makes sense: the probability of being in the state i, which starts at 1, can’t go up, at least initially.

Using the magic of matrix multiplication, we can rewrite the master equation as follows:

\displaystyle{\frac{d}{d t} \psi(t) = H \psi(t) }

and we can solve it like this:

\psi(t) = \exp(t H) \psi(0)

If H is an infinitesimal stochastic operator, we will call \exp(t H) a Markov process, and H its Hamiltonian.

(Actually, most people call \exp(t H) a Markov semigroup, and reserve the term Markov process for another way of looking at the same idea. So, be careful.)
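If you like to see formulas come to life in code, here’s a minimal numerical sketch of all this, assuming NumPy and SciPy; the 3×3 matrix H and the initial state are made up just for illustration:

import numpy as np
from scipy.linalg import expm

# A made-up infinitesimal stochastic matrix H: off-diagonal entries are >= 0
# and each column sums to zero. Column j holds the rates for hopping out of
# state j into the various states i.
H = np.array([[-1.0,  0.5,  0.0],
              [ 1.0, -0.5,  2.0],
              [ 0.0,  0.0, -2.0]])
assert np.allclose(H.sum(axis=0), 0.0)           # columns sum to zero
assert np.all(H - np.diag(np.diag(H)) >= 0.0)    # off-diagonal entries are nonnegative

psi0 = np.array([1.0, 0.0, 0.0])                 # start with all the probability in state 0

for t in [0.0, 0.5, 1.0, 5.0]:
    psi_t = expm(t * H) @ psi0                   # psi(t) = exp(tH) psi(0)
    print(t, psi_t, psi_t.sum())                 # the entries stay nonnegative and sum to 1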

Noether’s theorem is about ‘conserved quantities’, that is, observables whose expected values don’t change with time. To understand this theorem, you need to know a bit about observables. In stochastic mechanics an observable is simply a function assigning a number O_i to each state i \in X.

However, in quantum mechanics we often think of observables as matrices, so it’s nice to do that here, too. It’s easy: we just create a matrix whose diagonal entries are the values of the function O. And just to confuse you, we’ll also call this matrix O. So:

O_{i j} = \left\{ \begin{array}{ccl}  O_i & \textrm{if} & i = j \\ 0 & \textrm{if} & i \ne j  \end{array} \right.

One advantage of this trick is that it lets us ask whether an observable commutes with the Hamiltonian. Remember, the commutator of matrices is defined by

[O,H] = O H - H O

Noether’s theorem will say that [O,H] = 0 if and only if O is ‘conserved’ in some sense. What sense? First, recall that a stochastic state is just our fancy name for a probability distribution \psi on the set X. Second, the expected value of an observable O in the stochastic state \psi is defined to be

\displaystyle{ \sum_{i \in X} O_i \psi_i }

In Part 5 we introduced the notation

\displaystyle{ \int \phi = \sum_{i \in X} \phi_i }

for any function \phi on X. The reason is that later, when we generalize X from a finite set to a measure space, the sum at right will become an integral over X. Indeed, a sum is just a special sort of integral!

Using this notation and the magic of matrix multiplication, we can write the expected value of O in the stochastic state \psi as

\int O \psi
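In the little numerical sketch from before, this expected value is just a weighted sum (the numbers are again made up):

O_vals = np.array([0.0, 1.0, 2.0])    # a made-up observable: its value O_i on each state i
O = np.diag(O_vals)                   # the same observable, viewed as a diagonal matrix
psi = np.array([0.2, 0.5, 0.3])       # a stochastic state: nonnegative entries summing to 1
print((O @ psi).sum())                # the expected value: 0(0.2) + 1(0.5) + 2(0.3) = 1.1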

We can calculate how this changes in time if \psi obeys the master equation… and we can write the answer using the commutator [O,H]:

Lemma. Suppose H is an infinitesimal stochastic operator and O is an observable. If \psi(t) obeys the master equation, then

\displaystyle{ \frac{d}{d t} \int O \psi(t) = \int [O,H] \psi(t) }

Proof. Using the master equation we have

\displaystyle{ \frac{d}{d t} \int O \psi(t) = \int O \frac{d}{d t} \psi(t) = \int O H \psi(t) } \qquad \qquad \qquad \; (1)

But since H is infinitesimal stochastic,

\displaystyle{ \sum_{i \in X} H_{i j} = 0  }

so for any function \phi on X we have

\displaystyle{ \int H \phi = \sum_{i, j \in X} H_{i j} \phi_j = 0 }

and in particular

\int H O \psi(t) = 0   \quad \; \qquad \qquad \qquad \qquad   \qquad \qquad \qquad \qquad (2)

Since [O,H] = O H - H O , we conclude from (1) and (2) that

\displaystyle{ \frac{d}{d t} \int O \psi(t) = \int [O,H] \psi(t) }

as desired.   █

The commutator doesn’t look like it’s doing much here, since we also have

\displaystyle{ \frac{d}{d t} \int O \psi(t) = \int O H \psi(t) }

which is even simpler. But the commutator will become useful when we get to Noether’s theorem!
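If you like, you can also check the Lemma numerically, continuing the little sketch from before (same H, \psi(0) and O):

t = 0.7
psi_t = expm(t * H) @ psi0
comm = O @ H - H @ O                 # the commutator [O,H]
print((comm @ psi_t).sum())          # integral of [O,H] psi(t)
print((O @ H @ psi_t).sum())         # integral of O H psi(t): the same, since the integral of H O psi(t) vanishes

f = lambda s: (O @ (expm(s * H) @ psi0)).sum()   # the expected value of O at time s
dt = 1e-6
print((f(t + dt) - f(t - dt)) / (2 * dt))        # crude numerical d/dt: should agree with the numbers above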

Noether’s theorem

Here’s a version of Noether’s theorem for Markov processes. It says an observable commutes with the Hamiltonian iff the expected values of that observable and its square don’t change as time passes:

Theorem. Suppose H is an infinitesimal stochastic operator and O is an observable. Then

[O,H] =0

if and only if

\displaystyle{ \frac{d}{d t} \int O\psi(t) = 0 }

and

\displaystyle{ \frac{d}{d t} \int O^2\psi(t) = 0 }

for all \psi(t) obeying the master equation.

If you know Noether’s theorem from quantum mechanics, you might be surprised that in this version we need not only the observable but also its square to have an unchanging expected value! We’ll explain this, but first let’s prove the theorem.

Proof. The easy part is showing that if [O,H]=0 then \frac{d}{d t} \int O\psi(t) = 0 and \frac{d}{d t} \int O^2\psi(t) = 0. In fact there’s nothing special about these two powers of O; we’ll show that

\displaystyle{ \frac{d}{d t} \int O^n \psi(t) = 0 }

for all n. The point is that since H commutes with O, it commutes with all powers of O:

[O^n, H] = 0

So, applying the Lemma to the observable O^n, we see

\displaystyle{ \frac{d}{d t} \int O^n \psi(t) =  \int [O^n, H] \psi(t) = 0 }

The backward direction is a bit trickier. We now assume that

\displaystyle{ \frac{d}{d t} \int O\psi(t) = \frac{d}{d t} \int O^2\psi(t) = 0 }

for all solutions \psi(t) of the master equation. This implies

\int O H\psi(t) = \int O^2 H\psi(t) = 0

and since this holds for all solutions (in particular, at time zero, for a probability distribution concentrated at any chosen state j), we get

\displaystyle{ \sum_{i \in X} O_i H_{i j} = \sum_{i \in X} O_i^2H_{i j} = 0 }  \qquad \qquad \qquad \qquad  \qquad \qquad (3)

We wish to show that [O,H]= 0.

First, recall that we can think of O as a diagonal matrix with:

O_{i j} = \left\{ \begin{array}{ccl}  O_i & \textrm{if} & i = j \\ 0 & \textrm{if} & i \ne j  \end{array} \right.

So, we have

\begin{array}{ccl} [O,H]_{i j} &=& \displaystyle{ \sum_{k \in X} (O_{i k}H_{k j} - H_{i k} O_{k j}) } \\ \\ &=& O_i H_{i j} - H_{i j}O_j \\ \\ &=& (O_i-O_j)H_{i j} \end{array}

To show this is zero for each pair of elements i, j \in X, it suffices to show that when H_{i j} \ne 0, then O_j = O_i. That is, we need to show that if the system can move from state j to state i, then the observable takes the same value on these two states.

In fact, it’s enough to show that this sum is zero for any j \in X:

\displaystyle{ \sum_{i \in X} (O_j-O_i)^2 H_{i j} }

Why? When i = j, O_j-O_i = 0, so that term in the sum vanishes. But when i \ne j, (O_j-O_i)^2 and H_{i j} are both non-negative—the latter because H is infinitesimal stochastic. So every term in the sum is non-negative, and if they sum to zero, each must individually be zero. Thus for all i \ne j, we have (O_j-O_i)^2H_{i j}=0. But this means that either O_i = O_j or H_{i j} = 0, which is what we need to show.

So, let’s take that sum and expand it:

\displaystyle{ \sum_{i \in X} (O_j-O_i)^2 H_{i j} = \sum_i (O_j^2 H_{i j}- 2O_j O_i H_{i j} +O_i^2 H_{i j}) }

which in turn equals

\displaystyle{  O_j^2\sum_i H_{i j} - 2O_j \sum_i O_i H_{i j} + \sum_i O_i^2 H_{i j} }

The three terms here are each zero: the first because H is infinitesimal stochastic, and the latter two by equation (3). So, we’re done!   █
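Here’s a numerical illustration of the ‘forward direction’, again with made-up numbers and assuming NumPy and SciPy: take a Hamiltonian whose transition graph splits into two separate pieces, and an observable that is constant on each piece, so it commutes with the Hamiltonian. Then the expected values of the observable and its square stay put as time passes:

import numpy as np
from scipy.linalg import expm

H2 = np.array([[-1.0,  2.0,  0.0,  0.0],      # states 0 and 1 talk only to each other...
               [ 1.0, -2.0,  0.0,  0.0],
               [ 0.0,  0.0, -3.0,  0.5],      # ...and states 2 and 3 talk only to each other
               [ 0.0,  0.0,  3.0, -0.5]])
O2 = np.diag([7.0, 7.0, 4.0, 4.0])            # constant on {0,1} and on {2,3}
assert np.allclose(O2 @ H2 - H2 @ O2, 0.0)    # so [O2, H2] = 0

psi = np.array([0.1, 0.2, 0.3, 0.4])          # an arbitrary stochastic state
for t in [0.0, 1.0, 10.0]:
    psi_t = expm(t * H2) @ psi
    print((O2 @ psi_t).sum(), (O2 @ O2 @ psi_t).sum())   # both expected values stay constant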

Markov chains

So that’s the proof… but why do we need both O and its square to have an expected value that doesn’t change with time to conclude [O,H] = 0? There’s an easy counterexample if we leave out the condition involving O^2. However, the underlying idea is clearer if we work with Markov chains instead of Markov processes.

In a Markov process, time passes by continuously. In a Markov chain, time comes in discrete steps! We get a Markov process by forming \exp(t H) where H is an infinitesimal stochastic operator. We get a Markov chain by forming the powers U, U^2, U^3, \dots of a ‘stochastic operator’ U. Remember:

Definition. Given a finite set X, a matrix of real numbers U = (U_{i j})_{i, j \in X} is stochastic if

U_{i j} \ge 0

for all i, j \in X and

\displaystyle{ \sum_{i \in X} U_{i j} = 1 }

for all j \in X.

The idea is that U describes a random hop, with U_{i j} being the probability of hopping to the state i if you start at the state j. These probabilities are nonnegative and sum to 1.

Any stochastic operator gives rise to a Markov chain U, U^2, U^3, \dots . And in case it’s not clear, that’s how we’re defining a Markov chain: the sequence of powers of a stochastic operator. There are other definitions, but they’re equivalent.

We can draw a Markov chain by drawing a bunch of states and arrows labelled by transition probabilities, which are the matrix elements U_{i j}:

Here is Noether’s theorem for Markov chains:

Theorem. Suppose U is a stochastic operator and O is an observable. Then

[O,U] =0

if and only if

\displaystyle{  \int O U \psi = \int O \psi }

and

\displaystyle{ \int O^2 U \psi = \int O^2 \psi }

for all stochastic states \psi.

In other words, an observable commutes with U iff the expected values of that observable and its square don’t change when we evolve our state one time step using U.

You can probably prove this theorem by copying the proof for Markov processes:

Puzzle. Prove Noether’s theorem for Markov chains.

But let’s see why we need the condition on the square of the observable! That’s the intriguing part. Here’s a nice little Markov chain:

where we haven’t drawn arrows labelled by 0. So, state 1 has a 50% chance of hopping to state 0 and a 50% chance of hopping to state 2; the other two states just sit there. Now, consider the observable O with

O_i = i

It’s easy to check that the expected value of this observable doesn’t change with time:

\displaystyle{  \int O U \psi = \int O \psi }

for all \psi. The reason, in plain English, is this. Nothing at all happens if you start at states 0 or 2: you just sit there, so the expected value of O doesn’t change. If you start at state 1, the observable equals 1. You then have a 50% chance of going to a state where the observable equals 0 and a 50% chance of going to a state where it equals 2, so its expected value doesn’t change: it still equals 1.

On the other hand, we do not have [O,U] = 0 in this example, because we can hop between states where O takes different values. Furthermore,

\displaystyle{  \int O^2 U \psi \ne \int O^2 \psi }

After all, if you start at state 1, O^2 equals 1 there. You then have a 50% chance of going to a state where O^2 equals 0 and a 50% chance of going to a state where it equals 4, so its expected value changes!

So, that’s why \int O U \psi = \int O \psi for all \psi is not enough to guarantee [O,U] = 0. The same sort of counterexample works for Markov processes, too.
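Here is that little counterexample in code, assuming NumPy: states 0, 1, 2, where state 1 hops to 0 or 2 with probability 1/2 each, and states 0 and 2 just sit there:

import numpy as np

U = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.5, 1.0]])
assert np.allclose(U.sum(axis=0), 1.0) and np.all(U >= 0)   # U is stochastic

O = np.diag([0.0, 1.0, 2.0])       # the observable O_i = i
psi = np.array([0.0, 1.0, 0.0])    # start with all the probability in state 1

print((O @ U @ psi).sum(), (O @ psi).sum())           # 1.0 and 1.0: the expected value of O is unchanged
print((O @ O @ U @ psi).sum(), (O @ O @ psi).sum())   # 2.0 and 1.0: the expected value of O^2 changes
print(np.allclose(O @ U - U @ O, 0.0))                # False: O does not commute with U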

Finally, we should add that there’s nothing terribly sacred about the square of the observable. For example, we have:

Theorem. Suppose H is an infinitesimal stochastic operator and O is an observable. Then

[O,H] =0

if and only if

\displaystyle{ \frac{d}{d t} \int f(O) \psi(t) = 0 }

for all smooth f: \mathbb{R} \to \mathbb{R} and all \psi(t) obeying the master equation.

Theorem. Suppose U is a stochastic operator and O is an observable. Then

[O,U] =0

if and only if

\displaystyle{  \int f(O) U \psi = \int f(O) \psi }

for all smooth f: \mathbb{R} \to \mathbb{R} and all stochastic states \psi.

These make the ‘forward direction’ of Noether’s theorem stronger… and in fact, the forward direction, while easier, is probably more useful! However, if we ever use Noether’s theorem in the ‘reverse direction’, it might be easier to check a condition involving only O and its square.


The Network of Global Corporate Control

3 October, 2011

While protesters are trying to occupy Wall Street and spread their movement to other cities…

… others are trying to mathematically analyze the network of global corporate control:

• Stefania Vitali, James B. Glattfelder and Stefano Battiston, The network of global corporate control.

Here’s a little ‘directed graph’:

Very roughly, a directed graph consists of some vertices and some edges with arrows on them. Vitali, Glattfelder and Battiston built an enormous directed graph by taking 43,060 transnational corporations and seeing who owns a stake in whom:


If we zoom in on the financial sector, we can see the companies those protestors are upset about:


Zooming out again, we could check that the graph as a whole consists of many pieces. But the largest piece contains 3/4 of all the corporations studied, including all the top corporations by economic value, and accounts for 94.2% of the total operating revenue.

Within this there is a large ‘core’, containing 1347 corporations, each of which directly and/or indirectly owns shares in every other member of the core. On average, each member of the core has direct ties to 20 others. As a result, about 3/4 of the ownership of firms in the core remains in the hands of firms of the core itself. As the authors put it:

This core can be seen as an economic “super-entity” that raises new important issues both for researchers and policy makers.

If you’ve never thought much about modern global capitalism, the existence of this ‘core’ may seem shocking and scary… like an enormous invisible spiderweb wrapping around the globe, dominating us, controlling every move we make. Or maybe you can see a tremendous new business opportunity, waiting to be exploited!

But if you’ve already thought about these things, the existence of this core probably seems obvious. What’s new here is the use of certain ideas in math—graph theory, to be precise—to study it quantitatively.

So, let me say a bit more about the math! What’s a directed graph, exactly? It’s a set V and a subset E of V \times V. We call the elements of V vertices and the elements of E edges. Since an edge is an ordered pair of vertices, it has a ‘starting point’ and an ‘endpoint’—that’s why we call this kind of graph ‘directed’.

(Note that we can have an edge going from a vertex to itself, but we cannot have more than one edge going from some vertex v to some vertex v'. If you don’t like this, use some other kind of graph: there are many kinds!)

I spoke about ‘pieces’ of a directed graph, but that’s not a precise term, since there are various kinds of pieces:

• A connected component is a maximal set of vertices such that we can get from any one to any other by an undirected path, meaning a path of edges where we don’t care which way the arrows point.

• A strongly connected component is a maximal set of vertices such that we can get from any one to any other by a directed path, meaning a path of edges where at each step we walk ‘forwards’, along the direction of the arrow.

I didn’t state these definitions very precisely, but I hope you can fill in the details. Maybe an example will help! This graph has three strongly connected components, shaded in blue, but just one connected component:

So when I said this:

The graph consists of many pieces, but the largest contains 3/4 of all the corporations studied, including all the top corporations by economic value, and accounts for 94.2% of the total operating revenue.

I was really talking about the largest connected component. But when I said this:

Within this there is a large ‘core’ containing 1347 corporations, each of which directly and/or indirectly owns shares in every other member of the core.

I was really talking about a strongly connected component. When you look at random directed graphs, there often turns out to be one strongly connected component that’s a lot bigger than all the rest. This is called the core, or the giant strongly connected component.
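If you want to play with these concepts yourself, here’s a small sketch using the networkx library on a made-up toy ownership graph, where an edge (a, b) means ‘a owns a stake in b’:

import networkx as nx

G = nx.DiGraph([('a', 'b'), ('b', 'c'), ('c', 'a'),   # a little 3-company 'core'
                ('d', 'a'),                           # d feeds into the core
                ('c', 'e'),                           # the core feeds into e
                ('f', 'g')])                          # a separate piece

print(list(nx.weakly_connected_components(G)))        # the connected components: two of them
print(list(nx.strongly_connected_components(G)))      # the strongly connected components
core = max(nx.strongly_connected_components(G), key=len)
print(core)                                           # {'a', 'b', 'c'}: the giant strongly connected component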

In fact there’s a whole study of random directed graphs, which is relevant not only to corporations, but also to webpages! Webpages link to other webpages, giving a directed graph. (True, one webpage can link to another more than once, but we can either ignore that subtlety or use a different concept of graph that handles this.)

And it turns out that for various types of random directed graphs, we tend to get a so-called ‘bowtie structure’, like this:

In the middle you see the core, or giant strongly connected component, labelled SCC. (Yes, that’s where Exxon sits, like a spider in the middle of the web!)

Connected to this by paths going in, we have the left half of the bowtie, labelled IN. Connected to the core by paths going out, we have the right half of the bowtie, labelled OUT.

There are also usually some IN-tendrils going out of the IN region, and some OUT-tendrils going into the OUT region.

There may also be tubes going from IN to OUT while avoiding the core.

All this is one connected component: the largest one. But finally, not shown here, there may be a bunch of other smaller connected components. Presumably if these are large enough they have a similar structure.

Now: can we use this knowledge to do something good? Or is it all too obvious so far? After all, so far we’re just saying the network of global corporate control is a fairly ordinary sort of random directed graph. Maybe we need to go beyond this, and think about ways in which it’s not ordinary. In fact, I should reread the paper with that in mind.

Or… well, maybe you have some ideas.

(By the way, I don’t think ‘overthrowing’ the network of global corporate control is a feasible or even desirable project. I’m not espousing any sort of revolutionary ideology, and I’m not interested in discussing politics here. I’m more interested in understanding the world and looking for some leverage points where we can gently nudge things in slightly better directions. If there were a way to do this by taking advantage of the power of corporations, that would be cool.)


A Bet Concerning Neutrinos

27 September, 2011

Over on Google+ I wrote:

I’m willing to take bets that this faster-than-light neutrino business will turn out to be wrong. We can negotiate the detailed terms, the odds, and the stakes.

But beware: I’m still enjoying the case of scotch I won from David Ring. I bet there’d be no “strong evidence for supersymmetry” within the first year of operation of the Large Hadron Collider.

It took a couple of days, but I finally got someone willing to take me up on this. And—surprise!—it was none other than Frederik De Roo, one of the key contributors to the Azimuth Project.

But he’s playing for higher stakes than I’d expected:

Hi John,

actually I’m willing to take a bet.

I propose to bet (even though I don’t believe it) that

neutrinos can go faster than light

The loser of the bet will promise to the winner not to fly for one whole year! (for a year chosen within a specified number of years after the bet has expired)

How about that? The earth wins regardless who’s right ;-)

I asked him if we could discuss the details here, and he said okay.

It’s a tricky business. While I’ve got the odds on my side, I’ve also got more to lose!

Frederik lives in Europe, where there are lots of trains. His idea of a fun vacation is a month-long bike trip. What’s he got to lose?

I could easily survive a year of not flying to conferences. It would hurt a bit. Still, I’d say yes in a minute if it were just up to me. But Lisa and I have permanent positions at the University of California in Riverside, and we’re trying to work out a deal where we work in Singapore every summer. So, I can’t really agree to this bet unless I get her okay!

How do I convince a non-physicist—and not just any non-physicist, but my wife—that it’s really, really safe to bet a summer of being together on the possibility that neutrinos go faster than light?

We spent seven years on opposite sides of the country before she got a job at UC Riverside. We promised we’d never do something like that again. And now I’m saying “oh, don’t worry, dear: special relativity is very well tested.” If you haven’t been in this situation, you don’t know how unconvincing that sounds.

Should I look into cruises from Southern California to Singapore? How long do those take, anyway? It would be a bummer to get there only to have to head straight back.

What would you say, Frederik, if I changed the terms of the bet to something like this? If I lose the bet, for each plane trip I take during the specified year, I’ll donate $10,000 to your favorite environmental organization. Carbon offsets, or whatever you like. That way if I lose, I suffer, but not my marriage.


American Oil Boom?

26 September, 2011

If this is for real, it’s the biggest news I’ve heard for a long time:

Two years ago, America was importing about two thirds of its oil. Today, according to the Energy Information Administration, it imports less than half. And by 2017, investment bank Goldman Sachs predicts the US could be poised to pass Saudi Arabia and overtake Russia as the world’s largest oil producer.

This is from:

• New boom reshapes oil world, rocks North Dakota, All Things Considered, National Public Radio, 25 September 2011.

The new boom is due to technologies like fracking (short for hydraulic fracturing) and directional drilling. According to an estimate in this article, in the last few years advances in these technologies have made available up to 11 billion barrels of oil in the Bakken formation under North Dakota and Montana. There’s also a lot under the Canadian side of the border:

This map is from:

• Jerry Langton, Bakken Formation: Will it fuel Canada’s oil industry?, CBC News, 27 June 2008.

How big is this boom going to be? What will it mean? The National Public Radio story says this:

Amy Myers Jaffe of Rice University says in the next decade, new oil in the US, Canada and South America could change the center of gravity of the entire global energy supply.

“Some are now saying, in five or 10 years’ time, we’re a major oil-producing region, where our production is going up,” she says.

The US, Jaffe says, could have 2 trillion barrels of oil waiting to be drilled. South America could hold another 2 trillion. And Canada? 2.4 trillion. That’s compared to just 1.2 trillion in the Middle East and north Africa.

Jaffe says those new oil reserves, combined with growing turmoil in the Middle East, will “absolutely propel more and more investment into the energy resources in the Americas.”

Russia is already feeling the growth of American energy, Jaffe says. As the U.S. produces more of its own natural gas, Europe is free to purchase liquefied natural gas the US is no longer buying.

“They’re buying less natural gas from Russia,” Jaffe says. “So Russia would only supply 10 percent of European natural gas demand by 2030. That means the Russians are no longer powerful.”

The American energy boom, Jaffe says, could endanger many green-energy initiatives that have gained popularity in recent years. But royalties and revenue from U.S. production of oil and natural gas, she adds, could be used to invest in improving green technology.

What do you know about this news? Is it for real, or is it being hyped? What do the smartest of the ‘peak oil’ crowd say?

I’ve read about the environmental impacts of fracking, and the consequences for global warming are evident. Since ‘carbon is forever’, to reduce carbon dioxide levels we need to either stop burning carbon or figure out a way to sequester CO2. A new oil boom won’t help us with that. And in the long run, we’ll still run out.

But the short run could last decades. Suppose people go ahead, ignore the dangers, and ‘drill, baby, drill’. How will geopolitics, the world economy, and the environment be affected?

Opinions are fine—everyone’s got one—but facts are better… and facts with references are the best.


Azimuth on Google Plus (Part 2)

24 September, 2011

Here are some of the tidbits I’ve posted to my Azimuth circle on Google+ recently. If you want to join this circle, just let me know!

First, here’s a random example of the fun stuff I’ve been bumping into over on Google+. This is a video of the Aurora Australis taken by the crew of the International Space Station on 17 September 2011 as they passed from south of Madagascar to just north of Australia over the Indian Ocean:

Note how the aurora lights up the bottom of the space station!

Next, the serious stuff:

• Unlike some US presidential candidates, the CIA takes climate change seriously. After all, intelligence is the CIA’s middle name. Two years ago they created the Center on Climate Change and National Security to study the political ramifications of a hotter world.

However, they just said that all of the work being done at this center is classified.

• Barry Brook just wrote about the Azimuth Project in his BraveNewClimate blog. He likes it! He says he’s found it to be “highly useful”, and he invites you all to join:

Why bother? Because if done credibly, it may well be that resources like this will become one-stop-shops that you can recommend to your family, friends, business associates or even politicians, to make informed rather than evidence-free choices about our future options.

If you want to know more, visit the Azimuth Forum and see what we’re doing. Even better, join it and tell us what you’re doing!

• Sheril Turlington writes about ocean acidification on Science Progress:

An international team of marine biologists recently traveled to Papua New Guinea where excess CO2 released from volcanic activity has already decreased local ocean pH to the levels that are expected globally by 2100. In this area, they found that more than 90 percent of the region’s coral reef species were lost.

• Climate Communication is a new science and outreach organization dedicated to improving public understanding of climate change science. The director explains the idea here.

For scientists, we’re offering workshops in communicating climate science that go far beyond typical media training. We focus on the specific challenges of communicating about climate change. We go beyond problems of language to consider psychological and cultural issues. Our Science Director, Richard Somerville, and I led a climate communication workshop at the American Geophysical Union meeting in December 2010 and we’ll both be speaking there again this year. We led a workshop at NASA Jet Propulsion Lab on communicating about climate change. And we have more workshops planned. We welcome inquiries about holding additional workshops and professional development sessions.

For journalists, we’re making the latest science available in a more accessible form and helping them identify the best experts to interview on particular topics. In a fast-paced and challenging media environment, we’re bringing the science to journalists in ways that are credible and helpful. Last week we held a telephone press conference featuring leading climate scientists discussing the linkages between extreme weather and climate change. We also posted a summary of the latest peer-reviewed science on that subject. Journalists are welcome to contact us and we’ll do our best to help. 

For the public, we’re producing clear, brief summaries of the most important things they need to know about climate change, using not only words but also videos and animations. We’re providing concise answers to the key questions people ask: What’s happening to climate and why? How will it affect us? And what can we do about it? 

The Yale and George Mason Universities’ studies tell us the questions most Americans want answered. Our science advisors answer those questions and more, simply and clearly, at our website in both text and videos.

Our Science Advisors include many of the world’s leading climate scientists, who are also great communicators: Ken Caldeira, Julia Cole, Robert Corell, Kerry Emanuel, Katharine Hayhoe, Greg Holland, Jeff Kiehl, Michael MacCracken, Michael Mann, Jeff Masters, Jerry Meehl, Jonathan Overpeck, Camille Parmesan, Barrett Rock, Benjamin Santer, Kevin Trenberth, Warren Washington, and Don Wuebbles.

You can read their bios, learn what they do outside of science, and even see them in action on our website, in brief bio videos. We also put together a short video on what the public really needs to know about climate change. And there are many more videos on common climate questions, extreme weather and climate change, and other topics. We hope to help amplify their voices and bring more clarity to public discussions of this great challenge.

• You can get a lot of climate and geological data in the OPenDAP format by going here. Temperature data, solar radiation data, coral reef data, comparisons between the present and the Last Glacial Maximum, and much much more!

• R is a software environment optimized for doing statistics. Want to use it to analyze time series data? Seasonal adjustments and all that? Then this book is for you!

• The Institute for New Economic Thinking is giving out grants to study issues including “Sustainable Economics” and “Models of Economic Development, Innovation and Growth”.

