The Faculty of 1000

31 January, 2012

As of this minute, 1890 scholars have signed a pledge not to cooperate with the publisher Elsevier. People are starting to notice. According to this Wired article, the open-access movement is “catching fire”:

• David Dobbs, Testify: the open-science movement catches fire, Wired, 30 January 2012.


Now is a good time to take more substantial actions. But what?

Many things are being discussed, but it’s good to spend a bit of time thinking about the root problems and the ultimate solutions.

The world-wide web has made journals obsolete: it would be better to put papers on freely available archives and then let boards of top scholars referee them. But how do we get to this system?

In math and physics we have the arXiv, but nobody referees those papers. In biology and medicine, a board called the Faculty of 1000 chooses and evaluates the best papers, but there’s no archive: they get those papers from traditional journals.

Whoops—never mind! That was yesterday. Now the Faculty of 1000 has started an archive!

• Rebecca Lawrence, F1000 Research – join us and shape the future of scholarly communication, F1000, 30 January 2012.

• Ivan Oransky, An arXiv for all of science? F1000 launches new immediate publication journal, Retraction Watch, 30 January 2012.

This blog article says “an arXiv for all science”, but it seems the new F1000 Research archive is just for biology and medicine. So now it’s time for the mathematicians and physicists to start catching up.


Ban Elsevier

26 January, 2012

Please take the pledge not to do business with Elsevier. 404 scientists have done it so far:

The cost of knowledge.

You can separately say you

1) won’t publish with them,
2) won’t referee for them, and/or
3) won’t do editorial work for them.

At least do number 2): how often can you do something good by doing less work? When a huge corporation relies so heavily on nasty monopolistic practices and unpaid volunteer labor, they leave themselves open to this.

This pledge website is the brainchild of Tim Gowers, a Fields medalist and prominent math blogger:

• Tim Gowers, Elsevier: my part in its downfall and http://thecostofknowledge.com.

In case you’re not familiar with the Elsevier problem, here’s something excerpted from my website. This does not yet mention Elsevier’s recent support of the Research Works Act, which would try to roll back the US government’s requirement that taxpayer-funded medical research be made freely available online. Nor does it mention the fake medical journals created by Elsevier, where what looked like peer-reviewed papers were secretly advertisements paid for by drug companies! Nor does it mention the Chaos, Solitons and Fractals fiasco. Indeed, it’s hard keeping up with Elsevier’s dirty deeds!

The problem and the solutions

The problem of highly priced science journals is well-known. A wave of mergers in the publishing business has created giant firms with the power to extract ever higher journal prices from university libraries. As a result, libraries are continually being forced to cough up more money or cut their journal subscriptions. It’s really become a crisis.

Luckily, there are also two counter-trends at work. In mathematics and physics, more and more papers are available from a free electronic database called the arXiv, and journals are beginning to let papers stay on this database even after they are published. In the life sciences, PubMed Central plays a similar role.

There are also a growing number of free journals. Many of these are peer-reviewed, and most are run by academics instead of large corporations.

The situation is worst in biology and medicine: the extremely profitable spinoffs of research in these subjects have made it easy for journals to charge outrageous prices and limit the free nature of discourse. A non-profit organization called the Public Library of Science was formed to fight this, and circulated an open letter calling on publishers to adopt reasonable policies. 30,000 scientists signed this and pledged to:

publish in, edit or review for, and personally subscribe to only those scholarly and scientific journals that have agreed to grant unrestricted free distribution rights to any and all original research reports that they have published, through PubMed Central and similar online public resources, within 6 months of their initial publication date.

Unsurprisingly, the response from publishers was chilly. As a result, the Public Library of Science started its own free journals in biology and medicine, with the help of a 9 million dollar grant from the Gordon and Betty Moore Foundation.

A number of other organizations are also pushing for free access to scholarly journals, such as Create Change, the Scholarly Publishing and Academic Resources Coalition, and the Budapest Open Access Initiative, funded by George Soros.

Editorial boards are beginning to wise up, too. On August 10, 2006, all the editors of the math journal Topology resigned to protest the outrageous prices of the publisher, Reed Elsevier. In 2007, the editorial board of the Springer journal K-Theory followed suit. The École Normale Supérieure has also stopped having Elsevier publish the journal Annales Scientifiques de l’École Normale Supérieure.

So, we may just win this war! But only if we all do our part.

What we can do

What can we do to keep academic discourse freely available to all? Here are some things:

1. Don’t publish in overpriced journals.

2. Don’t do free work for overpriced journals (like refereeing and editing).

3. Put your articles on the arXiv or a similar site before publishing them.

4. Only publish in journals that let you keep your articles on the arXiv or a similar site.

5. Support free journals by publishing in them, refereeing for them, editing them… even starting your own!

6. Help make sure free journals and the arXiv stay free.

7. Help start a system of independent ‘referee boards’ for arXiv papers. These can referee papers and help hiring, tenure and promotion committees to assess the worth of papers, eliminating the last remaining reasons for the existence of traditional for-profit journals.

The nice thing is that most of these are easy to do! Only items 5 through 7 require serious work. As for item 4, a lot of math and physics journals not only let you keep your article on the arXiv, but let you submit it by telling them its arXiv number! In math it’s easy to find these journals, because there’s a public list of them.

Of course, you should read the copyright agreement that you’ll be forced to sign before submitting to a journal or publishing a book. Check to see if you can keep your work on the arXiv, on your own website, etcetera. You can pretty much assume that any rights you don’t explicitly keep, your publisher will get. Eric Weisstein didn’t do this, and look what happened to him: he got sued and spent over a year in legal hell!

Luckily it’s not hard to read these copyright agreements: you can get them off the web. An extensive list is available from Sherpa, an organization devoted to free electronic archives.

If you think maybe you want to start your own journal, or move an existing journal to a cheaper publisher, read Joan Birman’s article about this. Go to the Create Change website and learn what other people are doing. Also check out SPARC—the Scholarly Publishing and Academic Resources Coalition. They can help. And try the Budapest Open Access Initiative—they give out grants.

You can also support the Public Library of Science or join the Open Archives Initiative.

Also: if you like mathematics, tell your librarian about Mathematical Sciences Publishers, a nonprofit organization run by mathematicians for the purpose of publishing low-cost, high-quality math journals.

Which journals are overpriced?

In 1997 Robion Kirby urged mathematicians not to submit papers to, edit for, or referee for overpriced journals. I think this suggestion is great, and it applies not just to mathematics but to all disciplines. There is really no good reason for us to donate our work to profit-making corporations that sell it back to us at exorbitant prices! Indeed, in climate science this has a terrible effect: crackpot bloggers distribute their misinformation free of charge, while many important, serious climate science papers are hidden, available only to people who work at institutions with expensive subscriptions.

But how can you tell if a journal is overpriced? In mathematics, up-to-date information on the rise of journal prices is available from the American Mathematical Society. They even include an Excel spreadsheet that lets you do your own calculations with this data! Some of this information is nicely summarized on a webpage by Ulf Rehmann. Using these tools you can make up your own mind about which journals are too expensive to be worth supporting with your free volunteer labor.

What about other subjects? I don’t know. Maybe you do?

When I first learned how bad the situation was, I started by boycotting all journals published by Reed Elsevier. This juggernaut was formed by the merger of Reed Publishing and Elsevier Scientific Press in 1993. In August 2001 it bought Harcourt Press—which in turn owned Academic Press, which ran a journal I helped edit, Advances in Mathematics. I don’t work for that journal anymore! The reason is that Reed Elsevier is a particularly bad culprit when it comes to charging high prices. You can see this from the lists of journal prices above, and you can also see it in the business news. In 2002, Forbes magazine wrote:

If you are not a scientist or a lawyer, you might never guess which company is one of the world’s biggest in online revenue. Ebay will haul in only $1 billion this year. Amazon has $3.5 billion in revenue but is still, famously, losing money. Outperforming them both is Reed Elsevier, the London-based publishing company. Of its $8 billion in likely sales this year, $1.5 billion will come from online delivery of data, and its operating margin on the internet is a fabulous 22%.

Credit this accomplishment to two things. One is that Reed primarily sells not advertising or entertainment but the dry data used by lawyers, doctors, nurses, scientists and teachers. The other is its newfound marketing hustle: Its CEO since 1999 has been Crispin Davis, formerly a soap salesman.

But Davis will have to keep hustling to stay out of trouble. Reed Elsevier has fat margins and high prices in a business based on information—a commodity, and one that is cheaper than ever in the internet era. New technologies and increasingly universal access to free information make it vulnerable to attack from below. Today pirated music downloaded from the web ravages corporate profits in the music industry. Tomorrow could be the publishing industry’s turn.

Some customers accuse Reed Elsevier of price gouging. Daniel DeVito, a patent lawyer with Skadden, Arps, Slate, Meagher & Flom, is a fan of Reed’s legal-search service, but he himself does free science searches on the Google site before paying for something like Reed’s ScienceDirect—and often finds what he’s looking for at no cost. Reed can ill afford to rest.

Why should we slave away unpaid to keep Crispin Davis and his ilk rolling in dough? There’s really no good reason.

Sneaky tricks

To fight against the free journals and the arXiv, publishing companies are playing sneaky tricks like these:

Proprietary Preprint Archives. Examples included ChemWeb and something they called “The Mathematics Preprint Server”. The latter was especially devious, because mathematicians used to call the arXiv “the mathematics preprint server”.

However, the Mathematics Preprint Server didn’t fool many smart people, so lots of the papers they got were crap, like a supposed proof of Goldbach’s conjecture, and a claim that the rotation of a galactic supercluster is due to a "topological defect" in spacetime. Eventually Elsevier gave up and stopped accepting new papers on their preprint server. Now it’s a laughable shadow of its former self. Similarly, ChemWeb was sold off.

Web Spamming. More recently, publishers have tried a new trick: “web spamming”, also known as “search engine spamming” or “cloaking”. The company gives search engine crawlers access to full-text articles — but when you try to read these articles, you get a “doorway page” demanding a subscription or payment. Sometimes you’ll even be taken to a page that has nothing to do with the paper you thought you were about to see!

Culprits include Springer, Reed Elsevier, and the Institute of Electrical and Electronics Engineers. The last one seems to have quit — but check out their PowerPoint presentation on this subject, courtesy of Carl Willis.

If you see pages like this, report them to Google or your favorite search engine.

Journal Bundling. Worse still is the strategy of “bundling” subscriptions into huge all-or-nothing packages, so libraries can’t save money by cancelling a single journal. It’s a clever trap, especially because these bundled subscriptions look like a good deal at first. The cost becomes apparent only later. Now university libraries are being bankrupted as the prices of these bundles keep soaring. The library of my own university, U.C. Riverside, barely has money for any books anymore!

Luckily, people are catching on. In 2003, Cornell University bravely dropped their subscription to 930 Elsevier journals. Four North Carolina universities have joined the revolt, and the University of California has also been battling Elsevier. For other actions universities have taken, read Peter Suber’s list.

Legal bullying. Large corporations like to scare people by means of threats of legal action backed up by deep pockets. A classic example is the lawsuit launched by Gordon and Breach against the American Physical Society for publishing lists of journal prices. Luckily they lost this suit.

Hiring a Dr. Evil lookalike as their PR consultant.



Classical Mechanics versus Thermodynamics (Part 2)

23 January, 2012

I showed you last time that in many branches of physics—including classical mechanics and thermodynamics—we can see our task as minimizing or maximizing some function. Today I want to show how we get from that task to symplectic geometry.

So, suppose we have a smooth function

S: Q \to \mathbb{R}

where Q is some manifold. A minimum or maximum of S can only occur at a point where

d S = 0

Here the differential d S is a 1-form on Q. If we pick local coordinates q^i in some open set of Q, then we have

\displaystyle {d S = \frac{\partial S}{\partial q^i} dq^i }

and these derivatives \displaystyle{ \frac{\partial S}{\partial q^i} } are very interesting. Let’s see why:

Example 1. In classical mechanics, consider a particle on a manifold X. Suppose the particle starts at some fixed position at some fixed time. Suppose that it ends up at the position x at time t. Then the particle will seek to follow a path that minimizes the action given these conditions. Assume this path exists and is unique. The action of this path is then called Hamilton’s principal function, S(x,t). Let

Q = X \times \mathbb{R}

and assume Hamilton’s principal function is a smooth function

S : Q \to \mathbb{R}

We then have

d S = p_i dq^i - H d t

where q^i are local coordinates on X,

\displaystyle{ p_i = \frac{\partial S}{\partial q^i} }

is called the momentum in the ith direction, and

\displaystyle{ H = - \frac{\partial S}{\partial t} }

is called the energy. The minus signs here are basically just a mild nuisance. Time is different from space, and in special relativity the difference comes from a minus sign, but I don’t think that’s the explanation here. We could get rid of the minus signs by working with negative energy, but it’s not such a big deal.
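As a sanity check on Example 1, here is a minimal numerical sketch (my own toy setup, not from the post): a free particle of mass m on the line, starting at the origin at time zero. The minimizing path is a straight line, and its action works out to S(x,t) = mx²/(2t). The partial derivatives of S should then reproduce the momentum p = mv and energy H = mv²/2, minus signs and all.

```python
# Toy check of Example 1 (my construction): a free particle of mass m on
# the line, starting at the origin at time 0. The straight-line path
# minimizes the action, giving Hamilton's principal function
#   S(x, t) = m x^2 / (2 t).
# Then dS/dx should equal p = m v and -dS/dt should equal H = m v^2 / 2.

m = 2.0  # an arbitrary mass

def S(x, t):
    return m * x**2 / (2 * t)

def partial(f, x, t, wrt, h=1e-6):
    """Central-difference partial derivative of f(x, t)."""
    if wrt == 0:
        return (f(x + h, t) - f(x - h, t)) / (2 * h)
    return (f(x, t + h) - f(x, t - h)) / (2 * h)

x, t = 3.0, 1.5
v = x / t                   # velocity of the straight-line path
p = partial(S, x, t, 0)     # momentum: dS/dx
H = -partial(S, x, t, 1)    # energy: -dS/dt

print(p, m * v)             # both ~4.0
print(H, 0.5 * m * v**2)    # both ~4.0
```

The evaluation point (x, t) = (3, 1.5) is arbitrary; any point with t > 0 works.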

Example 2. In thermodynamics, consider a system with internal energy U and volume V. Then the system will choose a state that maximizes the entropy given these constraints. Assume this state exists and is unique. Call the entropy of this state S(U,V). Let

Q = \mathbb{R}^2

and assume the entropy is a smooth function

S : Q \to \mathbb{R}

We then have

d S = \displaystyle{\frac{1}{T} d U + \frac{P}{T} d V }

where T is the temperature of the system, and P is the pressure. The slight awkwardness of this formula makes people favor other setups.
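Here is a hedged numerical sketch of Example 2, using a toy entropy of my own choosing: a monatomic ideal gas with all constants (including nR) set to 1, so S(U,V) = ln V + (3/2) ln U. Reading off T and P from the partial derivatives of S should then reproduce the ideal gas law PV = nRT = T.

```python
import math

# Toy sketch of Example 2 (my simplification, not from the post):
# a monatomic ideal gas with n*R set to 1, so S(U, V) = ln V + (3/2) ln U.
# The standard identities T = 1/(dS/dU) and P = T * (dS/dV) should then
# reproduce the ideal gas law P V = (n R) T = T.

def S(U, V):
    return math.log(V) + 1.5 * math.log(U)

def partial(f, x, y, wrt, h=1e-6):
    """Central-difference partial derivative of f(x, y)."""
    if wrt == 0:
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

U, V = 3.0, 2.0
T = 1.0 / partial(S, U, V, 0)   # temperature: 1/(dS/dU), here 2U/3
P = T * partial(S, U, V, 1)     # pressure: T*(dS/dV), here T/V

print(T, P)        # ~2.0 and ~1.0
print(P * V - T)   # ~0: the ideal gas law with nR = 1
```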

Example 3. In thermodynamics there are many setups for studying the same system using different minimum or maximum principles. One of the most popular is called the energy scheme. If internal energy increases with increasing entropy, as is usually the case, this scheme is equivalent to the one we just saw.

In the energy scheme we fix the entropy S and volume V. Then the system will choose a state that minimizes the internal energy given these constraints. Assume this state exists and is unique. Call the internal energy of this state U(S,V). Let

Q = \mathbb{R}^2

and assume the internal energy is a smooth function

U : Q \to \mathbb{R}

We then have

d U = T d S - P d V

where

\displaystyle{ T = \frac{\partial U}{\partial S} }

is the temperature, and

\displaystyle{ P = - \frac{\partial U}{\partial V} }

is the pressure. You’ll note the formulas here closely resemble those in Example 1!

Example 4. Here are the four most popular schemes for thermodynamics:

• If we fix the entropy S and volume V, the system will choose a state that minimizes the internal energy U(S,V).

• If we fix the entropy S and pressure P, the system will choose a state that minimizes the enthalpy H(S,P).

• If we fix the temperature T and volume V, the system will choose a state that minimizes the Helmholtz free energy A(T,V).

• If we fix the temperature T and pressure P, the system will choose a state that minimizes the Gibbs free energy G(T,P).

These quantities are related by a pack of similar-looking formulas, from which we may derive a mind-numbing little labyrinth of Maxwell relations. But for now, all we need to know is that all these approaches to thermodynamics are equivalent given some reasonable assumptions, and all the formulas and relations can be derived using the Legendre transformation trick I explained last time. So, I won’t repeat what we did in Example 3 for all these other cases!
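To make that "pack of similar-looking formulas" slightly more concrete, here is a small numerical check of one Maxwell relation in the energy scheme. Since T = ∂U/∂S and P = -∂U/∂V, the commuting of mixed partials of U forces ∂T/∂V = -∂P/∂S. The toy internal energy below (a monatomic-ideal-gas form with all constants set to 1) is my choice, not from the post.

```python
import math

# Numerical check of one Maxwell relation (toy example, my choice):
# take U(S, V) = exp(2S/3) * V**(-2/3), a monatomic-ideal-gas form with
# all constants set to 1. With T = dU/dS and P = -dU/dV, the commuting
# of mixed partials of U says dT/dV = -dP/dS.

def U(S, V):
    return math.exp(2 * S / 3) * V**(-2.0 / 3)

def T(S, V, h=1e-6):
    return (U(S + h, V) - U(S - h, V)) / (2 * h)

def P(S, V, h=1e-6):
    return -(U(S, V + h) - U(S, V - h)) / (2 * h)

S0, V0, h = 1.0, 2.0, 1e-4
dT_dV = (T(S0, V0 + h) - T(S0, V0 - h)) / (2 * h)
dP_dS = (P(S0 + h, V0) - P(S0 - h, V0)) / (2 * h)

print(dT_dV, -dP_dS)   # the two sides of the Maxwell relation agree
```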

Example 5. In classical statics, consider a particle on a manifold Q. This particle will seek to minimize its potential energy V(q), which we’ll assume is some smooth function of its position q \in Q. We then have

d V = -F_i dq^i

where q^i are local coordinates on Q and

\displaystyle{ F_i = -\frac{\partial V}{\partial q^i} }

is called the force in the ith direction.

Conjugate variables

So, the partial derivatives of the quantity we’re trying to minimize or maximize are very important! As a result, we often want to give them equal status as independent quantities in their own right. Then we call them ‘conjugate variables’.

To make this precise, consider the cotangent bundle T^* Q, which has local coordinates q^i (coming from the coordinates on Q) and p_i (the corresponding coordinates on each cotangent space). We then call p_i the conjugate variable of the coordinate q^i.

Given a smooth function

S : Q \to \mathbb{R}

the 1-form d S can be seen as a section of the cotangent bundle. The graph of this section is defined by the equation

\displaystyle{ p_i = \frac{\partial S}{\partial q^i} }

and this equation ties together two intuitions about ‘conjugate variables’: as coordinates on the cotangent bundle, and as partial derivatives of the quantity we’re trying to minimize or maximize.

The tautological 1-form

There is a lot to say here, especially about Legendre transformations, but I want to hasten on to a bit of symplectic geometry. And for this we need the ‘tautological 1-form’ on T^* Q.

We can think of d S as a map

d S : Q \to T^* Q

sending each point q \in Q to the point (q,p) \in T^* Q where p is defined by the equation we just saw:

\displaystyle{ p_i = \frac{\partial S}{\partial q^i} }

Using this map, we can pull back any 1-form on T^* Q to get a 1-form on Q.

What 1-form on Q might we like to get? Why, d S of course!

Amazingly, there’s a 1-form \alpha on T^* Q such that when we pull it back using the map d S, we get the 1-form d S—no matter what smooth function S we started with!

Thanks to this wonderfully tautological property, \alpha is called the tautological 1-form on T^* Q. You should check that it’s given by the formula

\alpha = p_i dq^i


So, if we want to see how much S changes as we move along a path in Q, we can do this in three equivalent ways:

• Evaluate S at the endpoint of the path and subtract off S at the starting-point.

• Integrate the 1-form d S along the path.

• Use d S : Q \to T^* Q to map the path over to T^* Q, and then integrate \alpha over this path in T^* Q.

The last method is equivalent thanks to the ‘tautological’ property of \alpha. It may seem overly convoluted, but it shows that if we work in T^* Q, where the conjugate variables are accorded equal status, everything we want to know about the change in S is contained in the 1-form \alpha, no matter which function S we decide to use!
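The agreement of the second and third methods with the first can be checked numerically. Below is a sketch with toy data of my own: Q = ℝ² with S(q¹,q²) = (q¹)²q², and a path from (0,0) to (1,1). Integrating α along the lifted path should give S(1,1) − S(0,0) = 1.

```python
# Toy check (my construction): Q = R^2 with S(q1, q2) = q1^2 * q2, and a
# path from (0,0) to (1,1). We lift the path to T*Q via p_i = dS/dq^i and
# integrate alpha = p_i dq^i along the lift; the answer should be
# S(1,1) - S(0,0) = 1, matching the "evaluate at the endpoints" method.

def S(q1, q2):
    return q1**2 * q2

def grad_S(q1, q2):        # the section p_i = dS/dq^i
    return (2 * q1 * q2, q1**2)

def path(t):               # a sample path in Q, t in [0, 1]
    return (t, t**2)

def path_dot(t):           # its velocity
    return (1.0, 2 * t)

N = 100000
integral = 0.0
for k in range(N):         # midpoint rule for the line integral of alpha
    t = (k + 0.5) / N
    q1, q2 = path(t)
    p1, p2 = grad_S(q1, q2)
    v1, v2 = path_dot(t)
    integral += (p1 * v1 + p2 * v2) / N

print(integral)            # ~1.0 = S(1,1) - S(0,0)
```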

So, in this sense, \alpha knows everything there is to know about the change in Hamilton’s principal function in classical mechanics, or the change in entropy in thermodynamics… and so on!

But this means it must know about things like Hamilton’s equations, and the Maxwell relations.

The symplectic structure

We saw last time that the fundamental equations of classical mechanics and thermodynamics—Hamilton’s equations and the Maxwell relations—are mathematically just the same. They both say simply that partial derivatives commute:

\displaystyle { \frac{\partial^2 S}{\partial q^i \partial q^j} = \frac{\partial^2 S}{\partial q^j \partial q^i} }

where S: Q \to \mathbb{R} is the function we’re trying to minimize or maximize.

I also mentioned that this fact—the commuting of partial derivatives—can be stated in an elegant coordinate-free way:

d^2 S = 0

Perhaps I should remind you of the proof:

d^2 S =   d \left( \displaystyle{ \frac{\partial S}{\partial q^i} dq^i } \right) = \displaystyle{ \frac{\partial^2 S}{\partial q^j \partial q^i} dq^j \wedge dq^i }

but

dq^j \wedge dq^i

changes sign when we switch i and j, while

\displaystyle{ \frac{\partial^2 S}{\partial q^j \partial q^i}}

does not, so d^2 S = 0. It’s just a wee bit more work to show that conversely, starting from d^2 S = 0, it follows that the mixed partials must commute.

How can we state this fact using the tautological 1-form \alpha? I said that using the map

d S : Q \to T^* Q

we can pull back \alpha to Q and get d S. But pulling back commutes with the d operator! So, if we pull back d \alpha, we get d^2 S. But d^2 S = 0. So, d \alpha has the magical property that when we pull it back to Q, we always get zero, no matter what S we choose!

This magical property captures Hamilton’s equations, the Maxwell relations and so on—for all choices of S at once. So it shouldn’t be surprising that the 2-form

\theta = d \alpha

is colossally important: it’s the famous symplectic structure on the so-called phase space T^* Q.

Well, actually, most people prefer to work with

\omega = - d \alpha

It seems this whole subject is a monument of austere beauty… covered with minus signs, like bird droppings.

Example 6. In classical mechanics, let

Q = X \times \mathbb{R}

as in Example 1. If Q has local coordinates q^i, t, then T^* Q has these along with the conjugate variables as coordinates. As we explained, it causes little trouble to call these conjugate variables by the same names we used for the partial derivatives of S: namely, p_i and -H. So, we have

\alpha = p_i dq^i - H d t

and thus

\omega = dq^i \wedge dp_i - dt \wedge dH

Example 7. In thermodynamics, let

Q = \mathbb{R}^2

as in Example 3. If Q has coordinates S, V then the conjugate variables deserve to be called T, -P. So, we have

\alpha = T d S - P d V

and

\omega = d S \wedge d T - d V \wedge d P

You’ll see that in these formulas for \omega, variables get paired with their conjugate variables. That’s nice.

But let me expand on what we just saw, since it’s important. And let me talk about \theta =  d\alpha, without tossing in that extra sign.

What we saw is that the 2-form \theta is a ‘measure of noncommutativity’. When we pull \theta back to Q we get zero. This says that partial derivatives commute—and this gives Hamilton’s equations, the Maxwell relations, and all that. But up in T^* Q, \theta is not zero. And this suggests that there’s some built-in noncommutativity hiding in phase space!

Indeed, we can make this very precise. Consider a little parallelogram up in T^* Q.

Suppose we integrate the 1-form \alpha up the left edge and across the top. Do we get the same answer if we integrate it across the bottom edge and then up the right?

No, not necessarily! The difference is the same as the integral of \alpha all the way around the parallelogram. By Stokes’ theorem, this is the same as integrating \theta over the parallelogram. And there’s no reason that should give zero.

However, suppose we got our parallelogram in T^* Q by taking a parallelogram in Q and applying the map

d S : Q \to T^* Q

Then the integral of \alpha around our parallelogram would be zero, since it would equal the integral of d S around a parallelogram in Q… and that’s the change in S as we go around a loop from some point to… itself!

And indeed, the fact that a function S doesn’t change when we go around a parallelogram is precisely what makes

\displaystyle { \frac{\partial^2 S}{\partial q^i \partial q^j} = \frac{\partial^2 S}{\partial q^j \partial q^i} }

So the story all fits together quite nicely.
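This parallelogram story can be checked numerically. Below (a toy setup of my own) I integrate α around a ‘free’ square loop in T* Q, where nothing forces the answer to vanish, and around the lift of a loop in Q under d S, where the answer must vanish, since it computes the change of S around a closed loop.

```python
# Toy check (my construction): circulation of alpha = p_i dq^i around
#  (a) a free square loop in T*Q, where it can be nonzero, versus
#  (b) the lift of a loop in Q under p_i = dS/dq^i, where it must vanish.
# Here Q = R^2 and S(q1, q2) = q1^2 * q2.

def circulation(points):
    """Integrate alpha = p1 dq1 + p2 dq2 around a polygonal loop.
    Each point is (q1, q2, p1, p2); each edge uses the trapezoid rule."""
    total = 0.0
    for a, b in zip(points, points[1:] + points[:1]):
        total += 0.5 * (a[2] + b[2]) * (b[0] - a[0])   # p1 dq1
        total += 0.5 * (a[3] + b[3]) * (b[1] - a[1])   # p2 dq2
    return total

# (a) a unit square loop in the (q1, p1)-plane: circulation is nonzero
square = [(0, 0, 0, 0), (1, 0, 0, 0), (1, 0, 1, 0), (0, 0, 1, 0)]
print(circulation(square))   # -1.0

# (b) the lift of a unit square loop in Q: circulation ~ 0
def lift(q1, q2):
    return (q1, q2, 2 * q1 * q2, q1**2)   # p_i = dS/dq^i

n = 2000
loop = []
for k in range(n):
    t = 4.0 * k / n          # walk around the unit square in Q
    if t < 1:    q1, q2 = t, 0.0
    elif t < 2:  q1, q2 = 1.0, t - 1
    elif t < 3:  q1, q2 = 3 - t, 1.0
    else:        q1, q2 = 0.0, 4 - t
    loop.append(lift(q1, q2))
print(circulation(loop))     # ~0.0
```

The nonzero answer in case (a) is exactly the integral of d\alpha over the square, as Stokes’ theorem predicts.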

The big picture

I’ve tried to show you that the symplectic structure on the phase spaces of classical mechanics, and the lesser-known but utterly analogous one on the phase spaces of thermodynamics, is a natural outgrowth of utterly trivial reflections on the process of minimizing or maximizing a function S on a manifold Q.

The first derivative test tells us to look for points with

d S = 0

while the commutativity of partial derivatives says that

d^2 S = 0

everywhere—and this gives Hamilton’s equations and the Maxwell relations. The 1-form d S is the pullback of the tautologous 1-form \alpha on T^* Q, and similarly d^2 S is the pullback of the symplectic structure d\alpha. The fact that

d \alpha \ne 0

says that T^* Q holds noncommutative delights, almost like a world where partial derivatives no longer commute! But of course we still have

d^2 \alpha = 0

everywhere, and this becomes part of the official definition of a symplectic structure.

All very simple. I hope, however, the experts note that to see this unified picture, we had to avoid the most common approaches to classical mechanics, which start with either a ‘Hamiltonian’

H : T^* Q \to \mathbb{R}

or a ‘Lagrangian’

L : T Q \to \mathbb{R}

Instead, we started with Hamilton’s principal function

S : Q \to \mathbb{R}

where Q is not the usual configuration space describing possible positions for a particle, but the ‘extended’ configuration space, which also includes time. Only this way do Hamilton’s equations, like the Maxwell relations, become a trivial consequence of the fact that partial derivatives commute.

But what about those ‘noncommutative delights’? First, there’s a noncommutative Poisson bracket operation on functions on T^* Q. This makes the functions into a so-called Poisson algebra. In classical mechanics of a point particle on the line, for example, it’s well-known that we have

\begin{array}{ccr}  \{ p, q \} &=& 1 \\  \{ H, t \} &=& -1 \end{array}

In thermodynamics, the analogous relations

\begin{array}{ccr}  \{ T, S \} &=& 1 \\  \{ P, V \} &=& -1 \end{array}

seem sadly little-known. But you can see them here, for example:

• M. J. Peterson, Analogy between thermodynamics and mechanics, American Journal of Physics 47 (1979), 488–490.

at least up to one of those pesky minus signs! We can use these Poisson brackets to study how one thermodynamic variable changes as we slowly change another, staying close to equilibrium all along.
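These bracket relations can be checked directly from the coordinate formula for the Poisson bracket on T* Q. In the convention matching the relations above, {f,g} = \sum_i (\partial f/\partial p_i \, \partial g/\partial q^i - \partial f/\partial q^i \, \partial g/\partial p_i), so that {p,q} = 1. Here is a small numerical sketch for the thermodynamic phase space, where the base coordinates are (S,V) and the conjugate momenta are (T,-P); the evaluation point is arbitrary.

```python
# Numerical sketch: the Poisson bracket on T*Q in the convention where
# {f, g} = sum_i (df/dp_i * dg/dq^i - df/dq^i * dg/dp_i), which gives
# {p, q} = 1. For thermodynamics, q = (S, V) and p = (T, -P).

def poisson(f, g, x, h=1e-5):
    """Poisson bracket of f and g at the point x = (q1, q2, p1, p2)."""
    def d(func, i):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        return (func(*xp) - func(*xm)) / (2 * h)
    # coordinate indices: 0, 1 are q^i; 2, 3 are p_i
    return sum(d(f, 2 + i) * d(g, i) - d(f, i) * d(g, 2 + i)
               for i in range(2))

# thermodynamic variables as coordinate functions on phase space
entropy     = lambda q1, q2, p1, p2: q1
volume      = lambda q1, q2, p1, p2: q2
temperature = lambda q1, q2, p1, p2: p1    # first conjugate momentum
pressure    = lambda q1, q2, p1, p2: -p2   # minus the second one

x0 = (1.0, 2.0, 3.0, 4.0)   # an arbitrary point of phase space
print(poisson(temperature, entropy, x0))   # {T, S} = 1
print(poisson(pressure, volume, x0))       # {P, V} = -1
print(poisson(temperature, volume, x0))    # {T, V} = 0
```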

Second, we can go further and ‘quantize’ the functions on T^* Q. This means coming up with an associative but noncommutative product of these functions that mimics the Poisson bracket to some extent. In the case of a particle on a line, we’d get commutation relations like

\begin{array}{lcr}  p q - q p &=& - i \hbar \\  H t - t H &=& i \hbar \end{array}

where \hbar is Planck’s constant. Now we can represent these quantities as operators on a Hilbert space, the uncertainty principle kicks in, and life gets really interesting.

In thermodynamics, the analogous relations would be

\begin{array}{ccr}  T S - S T &=& - i \hbar \\  P V - V P &=& i \hbar \end{array}

The math works just the same, but what does it mean physically? Are we now thinking of temperature, entropy and the like as ‘quantum observables’—for example, operators on a Hilbert space? Are we just quantizing thermodynamics?

That’s one possible interpretation, but I’ve never heard anyone discuss it. Here’s one good reason: as Blake Stacey pointed out below, these equations don’t pass the test of dimensional analysis! The quantities on the left have units of energy, while Planck’s constant has units of action. So maybe we need to introduce a quantity with units of time on the right, or maybe there’s some other interpretation, where we don’t interpret the parameter \hbar as the good old-fashioned Planck’s constant, but something else instead.

And if you’ve really been paying attention, you may wonder how quantropy fits into this game! I showed that at least in a toy model, the path integral formulation of quantum mechanics arises, not exactly from maximizing or minimizing something, but from finding its critical points: that is, points where its first derivative vanishes. This something is a complex-valued quantity analogous to entropy, which I called ‘quantropy’.

Now, while I keep throwing around words like ‘minimize’ and ‘maximize’, most everything I’m doing works just fine for critical points. So, it seems that the apparatus of symplectic geometry may apply to the path-integral formulation of quantum mechanics.

But that would be weirdly interesting! In particular, what would happen when we go ahead and quantize the path-integral formulation of quantum mechanics?

If you’re a physicist, there’s a guess that will come tripping off your tongue at this point, without you even needing to think. Me too. But I don’t know if that guess is right.

Less mind-blowingly, there is also the question of how symplectic geometry enters into classical statics via the idea of Example 5.

But there’s a lot of fun to be had in this game already with thermodynamics.

Appendix

I should admit, just so you don’t think I failed to notice, that only rather esoteric physicists study the approach to quantum mechanics where time is an operator that doesn’t commute with the Hamiltonian H. In this approach H commutes with the momentum and position operators. I didn’t write down those commutation equations, for fear you’d think I was a crackpot and stop reading! It is however a perfectly respectable approach, which can be reconciled with the usual one. And this issue is not only quantum-mechanical: it’s also important in classical mechanics.

Namely, there’s a way to start with the so-called extended phase space for a point particle on a manifold X:

T^* (X \times \mathbb{R})

with coordinates q^i, t, p_i and H, and get back to the usual phase space:

T^* X

with just q^i and p_i as coordinates. The idea is to impose a constraint of the form

H = f(q,p)

to knock off one degree of freedom, and use a standard trick called ‘symplectic reduction’ to knock off another.

Similarly, in quantum mechanics we can start with a big Hilbert space

L^2(X \times \mathbb{R})

on which q^i, t, p_i, and H are all operators, then impose a constraint expressing H in terms of p and q, and then use that constraint to pick out states lying in a smaller Hilbert space. This smaller Hilbert space is naturally identified with the usual Hilbert space for a point particle:

L^2(X)

Here X is called the configuration space for our particle; its cotangent bundle is the usual phase space. We call X \times \mathbb{R} the extended configuration space; its cotangent bundle is the extended phase space.

I’m having some trouble remembering where I first learned about these ideas, but here are some good places to start:

• Toby Bartels, Abstract Hamiltonian mechanics.

• Nikola Buric and Slobodan Prvanovic, Space of events and the time observable.

• Piret Kuusk and Madis Koiv, Measurement of time in nonrelativistic quantum and classical mechanics, Proceedings of the Estonian Academy of Sciences, Physics and Mathematics 50 (2001), 195–213.


Curriki

8 August, 2010

Textbooks are expensive. They could be almost free, especially in subjects like trigonometry or calculus, which don’t change very fast.

I’m a radical when it comes to the dissemination of knowledge: I want to give as much away for free as I can! So if I weren’t doing Azimuth, I’d probably be working to push for open-source textbooks.

Luckily, someone much better at this sort of thing is already doing that. David Roberts — a mathematician you may have seen at the n-Category Café — recently pointed out this good news:

• Ashlee Vance, $200 Textbook vs. Free — You Do the Math, New York Times, July 31, 2010.

Scott McNealy, cofounder of Sun Microsystems, recently said goodbye to that company and started spearheading a push towards open-source textbooks:

Early this year, Oracle, the database software maker, acquired Sun for $7.4 billion, leaving Mr. McNealy without a job. He has since decided to aim his energy and some money at Curriki, an online hub for free textbooks and other course material that he spearheaded six years ago.

“We are spending $8 billion to $15 billion per year on textbooks” in the United States, Mr. McNealy says. “It seems to me we could put that all online for free.”

The nonprofit Curriki fits into an ever-expanding list of organizations that seek to bring the blunt force of Internet economics to bear on the education market. Even the traditional textbook publishers agree that the days of tweaking a few pages in a book just to sell a new edition are coming to an end.

Whenever it happens, it’ll be none too soon for me!

Let us hope that someday the Azimuth Project becomes part of this trend…

