Algebraic Geometry

A more polished version of this article appeared in Nautilus on February 28, 2019. This version has some additional material.



How I Learned to Stop Worrying and Love Algebraic Geometry

In my 50s, too old to become a real expert, I have finally fallen in love with algebraic geometry. As the name suggests, this is the study of geometry using algebra. Around 1637, Pierre de Fermat and René Descartes laid the groundwork for this subject by taking a plane, mentally drawing a grid on it as we now do with graph paper, and calling the coordinates x and y. We can then write down an equation like x^2 + y^2 = 1, and there will be a curve consisting of points whose coordinates obey this equation. In this example, we get a circle!

It was a revolutionary idea at the time, because it lets us systematically convert questions about geometry into questions about equations, which we can solve if we’re good enough at algebra. Some mathematicians spend their whole lives on this majestic subject. But I never really liked it much—until recently. Now I’ve connected it to my interest in quantum physics.



We can describe many interesting curves with just polynomials. For example, roll a circle inside a circle three times as big. You get a curve with three sharp corners called a “deltoid”, shown in red above. It’s not obvious that you can describe this using a polynomial equation, but you can. The great mathematician Leonhard Euler dreamt this up in 1745.
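If you like seeing this concretely, here is a small Python sketch (my own illustration, not from the article): it traces the deltoid using the standard rolling-circle parametrization and checks numerically that every sampled point satisfies a single degree-4 polynomial equation, the usual implicit form of the deltoid.

```python
import numpy as np

# Standard parametrization of the deltoid: a point on a circle of radius 1
# rolling inside a circle of radius 3, traced as the angle t goes around.
t = np.linspace(0, 2 * np.pi, 1000)
x = 2 * np.cos(t) + np.cos(2 * t)
y = 2 * np.sin(t) - np.sin(2 * t)

# The same curve is cut out by a single polynomial equation of degree 4:
#   (x^2 + y^2)^2 + 18 (x^2 + y^2) - 27 = 8 (x^3 - 3 x y^2)
residual = (x**2 + y**2)**2 + 18 * (x**2 + y**2) - 27 - 8 * (x**3 - 3 * x * y**2)

print(np.max(np.abs(residual)))   # ~1e-12: every sampled point satisfies the quartic
```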

As a kid I liked physics better than math. My uncle Albert Baez, father of the famous folk singer Joan Baez, worked for UNESCO, helping developing countries with physics education. My parents lived in Washington, D.C. Whenever my uncle came to town, he’d open his suitcase, pull out things like magnets or holograms, and use them to explain physics to me. This was fascinating. When I was eight, he gave me a copy of the college physics textbook he wrote. While I couldn’t understand it, I knew right away that I wanted to. I decided to become a physicist.

My parents were a bit worried, because they knew physicists needed mathematics, and I didn’t seem very good at that. I found long division insufferably boring, and refused to do my math homework, with its endless repetitive drills. But later, when I realized that by fiddling around with equations I could learn about the universe, I was hooked. The mysterious symbols seemed like magic spells. And in a way, they are. Science is the magic that actually works.

And so I learned to love math, but in a certain special way: as the key to physics. In college I wound up majoring in math, in part because I was no good at experiments. I learned quantum mechanics and general relativity, studying the necessary branches of math as I went. I was fascinated by Eugene Wigner’s question about the “unreasonable effectiveness” of mathematics in describing the universe. As he put it, “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.”

Despite Wigner’s quasi-religious language, I didn’t think that God was an explanation. As far as I can tell, that hypothesis raises more questions than it answers. I studied mathematical logic and tried to prove that any universe containing a being like us, able to understand the laws of that universe, must have some special properties. I failed utterly, though I managed to get my first publishable paper out of the wreckage. I decided that there must be some deep mystery here, that we might someday understand, but only after we understood what the laws of physics actually are: not the pretty good approximate laws we know now, but the actual correct laws.

As a youthful optimist I felt sure such laws must exist, and that we could know them. And then, surely, these laws would give a clue to the deeper puzzle: why the universe is governed by mathematical laws in the first place.

So I went to graduate school—to a math department, but motivated by physics. I already knew that there was too much mathematics to ever learn it all, so I tried to focus on what mattered to me. And one thing that did not matter to me, I believed, was algebraic geometry.

How could any mathematician not fall in love with algebraic geometry? Here’s why. In its classic form, this subject considers only polynomial equations—equations that describe not just curves, but also higher-dimensional shapes called “varieties.” So x^2 + y^2 = 1 is fine, and so is x^{47} - 2xyz = y^7, but an equation with sines or cosines, or other functions, is out of bounds—unless we can figure out how to convert it into an equation with just polynomials. To me as a graduate student, this seemed like a terrible limitation. After all, physics problems involve plenty of functions that aren’t polynomials.



This is Cayley’s nodal cubic surface. It’s famous because it is the variety with the most nodes (those pointy things) that is described by a cubic equation: it has four of them. The equation is (xy + yz + zx)(1 - x - y - z) + xyz = 0, and it’s called “cubic” because each term is a product of at most three variables, so the polynomial has degree 3.
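As a quick illustration (mine, not the article's), one can check symbolically that this surface really is singular at four points. In this affine form the nodes sit at the origin and at the three unit points on the axes, which is where the polynomial and all three of its partial derivatives vanish:

```python
from sympy import symbols, diff, simplify

x, y, z = symbols('x y z')
f = (x*y + y*z + z*x) * (1 - x - y - z) + x*y*z   # Cayley's nodal cubic surface

# Candidate nodes: the origin and the three points (1,0,0), (0,1,0), (0,0,1).
candidates = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]

for p in candidates:
    values_at_p = dict(zip((x, y, z), p))
    vals = [f.subs(values_at_p)] + [diff(f, v).subs(values_at_p) for v in (x, y, z)]
    print(p, [simplify(v) for v in vals])   # all zero: each point lies on the surface and is singular
```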

Why does algebraic geometry restrict itself to polynomials? Mathematicians study curves described by all sorts of equations – but sines, cosines and other fancy functions are only a distraction from the fundamental mysteries of the relation between geometry and algebra. Thus, by restricting the breadth of their investigations, algebraic geometers can dig deeper. They’ve been working away for centuries, and by now their mastery of polynomial equations is truly staggering. Algebraic geometry has become a powerful tool in number theory, cryptography and other subjects.

I once met a grad student at Harvard, and I asked him what he was studying. He said one word, in a portentous tone: “Hartshorne.” He meant Robin Hartshorne’s textbook Algebraic Geometry, published in 1977. Supposedly an introduction to the subject, it’s actually a very hard-hitting tome. Consider Wikipedia’s description:

The first chapter, titled “Varieties,” deals with the classical algebraic geometry of varieties over algebraically closed fields. This chapter uses many classical results in commutative algebra, including Hilbert’s Nullstellensatz, with the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel as usual references.

If you can’t make heads or tails of this… well, that’s exactly my point. To penetrate even the first chapter of Hartshorne, you need quite a bit of background. To read Hartshorne is to try to catch up with centuries of geniuses running as fast as they could.

One of these geniuses was Hartshorne’s thesis advisor, Alexander Grothendieck. From about 1960 to 1970, Grothendieck revolutionized algebraic geometry as part of an epic quest to prove some conjectures about number theory, the Weil Conjectures. He had the idea that these could be translated into questions about geometry and settled that way. But making this idea precise required a huge amount of work. To carry it out, he started a seminar. He gave talks almost every day, and enlisted the help of some of the best mathematicians in Paris.


Alexander Grothendieck at his seminar in Paris

Working nonstop for a decade, they produced tens of thousands of pages of new mathematics, packed with mind-blowing concepts. In the end, using these ideas, Grothendieck succeeded in proving all the Weil Conjectures except the final, most challenging one—a close relative of the famous Riemann Hypothesis, for which a million dollar prize still waits.

Towards the end of this period, Grothendieck also became increasingly involved in radical politics and environmentalism. In 1970, when he learned that his research institute was partly funded by the military, he resigned. He left Paris and moved to teach in the south of France. A few years later a student of his proved the last of the Weil Conjectures—but in a way that Grothendieck disliked, because it used a “trick” rather than following the grand plan he had in mind. He was probably also jealous that someone else reached the summit before him. As time went by, Grothendieck became increasingly embittered with academia. And in 1991, he disappeared!

We now know that he moved to the Pyrenees, where he lived until his death in 2014. He seems to have largely lost interest in mathematics and turned his attention to spiritual matters. Some reports make him seem quite unhinged. It is hard to say. At least 20,000 pages of his writings remain unpublished.

During his most productive years, even though he dominated the French school of algebraic geometry, many mathematicians considered Grothendieck’s ideas “too abstract.” This sounds a bit strange, given how abstract all mathematics is. What’s inarguably true is that it takes time and work to absorb his ideas. As a grad student I steered clear of them, since I was busy struggling to learn physics. There, too, centuries of geniuses have been working full-speed, and anyone wanting to reach the cutting edge has a lot of catching up to do. But, later in my career, my research led me to Grothendieck’s work.

If I had taken a different path, I might have come to grips with his work through string theory. String theorists postulate that besides the visible dimensions of space and time—three of space and one of time—there are extra dimensions of space curled up too small to see. In some of their theories these extra dimensions form a variety. So, string theorists easily get pulled into sophisticated questions about algebraic geometry. And this, in turn, pulls them toward Grothendieck.


A slice of one particular variety, called a “quintic threefold,” that can be used to describe the extra curled-up dimensions of space in string theory.

Indeed, some of the best advertisements for string theory are not successful predictions of experimental results—it’s made absolutely none of these—but rather its ability to solve problems within pure mathematics, including algebraic geometry. For example, suppose you have a typical quintic threefold: a 3-dimensional variety described by a polynomial equation of degree 5. How many curves can you draw on it that are described by polynomials of degree 3? I’m sure this question has occurred to you. So, you’ll be pleased to know that the answer is exactly 317,206,375.

This sort of puzzle is quite hard, but string theorists have figured out a systematic way to solve many puzzles of this sort, including much harder ones. Thus, we now see string theorists talking with algebraic geometers, each able to surprise the other with their insights.

My own interest in Grothendieck’s work had a different source. I’ve always had serious doubts about string theory, and counting curves on varieties is the last thing I’d ever try. Like rock climbing, it’s exciting to watch but too scary to actually attempt myself. But it turns out that Grothendieck’s ideas are so general and powerful that they spill out beyond algebraic geometry into many other subjects. In particular, his 600-page unpublished manuscript Pursuing Stacks, written in 1983, made a big impression on me. In it, he argues that topology—very loosely, the theory of what space can be shaped like, if we don’t care about bending or stretching it, just what kind of holes it has—can be completely reduced to algebra!

At first this idea may sound just like algebraic geometry, where we use algebra to describe geometrical shapes, like curves or higher-dimensional varieties. But “algebraic topology” winds up having a very different flavor, because in topology we don’t restrict our shapes to be described by polynomial equations. Instead of dealing with beautiful gems, we’re dealing with floppy, flexible blobs—so the kind of algebra we need is different.




Mathematicians sometimes joke that a topologist cannot tell the difference between a doughnut and a coffee cup.

Algebraic topology is a beautiful subject that had been around since long before Grothendieck—but he was one of the first to seriously propose a method to reduce all topology to algebra.

Thanks to my work on physics, I found his proposal tremendously exciting when I came across it. At the time I had taken up the challenge of trying to unify our two best theories of physics: quantum physics, which describes all the forces except gravity, and general relativity, which describes gravity. It seems that until we do this, our understanding of the fundamental laws of physics is doomed to be incomplete. But it’s devilishly difficult. One reason is that quantum physics is based on algebra, while general relativity involves a lot of topology. But that suggests an avenue of attack: if we can figure out how to express topology in terms of algebra, we might find a better language to formulate a theory of quantum gravity.

My physics colleagues will let out a howl here, and complain that I am oversimplifying. Yes, I’m oversimplifying. There is more to quantum physics than mere algebra, and more to general relativity than mere topology. Nonetheless, the possible benefits to physics of reducing topology to algebra are what got me so excited about Grothendieck’s work.

So, starting in the 1990s, I tried to understand the powerful abstract concepts that Grothendieck had invented—and by now I have partially succeeded. Some mathematicians find these concepts to be the hard part of algebraic geometry. They now seem like the easy part to me. The hard part, for me, is the nitty-gritty details. First, there is all the material in those texts that Hartshorne takes as prerequisites: “the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel.” But there is also a lot more.

So, while I now have some of what it takes to read Hartshorne, until recently I was too intimidated to learn it. A student of physics once asked a famous expert how much mathematics a physicist needs to know. The expert replied: “More.” Indeed, the job of learning mathematics is never done, so I focus on the things that seem most important and/or fun. Until last year, algebraic geometry never rose to the top of the list.

What changed? I realized that algebraic geometry is connected to the relation between classical and quantum physics. Classical physics is the physics of Newton, where we imagine that we can measure everything with complete precision, at least in principle. Quantum physics is the physics of Schrödinger and Heisenberg, governed by the uncertainty principle: if we measure some aspects of a physical system with complete precision, others must remain undetermined.

For example, any spinning object has an “angular momentum”. In classical mechanics we visualize this as an arrow pointing along the axis of rotation, whose length is proportional to how fast the object is spinning. And in classical mechanics, we assume we can measure this arrow precisely. In quantum mechanics—a more accurate description of reality—this turns out not to be true. For example, if we know how far this arrow points in the x direction, we cannot know how far it points in the y direction. This uncertainty is too small to be noticeable for a spinning basketball, but for an electron it is important: physicists had only a rough understanding of electrons until they took this into account.
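Here is a minimal numerical illustration of that incompatibility (my own sketch, using the smallest quantum system with angular momentum, a spin-1/2 particle, in units with ħ = 1): the x and y components of angular momentum fail to commute, so no state can have definite values of both.

```python
import numpy as np

# Spin-1/2 angular momentum operators (Pauli matrices divided by 2), with hbar = 1.
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# The x and y components do not commute: [Sx, Sy] = i Sz.
commutator = Sx @ Sy - Sy @ Sx
print(np.allclose(commutator, 1j * Sz))   # True

# So a state with definite Sx ("arrow fully along x") has completely uncertain Sy.
plus_x = np.array([1, 1], dtype=complex) / np.sqrt(2)   # eigenvector of Sx with eigenvalue +1/2
mean_Sy = np.real(plus_x.conj() @ Sy @ plus_x)
var_Sy = np.real(plus_x.conj() @ (Sy @ Sy) @ plus_x) - mean_Sy**2
print(mean_Sy, var_Sy)   # 0.0 and 0.25: Sy is maximally uncertain in this state
```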

Physicists often want to “quantize” classical physics problems. That is, they start with the classical description of some physical system, and they want to figure out the quantum description. There is no fully general and completely systematic procedure for doing this. This should not be surprising: the two worldviews are so different. However, there are useful recipes for quantization. The most systematic ones apply to a very limited selection of physics problems.

For example, sometimes in classical physics we can describe a system by a point in a variety. This is not something one generally expects, but it happens in plenty of important cases. For example, consider a spinning object: if we fix how long its angular momentum arrow is, the arrow can still point in any direction, so its tip must lie on a sphere. Thus, we can describe a spinning object by a point on a sphere. And this sphere is actually a variety, the “Riemann sphere”, named after Bernhard Riemann, one of the greatest algebraic geometers of the 1800s.
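To make the identification of this sphere with the Riemann sphere a bit more tangible, here is a small sketch (again my own, under the standard conventions): stereographic projection matches each point of the unit sphere, apart from the north pole, with a complex number, and the north pole itself corresponds to the point at infinity.

```python
import numpy as np

def sphere_to_complex(x, y, z):
    """Stereographic projection from the north pole (0, 0, 1)."""
    return (x + 1j * y) / (1 - z)

def complex_to_sphere(w):
    """Inverse projection: a complex number back to a point on the unit sphere."""
    d = abs(w)**2 + 1
    return 2 * w.real / d, 2 * w.imag / d, (abs(w)**2 - 1) / d

# A random direction for the angular momentum arrow, normalized to the unit sphere.
rng = np.random.default_rng(0)
v = rng.normal(size=3)
x, y, z = v / np.linalg.norm(v)

w = sphere_to_complex(x, y, z)
print(np.allclose(complex_to_sphere(w), (x, y, z)))   # True: the round trip recovers the point
```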

When a classical physics problem is described by a variety, some magic happens. The process of quantization becomes completely systematic—and surprisingly simple. There is even a kind of reverse process, which one might call “classicization,” that lets you turn the quantum description back into a classical description. The classical and quantum approaches to physics become tightly linked, and one can take ideas from either approach and see what they say about the other one. For example, each point on the variety describes not only a state of the classical system (in our example, a definite direction for the angular momentum), but also a state of the corresponding quantum system—even though the latter is governed by the uncertainty principle. The quantum state is the “best quantum approximation” to the classical state.
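For a spinning object with the smallest possible quantum angular momentum, spin 1/2, this “best quantum approximation” can be written down explicitly as a spin coherent state. The sketch below (my own illustration, with the usual conventions and ħ = 1) checks that the expected angular momentum of that state points exactly along the chosen classical direction.

```python
import numpy as np

# Pauli matrices; the spin operators are S = sigma / 2 (hbar = 1).
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def coherent_state(theta, phi):
    """Spin-1/2 coherent state whose spin points along the direction (theta, phi)."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)], dtype=complex)

theta, phi = 1.1, 2.3    # an arbitrary classical direction on the sphere
psi = coherent_state(theta, phi)

expected = [np.real(psi.conj() @ (0.5 * s) @ psi) for s in sigma]
classical = [0.5 * np.sin(theta) * np.cos(phi),
             0.5 * np.sin(theta) * np.sin(phi),
             0.5 * np.cos(theta)]
print(np.allclose(expected, classical))   # True: the quantum average reproduces the classical arrow
```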

Even better, in this situation many of the basic theorems about algebraic geometry can be seen as facts about quantization! Since quantization is something I’ve been thinking about for a long time, this makes me very happy. Richard Feynman once said that for him to make progress on a tough physics problem, he needed to have some sort of special angle on it:

I have to think that I have some kind of inside track on this problem. That is, I have some sort of talent that the other guys aren’t using, or some way of looking, and they are being foolish not to notice this wonderful way to look at it. I have to think I have a little better chance than the other guys, for some reason. I know in my heart that it is likely that the reason is false, and likely the particular attitude I’m taking with it was thought of by others. I don’t care; I fool myself into thinking I have an extra chance.

This may be what I’d been missing with algebraic geometry until now. Algebraic geometry is not just a problem to be solved, it’s a body of knowledge—but it’s such a large, intimidating body of knowledge that I didn’t dare tackle it until I got an inside track. Now I can read Hartshorne, translate some of the results into facts about physics, and feel I have a chance at understanding this stuff. And it’s a great feeling.


For the details of how algebraic geometry connects classical and quantum mechanics, see my talk slides and series of blog articles.

33 Responses to Algebraic Geometry

  1. Toby Bartels says:

    31,720,6375

    31 crore 720 myriad 6375?

  2. Bruce Smith says:

    This is a fascinating account. The “autobiography” aspect makes it especially interesting.

    I hope you will explain more about the big ideas in this field or in Grothendieck’s work, as you see them. I am especially interested in hearing about unobvious ways to think about math problems, with enough detail to guess where they might be useful.

    • John Baez says:

      Thanks! I was afraid the “autobiography” part would seem self-indulgent, but I figured that if done right it would humanize the story and make more people interested—so I tried to be more modest than I actually am, emphasizing difficulties rather than triumphs.

      I hope you will explain more about the big ideas in this field or in Grothendieck’s work, as you see them.

      Oh boy! That’s a challenging task. I want to continue blogging about how algebraic geometry shows up in geometric quantization, because I have a lot more to say about that. But that’s inevitably rather technical: it’s research in progress. I just saw a nice article by a friend of mine:

      • Colin McLarty, Grothendieck’s unifying vision of geometry, in Foundations of Mathematics and Physics One Century After Hilbert, ed. Joseph Kouneiher, Springer, Berlin, 2018.

      but it’s a new book so it may take a while for the Russians to steal it and distribute it to the public.

  3. Wolfgang says:

    As an outsider to really abstract mathematics (well, I am a chemist, not a physicist), but still somehow happy to read about this stuff, with the idea in my mind to learn more about it, I ask myself (and sometimes others, too) what might be the ‘best’ way to find the right focus in one’s own mathematics education. So far, I have come to the following conclusions:

    1) A systematic bottom-up approach, in the sense of thoroughly learning each field first for its own purpose, in the faint hope that this allows oneself to apply the knowledge in one’s own research productively at a later time, does not really yield this outcome very often (the amount of things to learn is huge, and there is not much instant gratification coming from the learning, so it can feel pointless from the start),

    2) So possibly it is far better to work top-down, identifying a problem one wants to solve, and only then searching for and learning the specific mathematics to solve it (I heard this once as advice).

    But wait, how does one know about the possible tools (or angles of attack) for the top-down approach, if one did not follow the bottom-up approach first? And I wonder how professional mathematicians cope with this contradicting issue, which might be also put as ‘breadth first or depth first’? I would appreciate any comments on that.

    Another thing I wonder about is, how much can one learn really with a book on its own, and how much things can be sped up, by interactions with someone who already knows a lot more about the topic one is interested in, and likes to share the knowledge, whenever one has questions on the way?

    This includes some reference to a paradoxical proverb, in the sense of ‘education works best on the people who need it the least’.

    • KH says:

      Another proverb: “tis the good reader that makes the good book”.
      IIRC that epigram was an epigraph at the front of Introduction to Theoretical Physics: Classical Mechanics and Electrodynamics by Roald K. Wangsness.
      Perhaps JB has seen that book.

    • John Baez says:

      People who learn a lot of math mainly do so because it’s fun. So if you don’t feel “instant gratification” learning math, it may be hard to learn much on your own. For me, learning math often gives instant gratification. “WOW! THAT’S COOL!”

      But wait a minute—how are you defining “instant”? Sometimes I need to suffer for a few hours trying to understand something. But when I do, I’m instantly happy! And some things take months or years to understand, but usually these are big things made of small pieces, and I’m happy when I understand a piece. Often the longer it takes to understand something, the happier I am when I finally do. It’s like climbing Mt. Everest versus climbing a small hill.

      Anyway: if you really want to learn a lot, you should find an approach that’s fun for you. I think it’s ultimately more important to enjoy yourself than to learn some specific amount of material. What’s fun varies so much from person to person that it’s hard to advise you! But if you liked my essay, that’s good: there are lots of books and articles that explain math in all sorts of ways, so probably there are some that you would enjoy.

      Another thing I wonder about is, how much can one learn really with a book on its own, and how much things can be sped up, by interactions with someone who already knows a lot more about the topic one is interested in, and likes to share the knowledge, whenever one has questions on the way?

      It’s much easier to learn math when you have someone to talk to—either someone who is also learning the same thing, or someone who knows it already. (Both are good, in different ways.) The great thing about email and blogs and Twitter and Math Stackexchange and MathOverflow is that it’s easier than ever before to find people to talk to.

      And I wonder how professional mathematicians cope with this contradicting issue, which might be also put as ‘breadth first or depth first’.

      I find it helpful to have an overall view of a subject before I learn the details; without a big picture it’s hard to know where to ‘put’ the knowledge. It’s sort of like a jigsaw puzzle: if you don’t know what picture you’re trying to assemble, the heap of pieces can be very confusing.

      However, all this gets easier with practice. To get good at learning lots of math you need to learn lots of math. So the main thing is to start.

      • Wolfgang says:

        Thank you for your detailed comment. Yes, I agree, the best way would be, as it possibly is for most mathematicians, to be self-motivated by the topic at each learning step. And it is also true that the gratification can be much more pleasant after a while of struggling with a topic. I like the idea you mentioned about “having fun”. I thought about it a while, and I think I realize that those mathematical problems I had to deal with so far were fun for me, whenever there was a strong visual aspect present, either very direct as it was a problem one could graph, or more indirect in being able to create suitable mental images about the process, however crude they may have been. I guess I am a visual person, which makes purely algebraic texts and the typical definition-theorem style of math papers a challenge for me. It’s kind of different if you see, for instance, a complex exponential only as a bunch of mathematical symbols or as representing a plane wave. I sometimes miss this interpretive part in mathematical texts (physical texts by necessity rely much more on it), maybe because it is obvious for experienced mathematicians, or maybe because people don’t feel the need to make these interpretations, possibly because they also can create the wrong images, while it is more secure to stick to the algebra. (I think the distaste for pictures in mathematical texts comes from this risk of getting led astray?) The other things I liked, possibly coming from the experimentalist’s perspective, were having data to model, finding patterns in data, and, practically, working in a more playful than thoughtful, i.e. heuristic, mode: trying things out, looking at the result, in an iterative process, full of small errors on the way, in order to adapt and improve the model in small steps. Possibly much less rigorously than mathematicians do, although I heard the rumor that the way mathematics is presented in journal articles is not at all representative of the style it had when it was discovered, which might be similar to the things mentioned above, and which of course I know is true, from my own research experience.

      • John Baez says:

        Wolfgang wrote:

        I thought about it a while, and I think I realize that those mathematical problems I had to deal with so far were fun for me, whenever there was a strong visual aspect present, either very direct as it was a problem one could graph, or more indirect in being able to create suitable mental images about the process, however crude they may have been. I guess I am a visual person, which makes purely algebraic texts and the typical definition-theorem style of math papers a challenge for me.

        I began my math career as a very visual thinker. I wasn’t bad at manipulating symbols according to rules, but to feel I really understood something I needed to have a mental image of it. I gradually extended my visualization ability to handle higher-dimensional or infinite-dimensional spaces, where I wound up sort of ‘faking it’, visualizing in 3 dimensions but imagining it as higher-dimensional and knowing what corrections are needed. I was also okay with general topological spaces (where in general vague pictures of blobs are what you need) and other abstractions that were still based on some notion of ‘space’. But I was not very good at abstract algebra.

        I finally got over that deficiency in a couple of ways. First, I learned how to visualize algebraic structures like groups and rings. The images are so mysterious that I’d have a lot of trouble explaining them, and even if I succeeded they probably wouldn’t help anyone. But they do something good for me. Maybe they engage my visual cortex in the reasoning process somehow. They certainly make these structures feel ‘real’: they’re no longer just symbols on paper.

        The other trick was to think more conceptually, more verbally—that is, using words in my mind that express concepts that make sense to me. Again this is a bit hard to explain, because it’s not as if in the old days I was completely unable to reason verbally! But now I’m much better at curating a large collection of verbally expressed insights and using them one after another to solve problems.

        So, basically, getting better at math required me to think in more flexible ways. And I think that’s a large part of what math is about!

    • Todd Trimble says:

      These are great questions. I think both approaches are to some degree necessary, but realistically, I think all of us approach new mathematics (that is, math that is new to us individually) from somewhere in the middle, with a mix of experience and confusion.

      Once you’re past a certain age and with demands on your time, a bottom-up approach gets to be pretty difficult to pull off unless you’re really, really motivated or really, really disciplined. This is especially true if you’re trying to do it on your own and just reading books or stuff online — it’s easier if you have someone to talk to whom you feel comfortable with. The trouble with a book is that it’s hard to ask it questions. So in that case it becomes a process of interrogating one’s self in the midst of one’s inevitable confusion, with a book or two as an aid to getting unconfused (hopefully).

      A top-down approach very easily lends itself to dilettantism, unless supplemented with some bouts of digging underneath (in effect, some bottom-up) in critical spots.

      Most of us need strong motivation to pick up new math — motivation or fascination being the true mother of discipline. John has described his own method (shared with Feynman): having a special POV or inside track. His extensive experience with quantum physics sheds a special light on structures arising in algebraic geometry, and makes the learning so much easier and more enjoyable, and much less daunting.

      In general, it boils down to how strong the “need to know” is: one can learn anything if the need is strong enough. Something that is really helpful is to have an example of some phenomenon that one really wants to understand better, that gets under one’s skin and one is itching to find out about. For most of us, the more concrete the example the better. One may try to figure it out with the tools one already owns, or if that fails one may start reading around — but reading always keeping in the back of one’s mind the explicit problem to test it against. With that kind of focused attention, quite often the problem will dissolve of its own accord but then new questions arise, and one is on the track of happy learning.

      If one doesn’t have concrete examples to test knowledge against, then what can typically happen is that one reads theory for a while, but after another while all that reading can slip from memory because there weren’t enough personal hooks to hang the theory from. With a variety of examples to test the theory against, the knowledge gets nailed down in one’s head much more effectively.

      It’s embarrassing to admit what one doesn’t know. For example, it shames me that after all this time I haven’t learned more model theory. But I think part of my problem has been not developing enough personal examples. For me, just today even, what helped is wanting to get a better inner feel for nonstandard models of Peano Arithmetic. What do countable nonstandard models look like, smell like? The answers may be incomplete, but see how far I get. All sorts of things and results can get clarified in the effort to try to get a more concrete feel for things.

      A weak motivation is: I want to be a more educated person. That’s fine, that’s great, but my own experience is that the drive won’t last that long unless one is gripped by something more concrete, something to sink one’s teeth into.

      • Jesús López says:

        Hi, not sure if I’m going to be useful/nontrivial, but anyhow one thing about nonstandard PA models is to classify their order types, and they happen to be made by pasting the order of the standard natural numbers with the one that results from expanding a dense linear ordering w/o end-points, substituting each point with a copy of the integers. It is N+D*Z with order type operations (any D), and the study continues with understanding dense orderings, which is nicely pursued by Hausdorff, still profitable today, and can be complemented by the modern book on linear orderings by J. Rosenstein.

        I had another source of challenges approaching nonstandard analysis in the classical Robinson book: coming to grips with the notion that a non-standard model had more to it than the idea of an elementary extension. What is happening is that beyond nonstandard numbers, there are also nonstandard versions of the higher order concepts talking about them (sets/properties, metaproperties, functions and functionals…). Robinson is doing the same thing as Henkin in his ‘Completeness in the theory of types’. He collapses the higher order concepts of the higher order language into a fat FOL signature, introducing new FOL-sorts for the higher order concepts. It is that FOL-ized analogue that has the elementary extensions (“Henkin semantics”).

        • Todd Trimble says:

          The business about order types is just what hadn’t occurred to me before yesterday, and which seems almost trivial today. That is, I wasn’t aware of the fact that all countable models have the same order type (probably it hadn’t occurred to me to wonder, but anyway I didn’t know it, and it seems worth knowing).

          Is the other thing you’re talking about related to countable saturation, which can sometimes act as a proxy for things like the completeness axiom? (See for instance here.)

        • Todd Trimble says:

          Ahem, I meant: all countable nonstandard models of PA have the same order type.

        • In reply to T. T.: There is no intended relation, because saturation is in an area of Model Theory I’m not comfortable with, but I have an old note about transforming saturation issues into ones involving kappa-density, hence dispensing with the model theory apparatus, that points to a question of Vladimir Kanovei on MathOverflow (question 126838). I have to sort out these connections myself, though.

      • Wolfgang says:

        Thank you, too, for your very interesting perspective. I especially liked the part where you spoke about the need of concrete examples. I think one major, and maybe even the most important, learning step is to make the transition from a lot of concrete examples to the abstract general case. I guess that this is the major division line between scientists and mathematicians. Modern math seems so abstract and general, that I often wonder, how one could even derive such general results, without referring to examples first. I guess, there exists a rare kind of people who really can do it, but the “average” mathematician also needs the more concrete framework to build upon (and tear down again/hide it later :) ).

        • John Baez says:

          Except for a rare few, I think good mathematicians learn each concept along with a battery of examples that illustrate that concept—and also counterexamples, things that come close but fail to be examples of that concept.

          E.g., I don’t think anyone can claim to really understand the definition of a “Hausdorff” topological space if they don’t know a topological space or two that’s not Hausdorff.

          Textbooks sometimes leave it to us to find these examples and counterexamples ourselves. They may present definitions and immediately move on to theorems using those definitions. But that means we have to pause and collect some examples and counterexamples ourselves.

          Also, whenever you read a theorem, you should think about how dropping any hypothesis would let in examples of things that don’t obey the conclusions of that theorem.

          As you collect your gallery of examples and counterexamples, you can test them for each new property you learn. For example, if you’re studying rings you need to keep the ring of integers (and a bunch of others) by your side. Then, if someone defines a “Dedekind domain”, you should instantly check to see if the integers are a Dedekind domain. First, because this is good to know. And second, because figuring it out will give you some insight into what a Dedekind domain is really like!

          I guess what all this amounts to is that reading mathematics is a very active process of engaging with the text and testing out what it says — not just reading and remembering.

        • Giampiero Campa says:

          Thank you so much for this fascinating post. Reading it (together with all the replies) was a real pleasure! From an engineer’s perspective, I guess that the less one’s field is “applied” the harder it might be to visualize stuff, since one tends to lose the all-important anchoring to physical (or at least physically inspired) models. So I can conjecture that going from mechanical engineering to electrical engineering to physics to applied math to pure math it might become harder to visualize things. While I’m here I also can’t resist commenting about Wigner’s “unreasonable effectiveness” of mathematics in describing the universe: maybe I am being too much of a reductionist but shouldn’t it be true that in any universe in which stuff does not happen randomly, observational data can be compressed, and therefore, you can describe them with mathematical laws? I’m sure that this must be a thought that has already occurred to you, so I guess I’m missing something. Or maybe it’s a little bit like a spiritual experience to look at the universe and wonder in awe how everything can possibly be described by a bunch of simple and elegant (of course after you understand them) laws …

        • John Baez says:

          Hi, Giampiero! You wrote:

          From an engineer’s perspective, I guess that the less one’s field is “applied” the harder it might be to visualize stuff, since one tends to lose the all-important anchoring to physical (or at least physically inspired) models.

          I would refine this a bit: you seem to be conflating “applied”, “easily visualized” and “physical”. There are branches of pure math that are very easy to visualize: for example, branches of geometry, especially geometry that involves 2- or 3-dimensional shapes one can draw. There are very applied branches of mathematics that are hard to visualize, at least for me, like cryptography. And “physical” seems to be a third axis.

          But I agree that there are lots of people who more easily approach subjects that are applied, visualizable and physical. All these are different aspects of being “concrete” rather than “abstract”, I guess.

          In math departments there are lots of people who find applied and physical problems harder. Some of these people still like to visualize problems: for example, geometers and topologists. Others shun visualization: for example, some algebraists.

          I tend to think there are 3 methods of understanding math, and I try to keep getting better at all of them: visualizing, calculating, and verbal reasoning.

          While I’m here I also can’t resist commenting about Wigner’s “unreasonable effectiveness” of mathematics in describing the universe: maybe I am being too much of a reductionist but shouldn’t it be true that in any universe in which stuff does not happen randomly, observational data can be compressed, and therefore, you can describe them with mathematical laws?

          I can imagine, or at least I think I can imagine, a universe we could live in that had lots of patterns—but few or none holding with mathematical precision. For example, maybe gravity would attract masses and decrease with distance, but vary in strength slightly in an unpredictable way depending on time and place.

          I can also imagine—I think—a universe governed by mathematical laws, but only laws that don’t use very deep mathematics. No Riemannian geometry, no gauge theory, no functional analysis. Why, for heaven’s sake, should atomic nuclei be held together by a force described by an SU(3) gauge theory? Wouldn’t some simple sort of sticky behavior be enough?

          These are the things that make me feel mathematics is “unreasonably” effective in describing our universe—that is, apparently better than it needs to be.

          I hope we will someday know a reason for this phenomenon, so I’m putting “unreasonable” in quotes: that’s just the word Wigner used for it. And I don’t think about this much anymore… but it did help launch me on the path of wanting to learn the fundamental laws of physics, back when I was in college. After pondering this stuff a while, I decided that first we should learn what the laws of physics are, and then think about “why” they are that way.

  4. Keith Harbaugh says:

    Thanks muchly for this great exposition. You’re such a terrific expositor!
    For developing an appreciation of the beauty of geometry, this is a great video: https://youtu.be/yeWx_pJpJ50 Geometry can be both beautiful and artistic!

  5. […] Source: johncarlosbaez.wordpress.com […]

  6. This is one of those weird blogs that takes articles from people and reposts them without saying who wrote them.

  7. Ishi Crew says:

    When I was in college, I was basically into biology so I could be outside, but was told I had to take math, chemistry and physics.

    I would be presented with something like Schrödinger’s equation and told to solve it. I told them it’s already solved. It has an = sign. They wanted me to rearrange it.

    Pascal was, like Grothendieck, another French mathematician, as were Galois and Cédric Villani. (My first required presentation in undergrad was to prove Boltzmann’s H-theorem; Villani got his Fields Medal for a related topic. I couldn’t get through it, so I proved it my own way. My professor fell asleep during that presentation on purpose—he didn’t like me.)

    The teacher who prescribed Pascal to me also tried to flunk me (said I had poor class attendance). I met some grad students who were into differential geometry, which is related to algebraic geometry—Descartes sort of turned geometry into algebra. They were all going to Hollywood to make graphics for movies of the kind I don’t like.
    The relationship between algebra and geometry has always interested me, though I don’t know much about it. My linear algebra teacher gave me a long 1980 AMS review article by G. W. Mackey on ‘Harmonic analysis as the exploitation of symmetry’ (Hermann Weyl also has an old book on this—the relation between pictures and math). That teacher also tried to flunk me due to poor class attendance.

  8. Mark Meckes says:

    As a mathematician still waiting to fall in love with algebraic geometry (and still in my forties, so I guess there’s time!), I look forward to reading this essay in full. But I was struck by a detail in your second sentence that — as far as I could tell at a glance — you didn’t qualify later on. Namely, you say that algebraic geometry is the study of geometry using algebra. Now from my superficial outsider’s perspective (that is, the perspective is superficial, not the outsider (me)!), it appears that algebraic geometry is just as much the study of algebra using geometry. And surely much of the beauty and usefulness of the subject have to do with the interplay of those two ways of looking at it.

    • John Baez says:

      Yes, that’s true. Sometimes I think of algebraic geometry as the quest to develop ‘double vision’, where everything in algebra (well, at least the algebra of commutative rings) can be seen as geometry, and everything in geometry (well, at least some kinds of geometry) can be seen as algebra. And despite the parenthetical comments, mathematicians are always working to expand the scope of this ‘double vision’. They’re trying noncommutative algebraic geometry, absolute geometry, derived algebraic geometry, and much more.

  9. Alexey says:

    Anyway, thanks for the entertainment! For a wide audience it will be practically impossible to approach Hartshorne from scratch; I would recommend Miles Reid’s “Undergraduate Algebraic Geometry”.

  10. Alexey Bekasov says:

    From a purely theoretical point of view, Pursuing Stacks was developed further in Grothendieck’s work Les Dérivateurs, written in 1991, and elaborated via the subsequent development of the motivic homotopy theory of Fabien Morel and Vladimir Voevodsky in the mid-1990s. It has unexpected applications in Computer Science via homotopy type theory, which helps to model quite a broad and complex set of concepts and ideas (e.g. math via univalent foundations). Hope your generous and productive inputs will help to elaborate on those ideas into the more practical field of quantum physics! Good luck, and great thanks for your popular coverage of frontline science ideas and approaches!

