The Rarest Things in the Universe

guest post by Leonard Adleman

About 50 years ago Kolmogorov assigned to each finite binary string \sigma a non-negative integer that measured that string’s ‘descriptive complexity’. Informally, K(\sigma) is the length (in binary) of the shortest (Turing machine) program that, with the empty string as input, outputs \sigma and halts. A related measure of descriptive complexity is M(\sigma)=\frac{K(\sigma)}{|\sigma|}, where |\sigma| denotes the length of \sigma. A simple string like:

\sigma_0=\overbrace{111\ldots 111}^{1,000,000}

can be produced by a very short program; hence M(\sigma_0) is near 0. But if \sigma is a ‘random string’ (e.g. obtained by flipping coins), then, with high probability, it cannot be produced by a program significantly shorter than \sigma itself; hence M(\sigma) will be near 1.

If I asked you to produce (using a computer) a thousand strings of length one million with M near 1, it would be easy to do; just flip a lot of coins. If I asked you to produce a thousand strings with M near 0, that would also be easy. For example, you could start with a short random string and repeat it a lot. Actually, if I chose my favorite \alpha\in [0,1] and wanted a thousand strings of length one million with M near \alpha, then a mix of the preceding approaches can be used to produce them. So strings with a desired M are not rare.
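Here is a quick, informal sketch of that mix in Python. It uses zlib’s compressed size as a very crude stand-in for K, which is only a heuristic, and the helper names are mine, not part of the theory.

```python
# Produce a string of length n whose descriptive complexity is roughly an
# alpha-fraction of its length: an incompressible random prefix of length
# alpha*n, padded out with a trivially repetitive suffix.  zlib's compressed
# size is used only as a rough, practical stand-in for K.
import os
import zlib

def string_with_M_near(alpha: float, n: int = 1_000_000) -> bytes:
    k = int(alpha * n)
    return os.urandom(k) + b"0" * (n - k)

def proxy_M(s: bytes) -> float:
    return len(zlib.compress(s, 9)) / len(s)

for alpha in (0.0, 0.25, 0.5, 1.0):
    print(alpha, round(proxy_M(string_with_M_near(alpha)), 3))
```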

Now let’s consider ‘deep strings’*. I will be informal here, but the underlying theory can be found in my paper Time, space, and randomness.

For all binary strings \sigma, we assign a value that measures the ‘depth’ of \sigma. D(\sigma) is obtained by considering both the size and the number of steps used by each program that with the empty string as input, outputs \sigma and halts**. D(\sigma) has the following properties:

  • If there is no short program to produce \sigma, then D(\sigma) is small.
  • If there is a short program to produce \sigma and it uses few steps, then D(\sigma) is small.
  • If there is a short program to produce \sigma, but all short programs to produce \sigma use lots of steps, then D(\sigma) is large. Roughly speaking, the more steps small programs use to produce \sigma, the larger D(\sigma) will be.

Informally, we call a string with large D ‘deep’ and one with small D ‘shallow’. A few examples may help.

Consider a string \sigma obtained by flipping coins. With high probability there is no short program to produce \sigma, hence \sigma is shallow.

Now consider \sigma_0 above. Since there is a short program to produce \sigma_0 and that program uses few steps, \sigma_0 is shallow.

Now treat \sigma_0 as a number in binary (i.e. 2^{1,000,000}-1) and consider its prime factorization. The fundamental theorem of arithmetic tells us it exists and will be about one million bits long. But, unless 2^{1,000,000}-1 is somehow special (e.g. a prime times a very smooth number), its prime factorization may be very, very deep. A short program can generate the prime factorization (just generate one million 1s with a short program and then give it to a short factoring program). But if it turns out that factoring can’t be done in polynomial time, then perhaps all short programs that generate the prime factorization use a huge number of steps. So the prime factorization would have a very large D. Conceivably, since steps on a computer use time and energy, the prime factorization can never be realized. It is not a long string (only one million bits) but it may exist only in theory and not in reality.
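To see the ‘short program, many steps’ point concretely, here is a toy sketch; it is my own illustration rather than part of the argument above, it assumes the SymPy library is available, and it keeps n small so that it actually finishes:

```python
# A short program that "describes" the prime factorization of 2**n - 1.
# The description stays a few lines long no matter how large n is; all the
# work goes into the steps performed by the factoring routine.
# Assumes SymPy is installed; n is kept small here so the example finishes.
from sympy import factorint

def factorization_of_mersenne(n: int) -> str:
    factors = factorint(2**n - 1)          # this call is where the steps go
    return " * ".join(f"{p}^{e}" if e > 1 else str(p)
                      for p, e in sorted(factors.items()))

for n in (31, 61, 67):
    print(n, ":", factorization_of_mersenne(n))
```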

If I asked you to produce even one string of length one million with D near that of the prime factorization of 2^{1,000,000}-1, could you do it? I would not know how and I suspect that as a practical matter it cannot be done. So strings with such a D are rare.

Here is a string that does exist in our universe and that (I suspect) is quite deep:

46817226351072265620777670675006972301618979214252832875068976303839400413682313
921168154465151768472420980044715745858522803980473207943564433*5277396428112339
17558838216073534609312522896254707972010583175760467054896492872702786549764052
643493511382273226052631979775533936351462037464331880467187717179256707148303247

In fact, this string is the prime factorization of 2^{1061}-1. We should not expect it to be as deep as the prime factorization of 2^{1,000,000}-1, but we should still expect it to have considerable depth. There is even some experimental evidence that supports this view. How did I get this prime factorization? I got it from: Factorization of a 1061-bit number by the Special Number Field Sieve, by Greg Childers. So it was easy to get. Well, easy for me anyway; not so easy for Childers. He reports that the factorization took about ‘3 CPU-centuries’ using the Special Number Field Sieve. Other than by factoring 2^{1061}-1, how could you ever come to write this pair of primes? I venture to say that since the Special Number Field Sieve is the fastest known algorithm for factoring numbers of this kind, no method available today could have written these primes (for the first time) using fewer steps (and hence less time/energy).
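Note the asymmetry: checking Childers’ result is vastly cheaper than producing it. With the two primes above pasted in as p and q (names introduced only for this sketch), a one-line check suffices:

```python
def verify_childers_factorization(p: int, q: int) -> bool:
    """Check that the two primes printed above really do multiply to 2**1061 - 1."""
    return p * q == 2**1061 - 1
```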

The situation might be compared to that of atomic physics. I am not a physicist, but I suppose it is possible to theorize about an atomic nucleus with a million protons. But what if I want to create one? It appears that producing transuranic elements takes huge amounts of time/energy and the greater the number of protons, the more time/energy it takes. It is even conceivable (to me at least) that there is not enough time/energy available (at least on earth) to actually produce one. Like the prime factorization of 2^{1,000,000}-1, it may exist in theory but not in reality. On the other hand, physicists from Russia and America, using lots of time/energy, have created an atomic nucleus with 118 protons called Ununoctium. Ununoctium is analogous to Childers’ prime factorization; both exist in reality; both were very costly to create.

But my focus here is neither ontology nor physics; it is money. Recently Bitcoins and Bitcoin-like things have captured the world’s attention. I suspect that this kind of currency will revolutionize the nature of economies, and consequently the nature of governance. Personally, I am agnostic on whether this is a good thing or a bad thing. But, in any case, I am wondering if ‘depth’ might form part of a theoretical basis for understanding new currencies. I must confess to knowing few details about Bitcoins; consequently, I will start from first principles and consider some of the properties of currency:

  • Mass: A currency may have mass or may be massless. U.S. dollars and gold have mass. Bitcoins are massless. The efficiencies of the Internet market require a massless currency. Imagine running eBay or Amazon using gold. U.S. dollars have mass, so we do not actually use them for transactions on the Internet; rather, we use credit card numbers to create massless IOUs that promise to deliver (but, in fact, seldom actually deliver) U.S. dollars in the future. These IOUs are essentially a massless currency.
  • Production: Who can produce or destroy the currency? This has been an important political, legal and economic issue for millennia. In the west, coins began to appear at about the time Solon ruled Athens. Solon realized that he could manipulate the value of coins. And so he did. Solon’s lesson has not been lost on governments ever since. In its simplest form: if you owe K dollars, you print K dollars, and voila! no more debt. Creditors don’t like debtors to do that, so they want to be paid in a currency that no one can produce more of. Gold comes close. You can produce more gold but you have to spend a lot of time/energy in a mine to do so. Production also includes counterfeiting. Counterfeiting comes in at least two important forms: de novo-counterfeiting (build a press) and duplication-counterfeiting (get a xerox machine). For now, all massless currencies are in digital form and are vulnerable to duplication-counterfeiting since computers make it cheap and easy to create perfect copies of digital notes (a unit of a currency will be called a ‘note’). As a result, massless currencies will typically be associated with systems that mitigate this threat. Perhaps the elaborate ledgers implemented by the creators of Bitcoin are an attempt to deal with the threat of duplication-counterfeiting.
  • Abundance: As is often said, money can be used to ‘lubricate the economy’. To accomplish this a currency has to be sufficiently abundant. For example, Ununoctium might be attractive to creditors because, compared to gold, it is far more costly in time/energy to produce. However, it is not desirable for lubricating the economy because it is so costly to produce that fewer than 100 atoms have ever been created.

The transition from mass to masslessness will lead to currencies with uses and properties we do not normally associate with money. For example, using the techniques of secret-sharing, it becomes possible to create digital currencies where a single note can be jointly owned by 1000 individuals; any 501 of whom could cooperate to spend it, while any 500 of whom would be unable to do so.
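As a minimal sketch of how such 501-of-1000 joint ownership could work, one standard technique is Shamir secret sharing over a prime field; the particular prime, share counts and function names below are illustrative assumptions, not a description of any existing currency:

```python
# Split a note's secret into 1000 shares so that any 501 of them suffice to
# reconstruct it, while 500 or fewer reveal nothing (Shamir secret sharing).
# Requires Python 3.8+ for pow(x, -1, m); the demo at the bottom takes a
# second or two because of the large threshold.
import secrets

PRIME = 2**521 - 1          # a Mersenne prime larger than any secret we share
N_SHARES, THRESHOLD = 1000, 501

def split(secret: int, n=N_SHARES, t=THRESHOLD):
    """Return n points (x, f(x)) of a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):        # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate f(0) from any THRESHOLD shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = secrets.randbelow(PRIME)
shares = split(secret)
assert reconstruct(shares[:THRESHOLD]) == secret          # any 501 shares work
assert reconstruct(shares[250:250 + THRESHOLD]) == secret # a different 501 also work
```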

What is the perfect currency? This is probably the wrong question; rather, we should ask what properties a currency may have, in theory and in reality. Let’s consider massless currencies.

Is there a massless currency similar to the U.S. dollar? I think yes. For example, the Government could simply publish a set of numbers and declare the numbers to be currency. Put another way, make the U.S. dollar smaller and smaller to decrease mass until asymptotically all that is left is the serial number. With regard to abundance, like the U.S. dollar, the Government is free to determine the number of notes available. With regard to production, as with the U.S. dollar, the Government can print more by simply publishing new numbers to be added to the set (or destroy some by declaring that some numbers have been removed from the set). With regard to counterfeiting, the U.S. dollar has some advantages. The mass of the U.S. dollar turns counterfeiting from a digital problem to a physical one. This provides the Government with the ability to change the U.S. dollar physically to defeat technology that might arise to produce faithful (either de novo or duplication) counterfeits.

Is there a massless currency similar to gold? I think yes. I think this is what Bitcoin-like currencies are all about. They are ‘deep-currencies’, sets of deep strings. With regard to abundance, they are superior to gold. The total number of deep strings in the set can be chosen at initiation by the creator of the currency. The abundance of gold on the other hand has already been set by nature (and, at least as currently used, gold is not sufficiently abundant to lubricate the world economy). With regard to production, as with gold, making notes requires time/energy. With regard to counterfeiting, gold has an advantage. Counterfeit gold is an absurdity, since all gold (by the ounce, not numismatically), no matter how it arises, is the same atomically and is perceived to have the same value. On the other hand, as stated above, massless currencies are vulnerable to duplication-counterfeiting. Interestingly, deep-currencies may be resistant to de novo-counterfeiting, since the creator of the deep-currency is free to choose the depth of the notes, and consequently the cost of producing new notes.

The value of a massless currency in our information-based world is clear. Deep-currencies such as Bitcoin offer an attractive approach. But there is an interesting issue that may soon arise. The issue stems from the fact that the production of each note in currencies like Bitcoin requires a large investment of time/energy, and as with gold, this deprives governments of the prerogative to print money. Creditors may like this, but governments will not. What will governments do? Perhaps they will want to create a ‘Dual-currency’. A Dual-currency should be massless. It should come with a secret key. If you do not possess the secret key, then, like gold, it should be very costly to produce a new note, but if you possess the secret key, then, like the U.S. dollar, it should be inexpensive to produce a new note. Is a Dual-currency possible? Here is an example of an approach I call aurum:

  • Generate a pair of RSA keys: a public key \langle E,N\rangle, and a secret key \langle D,N\rangle. Publish the public key \langle E,N\rangle.
  • Declare that notes are exactly those integers G such that 2\leq G\leq N-1 and (the least positive residue of) G^E\mbox{Mod}(N) is less than or equal to \frac{N}{10^9}.

So, choosing a random integer V such that 2\leq V\leq N-1, and then computing V^E\mbox{Mod}(N), has about one chance in a billion of producing a note. Hence the expected number of modular exponentiations to produce a note is about one billion. On the other hand, those who possess the secret key can choose an integer W such that 2\leq W\leq \frac{N}{10^9} and calculate W^D\mbox{Mod}(N) to produce a note after just one modular exponentiation. There are many bells and whistles that can be accommodated with aurum, but here is what is ‘really’ going on. The depth of a string \sigma is obtained by considering programs running with the empty string as input, but we can consider a more general concept: ‘relative depth’. Given a pair of strings \sigma and \tau, the depth of \sigma relative to \tau is obtained by considering programs running with \tau as the input. Hence depth as we have been discussing it is the same as depth relative to the empty string. In the example above, we have made ‘dual-strings’: strings that are deep relative to the public key, but shallow relative to the secret key.
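Here is a toy sketch of aurum in Python. It is my own illustration, with deliberately small, insecure parameters (two Mersenne primes chosen for convenience) and a rarity threshold of 1 in 10^3 rather than 1 in 10^9, so that the brute-force minting loop finishes quickly:

```python
# Toy aurum (NOT secure): a note is any G whose E-th power mod N is small.
# Without the secret key, minting takes ~T modular exponentiations; with the
# secret key, one exponentiation suffices.  Requires Python 3.8+ for pow(E, -1, phi).
import secrets

p, q = 2**89 - 1, 2**107 - 1          # toy primes standing in for large random primes
N = p * q
phi = (p - 1) * (q - 1)
E = 65537                              # public exponent
D = pow(E, -1, phi)                    # secret exponent
T = 10**3                              # rarity: roughly 1 note per T random tries

def is_note(G: int) -> bool:
    """A note is any G with 2 <= G <= N-1 and G^E mod N <= N // T."""
    return 2 <= G <= N - 1 and pow(G, E, N) <= N // T

def mint_without_key():
    """Mint by trial: expected ~T modular exponentiations."""
    tries = 0
    while True:
        tries += 1
        V = secrets.randbelow(N - 2) + 2
        if is_note(V):
            return V, tries

def mint_with_key() -> int:
    """Mint with the secret key: choose a small W and invert the public map."""
    W = secrets.randbelow(N // T - 2) + 2
    return pow(W, D, N)

note, tries = mint_without_key()
print(f"brute-force minting took {tries} exponentiations")
print("cheap note valid:", is_note(mint_with_key()))
```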

One of the interesting phenomena of theoretical computer science is that you can sometimes turn bad news into good news. If factoring is hard, then we are deprived of the ability to do something we (at least number theorists) would like to do. Bad news. But surprisingly, we acquire public-key cryptography and the ability to preserve our privacy. Good news. Similarly, strings that are very hard to produce seem useless, but Bitcoins have revealed that such strings can provide a new and useful form of currency. Now that we are aware that deep strings can have value, I expect that clever people will find many new uses for them.

After thinking about deep strings for many years, I see them everywhere – I invite you to do the same. I will finish with one of my favorite observations. The watchmaker analogy is well known and is frequently used in arguing the existence of a creator. If you stumble upon a watch, you recognize from its complexity that it did not form by chance, and you conclude that there must have been a watchmaker. A human is more complex than a watch, so there must have been a ‘human-maker’ – a creator. An alternative view is that the complexity you recognize is actually ‘depth’ and the conclusion you should reach is that there must have been a computational process sustained through a great many steps. In the case of humans, the process is 3.6 billion years of evolution and the depth can be read in the genome. The watch is deep as well, but much of its depth is acquired from the human genome relative to which it is not nearly so deep.

*It has been brought to my attention that others have considered similar concepts, including Charles Bennett, Murray Gell-Mann, and the group of Luis Antunes, Lance Fortnow, Dieter van Melkebeek, and N. Variyam Vinodchandran. Indeed, the latter group has even used the term ‘deep string’, hence it is to them we owe the name.

**For all \sigma\in\{0,1\}^{*}:

  • T(\sigma) denotes the set of Turing machine programs that with the empty string as input, output \sigma and halt.
  • E(\sigma) denotes \min\{\max\{|P|,\log_{2}(\vec{P})\}\mid P\in T(\sigma)\}, where |P| denotes the length of P, and \vec{P} denotes the number of steps used by P with the empty string as input.

It may be convenient for the reader to assume that D(\sigma) is approximately \frac{E(\sigma)}{K(\sigma)}; however, the proper theory of deep strings extends the notion of depth to sets of strings and accommodates the use of randomness in computation. Depth in the context of quantum computation may also be of interest.

42 Responses to The Rarest Things in the Universe

  1. Baumann Eduard says:

    Consider the limit of 137 for transuranic elements.

    • John Baez says:

      Of course you’re referring to Len Adleman’s remark:

      I am not a physicist, but I suppose it is possible to theorize about an atomic nucleus with a million protons.

      It’s an interesting issue, though utterly tangential to the main point of this post. Apparently Feynman argued that the Dirac equation for the relativistic, quantum-mechanical motion of an electron around a nucleus has no bound state solutions when the nucleus has charge greater than the reciprocal of the fine structure constant, roughly 137. I’ve never seen the proof of this, and now I want to!

      (I’m wondering if the real claim is that the Hamiltonian ceases to be self-adjoint when the charge becomes that large. A similar phenomenon occurs for the Schrödinger equation with a potential of the form -C/r^{3/2}; the Hamiltonian is self-adjoint when C is small enough but not when C is large. I may have the exponent wrong here; it’s been a long time.)

      Anyway: Feynman’s argument has convinced many people that element 137 is the last one possible, even ignoring issues of stability of the nucleus. I think people call this imaginary element Feynmanium.

      However, other physicists have pointed out that this argument is oversimplified, since the nucleus is not a point charge. Wikipedia provides obscure hints regarding this issue:

      More accurate calculations taking into account the effects of the finite size of the nucleus indicate that the binding energy first exceeds 2mc^2 for Z > Z_cr ≈ 173. For Z > Z_cr, if the innermost orbital (1s) is not filled, the electric field of the nucleus will pull an electron out of the vacuum, resulting in the spontaneous emission of a positron. However, this does not happen if the innermost orbital is filled, so that Z = 173 does not constitute a limit to the periodic table.

      The business of ‘pulling an electron out of the vacuum’ sounds like an example of the Schwinger limit, an upper limit on the strength of the electric field: electric fields stronger than 1.3 exavolts/meter will ‘spark the vacuum’, since it becomes energetically favorable to create electron-positron pairs that reduce the strength of the field!

      Some people must take this fairly seriously, since there are chemists who have worked out what elements would be like up to element 172:

      • B. Fricke, W. Greiner and J.T. Waber, The continuation of the periodic table up to Z = 172. The chemistry of superheavy elements, Theoretica Chimica Acta 21 (1971), 235–260.

      • Pekka Pyykkö, A suggested periodic table up to Z ≤ 172, based on Dirac–Fock calculations on atoms and ions, Physical Chemistry Chemical Physics 13 (2011), 161–168.


      However: all of this has to do with chemistry (the stability of electron orbitals) rather than nuclear physics (the stability of nuclei), which is what Len was talking about!

      I’m having trouble finding out the theoretical upper bounds on the size of nuclei. All I know is this:

      Things seem to fizzle out after the island of stability… so, maybe we shouldn’t expect nuclei with more than about 126 protons.

      Of course, there’s a fuzzy line between a ‘radioactive isotope’ and ‘a bunch of protons and neutrons that have been jammed in one place but quickly fly apart’… so there may be no sharp answer to ‘what’s the largest possible nucleus’.

      We can think of a neutron star as an enormous piece of nuclear matter… but this is stabilized by gravity, not nuclear forces. So, maybe neutron stars don’t count. But if they do, we might consider them the ultimate form of ‘massive currency’, apart from black holes.

  2. domenico says:

    It is very interesting.
    I have a problem: if there exists a program to generate a random string, and I can write this random string generator, then the length of the program can be small; if there is a real random string obtained from cosmic rays, then the information in the string is real information about cosmic events without apparent regularity; I am extracting information from a measurement, which could be used in the future to obtain real data of infinite information, and I think it is right that the complexity is great.
    I understand that if a person writes a factorization as a string of symbols, then the information is great but the string is short; but I don’t see the problem if the information in the string is only the value of a multiplication, like a theorem statement without its proof; Fermat’s Last Theorem was a vacuous conjecture (low complexity) before the full proof.
    I think there is no difference between paper money and electronic money if each transaction is between two persons who exchange a private/public-key message with the intermediation of a central bank that stores the operation; the cryptography can be changed with use, so it is better than real money, which has a fixed depth.

  3. Berényi Péter says:

    prime factorization may be very very deep

    Then again, it may not; we just don’t know for sure. It is an educated guess, all right, but currently we do not have a proof that this problem is NP-hard. Nor do we know that quantum computers with a large number of qubits are infeasible. Should we have one, Shor’s algorithm would do the factorization in polynomial time.

    If the entire financial system is based on our inability to do something like factoring large integers, we had better be damn sure before going off in that direction. If the system is already in place, there would be a huge incentive to get rid of troublemakers. Is it not somewhat disturbing that certain branches of math or physics could become lethal for the inattentive researcher?

    • Frederik De Roo says:

      Berényi Péter wrote:

      If the entire financial system is based on our inability to do something like factoring large integers, we had better be damn sure before going off in that direction.

      I certainly agree with being cautious, but in this context I would like to add that our current financial system has many flaws too, probably due to the fact that private banks have the power to create electronic money which is way more abundant than the physical money created by the central bank. See e.g. Positive Money for a criticism of the current money system and proposals for reforms. (I should add their books to our reading list.)

      • Berényi Péter says:

        private banks have the power to create electronic money which is way more abundant than the physical money created by the central bank

        Unfortunately it is an unrelated issue. If the legislative power chooses to go on with this preposterous practice, that is, granting commercial banks a license to create money as debt, it may do so with bitcoins as well.

  4. John Baez says:

    I’ve been interested in logical depth ever since I was a grad student at MIT hanging around Tommaso Toffoli‘s ‘physics and information’ group. I met Charles Bennett there, and read his then-new paper:

    • Charles Bennett, Logical depth and physical complexity, in The Universal Turing Machine: A Half-Century Survey, ed. Rolf Herken, Springer, 1988, pp. 227–257.

    which is still very fun to read!

    More recently I’ve been trying to understand the value of biodiversity. Why is it that this [image] sells for $140 million, much more than this [image]?

    More importantly, how could a dead tiger be worth more than a live one? It’s all very complicated…

    Part of the issue is that rich people are currently able to buy paintings but not species: they can buy individual animals, but these will eventually die and lose their value, so they’d need to buy species to make this count as a good investment. (I’m not sure I like the idea of letting people buy species, but it seems that nowadays anything that cannot be bought and sold has a tendency to be deemed worthless.)

    … but right now what interests me is the idea that a tiger, by having great logical depth, has a certain ‘inherent value’.

    • Allan E says:

      I read a bunch of papers from Tommaso Toffoli’s group back when I was a grad student, including Bennett’s Logical Depth paper. It was good stuff.

      One thing to blame for our skewed values is the human practice of, if I might coin a snappy phrase, “resource-bounded double-entry bookkeeping”. That practice is geared towards creating a system that is massively biased towards defining value in terms which are either constant or trivial to predict. The entire industry of accountancy is built on the presumption that one need not record externalities in one’s accounts, only direct assets and liabilities.

      With infinite computational, investigative, etc resources a future frontier accountant might be able to predict the ills from the potential demise of any one tiger or group of tigers, and amortize that cost back to the present day to find if it is in the company’s best interest (or otherwise!) to participate or at least be indifferent to the tiger’s demise.

      But accountants are famously slow to modernize.

      So rather than try to fix accountancy by “putting the tigers on the books” I suggest the opposite — we need to make it illegal to take any action (at all) that CAN’T be put on the books. Then nobody will kill tigers because there will be no way to price it, and you’re not allowed to do it if it doesn’t have a price.

    • Zoltán Zimborás says:

      By the statement “tigers have great logical depth”, do you mean that the DNA is a “relatively” simple object (playing the role of a short program), while building up the proteins etc. is done through long and complicated chemical processes and pathways (which would be analogous to a long running time)?

      This is actually a quite interesting thought. It is quite common for popular books and articles on genetics to compare two species at very different levels of biological organization and, by noting that the length of the DNA is not too different for the two, conclude that their complexity might not differ that much either. However, the notion of logical depth may put that into a different perspective. (Btw, then also evolution could be looked at from a “logical depth point of view”.)

      • Zoltán Zimborás says:

        Reading this very nice post again, I see that the “depth of humans” was put in a somewhat bigger picture than what I suggested in the previous comment:

        In the case of humans, the process is 3.6 billion years of evolution and the depth can be read in the genome. The watch is deep as well, but much of its depth is acquired from the human genome relative to which it is not nearly so deep.

    • arch1 says:

      The picture and some of the discussion remind me that I’d like to read carefully “The Quark and the Jaguar” by Murray Gell-Mann, in which IIRC he tries to speak clearly about complexity, information, and quantum mechanics.

      (Devolving into gossip mode, it was among published comments appended to a separate Gell-Mann complexity essay that I encountered the following comment by Stuart Kauffman who – full disclosure – was also employed at the Santa Fe Institute: “Murray is enormously smart, sensible, and knowledgeable. He may know more things than any other single human being.” I’d like to hear his wife’s take on this.)

      • domenico says:

        I thought of a simple measure for simple forms of life: it is now possible to do a complete multicellular simulation of C. elegans, so there is a program that represents the complete functions of this organism; the Kolmogorov complexity of the program can be a measure of the logical depth of the organism, and the DNA complexity (the DNA being the biological program) can be near this value.
        I thought that the programs Baez describes in The complexity barrier have some analogy with differential equations: we use the relaxation method to solve them from some initial condition on a grid (like Conway’s game of life); the Kolmogorov complexity of the differential equation is low, the logical depth of the result is high, and the final string (the values on the grid points) is also complex (high complexity of the calculation and high complexity of the solutions). I am thinking that the great complexity seems to be due to the boundary values and the initial values, so that a simple environment gives a simple solution.

  5. nad says:

    due to the fact that private banks have the power to create electronic money which is way more abundant than the physical money created by the central bank.

    ????
    Where exactly do private banks have the power to CREATE electronic money? What exactly is meant here?

    By introducing a currency like bitcoin one per se introduces only a new exchange medium: that is, "real" (central bank) money may be used to "buy" a bitcoin. A priori the value of a bitcoin is thus related to its value as a means of exchange and to the costs of handling that exchange. Likewise, if I print some paper money, the value of that money would consist, for you, in the production price of that paper, its durability and readability for transfers, etc.; so even if I wrote $1000 on that piece of paper money you wouldn’t pay that amount unless it was accepted by a large group of people, as in the case of central bank money, where there is a "real value" behind it, like workforce, resources, etc.

    I don’t know much about bitcoin, but just read a recent article about it, which however left open some questions. As I understood it, in the case of the bitcoin currency the deep strings are used for handling the transactions.

    In the case of bitcoin there are actually quite a few "real" costs and values involved, like expensive computers, the energy costs of running them, the attention behind the mining and the transfers, a handling value due to anonymity, etc.

    For instance, the bitcoin block chain (which apparently needs to be transferred/communicated for each transaction?) is now already more than 10 GByte long, and the transfer of digital information, and especially computing, is not (yet) fully massless; there are usually still electrons and their friction, etc., and even optical transmission is not without energy costs. So it would actually be interesting to know what those costs are. For example, how much carbon has to be burned on average to transfer 10 GByte from the US to Europe?

    When I first saw the announcement that there is going to be a post on bitcoin my first thought was actually that the post would be about the energy costs involved with bitcoin.

    But this post was also interesting.

    • Frederik De Roo says:

      Nad wrote:

      ????
      Where exactly do private banks have the power to CREATE electronic money? What exactly is meant here?

      Just browse e.g. the website Positive Money or read e.g. the book Modernizing Money. They explain it better than I can repeat it.

      Banks create money when they make loans. You may have heard of the “money multiplier” effect in introductory economics courses, i.e. the central bank creating physical money and the bank creating an electronic multiple of that, but in reality it’s more of the opposite, with banks creating loans in order to maximize their profits.

      Anyway, my comments about electronic money are just an aside to the topic of this blog post.

    • John Baez says:

      Frederik wrote:

      Banks create money when they make loans.

      Money is extremely subtle and confusing; I really don’t understand it. I suspect that if I did I could figure out ways to get rich.

      In the US we have various definitions of money: M0, M1, M2, M3 and others.

      In the US, M0 consists merely of notes and coins in circulation (outside Federal Reserve Banks and the vaults of depository institutions). [Graph of M0, measured in millions of dollars.] In early 2014 it was about 3 trillion dollars.

      According to Wikipedia, M1 includes M0 and also traveler’s checks and demand deposits. [Graph of M1, measured in billions of dollars.] In early 2014 it was about 2.6 trillion dollars. How can it be less than M0? I don’t know. Like I said, I don’t understand money.

      M2 includes M1 and also savings deposits and money market accounts. [Graph of M2, measured in billions of dollars.] In early 2014 it was about 11 trillion dollars—much more than the other two.

      All this data is only for the US, and it’s from here.

  6. John Baez says:

    By the way, I would be very happy to talk with anyone here about the mathematical aspects of logical depth. The only definition I know is Charles Bennett’s, which if I recall correctly is: the logical depth of a string is the runtime of the shortest program that prints that string.

    • Frederik De Roo says:

      Excuse me for nitpicking, but do you mean the runtime of the fastest program?

      In addition, what puzzles me a little bit is that fast programs (perhaps rather the language+compiler) may have needed a lot of human thought put into their optimization. So the “runtime” for the human brain to construct the pieces of hardware and software that could make this program fast is not taken into account? I.e. is the logical depth of the words (perhaps expressed in bit strings) that constitute the fastest program taken into account? I’m not into factorization algorithms, so perhaps the “development” phase is irrelevant compared to the computing time once the string is deep enough and my question becomes irrelevant.

    • John Baez says:

      Frederik wrote:

      Excuse me for nitpicking, but do you mean the runtime of the fastest program?

      This isn’t nitpicking at all—these nuances are crucial. I mean the runtime of the shortest program. By this definition, if there’s a very short program that takes a very long time to compute a string of bits and print it out, we count that string as ‘deep’.

      We should compare this to the much more widely studied concept of Kolmogorov complexity or algorithmic information, which is the length of the shortest program that prints out a string of bits.

      If we have an incompressible string of n bits, the shortest program that prints them out will consist of a command like

      print …

      where “…” is the string of bits. This program will have length roughly equal to the string itself, so the algorithmic information of the string is about n. It will take about n steps for the program to print the string, if printing each bit counts as a step, so its logical depth is also about n.

      Compare this to the shortest program that computes and prints out the prime factors of

      n = 2^{100000000000}-1

      (where we write the factors in binary and concatenate them). Here the program is much shorter than the string of bits it prints out, so the algorithmic information of this string is much much less than n—in fact, less than a million. In other words, this bit string is highly compressible. However, it will take a long time for the program to run… longer than n steps. So, we say this string has high logical depth (according to the definition I mentioned).

      On the other hand, a string of 2^{100000} ones has both low algorithmic information and low logical depth.

      So, a string of high logical depth and low algorithmic information is interesting because it can be compressed to a tiny ‘seed’, but it takes a long time to recover the string from this seed.
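      A toy way to see this ‘tiny seed, long regrow time’ phenomenon, with no number theory at all, is iterated hashing; the choice of SHA-256 and the iteration count below are arbitrary, and this is only a heuristic illustration:

```python
# A toy "deep-ish" string: iterate SHA-256 from a short seed.  The result is
# completely determined by the pair (seed, count), a description of a few
# dozen bytes, but reproducing it seems to require doing all the iterations,
# since no shortcut for iterated hashing is known.
import hashlib

def regrow(seed: bytes, count: int) -> bytes:
    h = seed
    for _ in range(count):
        h = hashlib.sha256(h).digest()
    return h

print(regrow(b"tiny seed", 1_000_000).hex())   # roughly a second of work
```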

      Len Adleman’s definition of logical depth is a bit different, but this give you some of the flavor of various things that can happen. Algorithmic information came first, but this is not sensitive to everything we’re interested in.

      • domenico says:

        I have a problem here: the halting problem. It is not possible to know whether a program can perform a task, so any definition of logical depth that involves a program’s execution time, or number of steps, does not have a sure measure (the Kolmogorov complexity is sure).
        The program that computes the prime factors is part of the definition, so I can define (for example) all the roots of a degree-ten polynomial with low complexity, but an impossible solution: the complexity is low (it is only a definition) but it has no depth (I cannot write down the roots).
        It seems analogous to a Gödel incompleteness theorem, because there is a program (factorization) that is included in a program (logical depth); the problem does not exist for well-defined strings, which are only statements.

      • John Baez says:

        Domenico wrote:

        I have a problem here: the halting problem. It is not possible to know whether a program can perform a task, so any definition of logical depth that involves a program’s execution time, or number of steps, does not have a sure measure (the Kolmogorov complexity is sure).

        I would not use the word ‘sure’ here: I think you mean to say ‘computable’.

        Actually both logical depth and Kolmogorov complexity are uncomputable, because there is no algorithm that can decide whether an arbitrary algorithm halts and prints out a given string: as you say, this comes from the undecidability of the halting problem. So, it is impossible to write a program that finds the shortest program that prints out a given string… which is what we’d need if we wanted to compute that string’s Kolmogorov complexity.

        This does not make these concepts useless, but it does limit their use. If you want a computable substitute for Kolmogorov complexity, try Leonid Levin’s time-bounded complexity. It would be interesting to find a computable variant of logical depth.

        • domenico says:

          The Kolmogorov complexity is equal to the Shannon entropy of the initial halting program, so Bennett’s logical depth could be equal to some Shannon entropy of the halting Turing machine.
          If the input and output of the Turing machine are written in a sequence (another string that registers each intermediate result), then the Shannon entropy of the registered string could be Bennett’s logical depth; for example, the result of the factorization is concatenated into the registered string, and if the intermediate steps are enormous, then the registered string is enormous.
          If this is true, then each minimal mathematical demonstration of a theorem has a logical depth (if each theorem is computable in a formal computer language), and a Shannon entropy: it is funny, a complexity index for a class of theorems.

    • Joe Bebel says:

      Hi John, I am Len’s PhD student. If we are to get a little ‘meta’ with this idea, I like to use the informal example of Fermat’s Last Theorem. On one hand, we can consider that it took 358 calendar years and countless thousands of mathematician-years to produce a proof, perhaps evidence of its depth. (I can’t comment on the Kolmogorov complexity of this proof, but I suspect it’s not incompressible, at minimum)

      On the other hand it seems obvious that mathematicians did not exhaustively search the space of (say) 1 million bit long proofs to find the one that ends with FLT. It’s conceivable (some might even say very likely, if P \ne NP) that there actually are proofs that require such investment of time and energy to find, and in essence those theorems are far beyond mathematicians’ ability to ever prove, despite being perfectly true and following from your favorite set of axioms. In this sense one might even consider FLT as relatively ‘shallow’ given the potential depth of other theorems.

      • John Baez says:

        Indeed, one might say that the truly ‘deep’ proofs are not the sort human mathematicians are most interested in. The four-color theorem might be an example: its first proof was so long that only a few people seem interested in simplifying it!

        Even fewer people seem interested in thinking about Hales’ proof of the Kepler conjecture, which consisted of 250 pages of notes and 3 gigabytes of computer programs, data and results. In 2003, after four years of work, the referees for Annals of Mathematics said they were “99% certain” the proof was right, but couldn’t check the correctness of all of the computer calculations. Hales is working on a completely computerized proof.

  7. KWRegan says:

    Here’s an example of strings with low information complexity, high depth, but also low “distinctiveness”—upon fixing some nondescript program P:

    x = the string of length 2^{100000} on which P has its worst-case running time (breaking any ties lexicographically among inputs of that length).

    Ostensibly, computing x entails running P on all those inputs. So it is deep, but also if P has a bunch of similar bad-case inputs, especially if there’s a tiebreak, x may be mostly arbitrary—and certainly no “jewel” like a factorization or a first-nanotile residue as in the article.

    Incidentally this is basically the proof of Li-Vitanyi’s theorem that under Levin’s universal distribution, every program’s expected and worst-case time are Theta-of each other. A kind of “Murphy’s Law” theorem, which extends to any other computable performance measure with worst and average cases defined for it.

  8. arch1 says:

    Bennett’s definition of logical depth seems kinda quirky in that it depends on how a single object at the extreme tail of a distribution along one dimension, happens to behave along an independent dimension. My knee jerk reaction is that as a practical matter it would rarely be possible to determine a string’s Bennett-logical-depth, or even a tight lower bound thereon, or *any* upper bound thereon, with certainty. But that makes the definition sound not real useful, so I’m probably confused (which is typical for a Tuesday).

    • John Baez says:

      Indeed, logical depth is uncomputable in general, at least as I’ve stated Bennett’s definition. You can put lower bounds on the logical depth of a given string, but not, in general, upper bounds.

      A similar thing is true for algorithmic information (or Kolmogorov complexity): it’s uncomputable, and we can find upper bounds on the algorithmic information for a given string but not, in general, lower bounds.

      More surprisingly, there’s a number K such that we can’t prove any string has algorithmic information more than K, even though all but finitely many do! I explained this earlier here:

      The complexity barrier, Azimuth, 28 October 2011.

      So, logical depth and algorithmic information are mainly the sort of thing people prove general theorems about, rather than attempt to compute in any one particular example.

      Uncomputable things aren’t so bad as long as you don’t try to compute them.

      • arch1 says:

        Thanks, and thanks for the reference to the earlier posting which I found remarkable in at least 3 respects (the theorem, the demos and the discussion).

        Each time I run across a gem such as that, or reflect on the ongoing value of enterprises such as your (free) blog on which it appeared, I’m reminded anew of the fact that conventional financial economics seems almost comically oblivious to the huge and increasing richness that our ever more networked world brings to many lives. Maybe it’s better that way:-)

        • John Baez says:

          Thanks! I enjoy giving the world gifts to pay it back for my good fortune… and when you get a reputation for giving gifts, it turns out to pay off in many ways. The internet makes the global gift economy much more powerful. There are economists who are starting to think about this, but I don’t think they have wrapped their heads around it yet:

          • Aki Ito, The free web has economists puzzled, Bloomberg Businessweek, 21 November 2013.

          A revealing snippet:

          In a Boston Consulting Group survey, about 7 percent of U.S. respondents said they would forgo showering for a year to maintain Internet access, 21 percent said they would give up sex, and 73 percent said they’d sacrifice alcohol.

          Official data keepers remain orthodox. “We’re economists. We don’t believe there’s such a thing as a free lunch,” says Steve Landefeld, director of the Bureau of Economic Analysis, the U.S. Department of Commerce agency that calculates GDP.

        • arch1 says:

          (Re: your comment of 30 January, 2014 at 12:04 pm)
          Ah so Steve’s the guy we have to keep in the dark..:-)

        • David Lyon says:

          If one thinks of the internet in terms of a massively multiplayer prisoner’s dilemma, the largest total norm occurs when everybody cooperates to share knowledge, including copyrighted information or proprietary information. John Baez is my hero for sharing his knowledge up to the limit of allowing others to preempt his own scientific publications. Although such behavior seems altruistic according to simple economic theory, it can even benefit sharers by raising the value of their information. Paradoxically at first glance, the more people there are with knowledge about a topic, the more that knowledge is worth because its value grows super-linearly with the number of interested people if that number is initially small.

          One cryptocurrency, dogecoin, combines the idea of depth discussed in this guest post with the globalized gift economy. For example, dogecoin users are sending the Jamaican bobsled team to the Olympics by donating $33k worth of Dogecoins in one day. An Indian Luge athlete was given enough dogecoins ($6k worth) to travel to the Olympics within 3 hours of dogecoiners hearing about his plight. By being unselfish, dogecoin users have attracted new people and new investment to their coin far faster than any other cryptocurrency in history, including bitcoin. Dogecoin is now the 4th largest cryptocurrency, despite being only 2 months old. By giving away coins to each other and to new users, dogecoin users increase the value of their remaining holdings by more than the previous value of what was donated! Dogecoin users with a high reputation for generosity and altruism are unable to empty their wallets because others donate them even more coins than they have given away. As an active dogecoin user, I am a true believer in its culture and I hope dogecoin will be able to scale up from its current market cap of $60 million into a size that can globally exchange the economics of debt and artificial scarcity of centrally controlled money for economics of cooperation and distributed control of money. I also wish that this blog had the capability to send dogecoin tips to users who make insightful posts, as much of reddit now does!

  9. nad says:

    John wrote:

    Len Adleman’s definition of logical depth is a bit different, but this give you some of the flavor of various things that can happen.

    You both are sort of informal in your descriptions here, so is the difference meant here the following: that for Len Adleman’s definition of logical depth, which he called (?) the "depth of a string" D(\sigma), the program chosen is the one of minimal length, whereas for your/Bennett’s definition of "logical depth" the program chosen has minimal length among the programs that run fastest? I find the latter description difficult, since what exactly does "running" mean? That seems to depend on the corresponding physical realization.

    Frederik wrote:

    You may have heard of the “money multiplier” effect in introductory economy courses, i.e. the central bank creating physical money and the bank creating an electronic multiple of that, but in reality it’s more of the opposite, with banks creating loans in order to maximize their profits.

    The additional money which is described by the "multiplier" is not "created" in the sense that there is more money for circulation. It is a bookkeeping artefact. It might be that there is a misunderstanding.
    I might explain more about that and also about the money supply if I have more time.

    • Frederik De Roo says:

      Nad wrote:

      The additional money which is described by the "multiplier" is not "created" in the sense that there is more money for circulation. It is a bookkeeping artefact. It might be that there is a misunderstanding.

      First of all, it appears the multiplier effect itself is a textbook artefact; it’s not how money creation happens in practice. So we shouldn’t spend too much time discussing it, because it’s purely economic theory.

      Secondly, electronic money is real in the sense that you can use it to buy stuff and it can also drive up e.g. house prices, there is more (electronic) money available for circulation. This is exactly why it is claimed that the banking sector constitutes an important part of a capitalistic economy: by providing liquidity. Electronic money is not real/physical in the sense that not everybody can simultaneously demand coins or notes for their electronic money. So there is no more (base) money available for circulation. If this is what you mean by “bookkeeping artefact”, I agree with you. Sadly, this bookkeeping artefact has real consequences for the economy; the amount of electronic money dwarfs the amount of base money.

      I might explain more about that and also about the money supply if I have more time.

      You can spare yourself the time.

    • John Baez says:

      Nadja writes:

      I find the latter description difficult, since what exactly does "running" mean? That seems to depend on the corresponding physical realization.

      There are standard ways to make such concepts precise in computer science, which justify the informal ways of talking about runtime. Of course we don’t use physical machines to make this concept precise: we use mathematics!

      It would take too long to explain this stuff here, but you can get a feeling for it if you read the Wikipedia definition of Kolmogorov complexity, also known as algorithmic information. It states the crucial invariance theorem, which addresses your concern.

  10. Eli Rabett says:

    Bitcoins are only massless if you neglect the size and cost of the wallets needed to hold them. Since bank accounts are by nature notational, what is the difference?

  11. […] Leonard Adelman — he’s the A in RSA — has a guest post over on John Baez’s blog about the rarest things in the universe. […]

  12. Many interesting posts about math, physics, and science!

  13. nad says:

    Apart from the energy costs of transactions, as mentioned above, it seems the minting also gets increasingly energy costly. The New York Times has an article with photos about a minting factory: In China’s Hinterlands, Workers Mine Bitcoin for a Digital Fortune.

    Would be interesting to compare the energy costs for minting and for transactions.

  14. nad says:

    By the way, does anyone know whether evolutionary algorithms (https://en.wikipedia.org/wiki/Evolutionary_algorithm) are used in bitcoin minting?
