For details, see:

• Stefan O’Rear, A Turing machine Metamath verifier, *Metamath*, 15 May 2016.

I haven’t checked his work, but it’s available on GitHub.

What’s the point of all this? At present, it’s mainly just a game. However, it should have some interesting implications. It should, for example, help us better locate the ‘complexity barrier’.

I explained that idea here:

• John Baez, The complexity barrier, *Azimuth*, 28 October 2011.

Giampiero Campa wrote:

I’d be curious to see the Kolmogorov complexity of a piece by e.g. Bach though.

If you’ve been reading this thread I guess you know that Kolmogorov complexity is uncomputable in general, and nobody can prove anything has a Kolmogorov complexity exceeding a few thousand bytes or so. What we can do is compute upper bounds: we can have a contest to see who can write the shortest program that prints out a Bach fugue (in some standard format), and the length of the winning program will count as an upper bound. But we may never know the shortest program that works—and if the actual Kolmogorov complexity of the piece exceeds the complexity barrier, we *know* we’ll never know what it is.

So, the fun thing would be to have a contest.
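To make the “upper bound” idea concrete (my own toy illustration, not something from the thread): any program that prints a string bounds its Kolmogorov complexity from above by roughly the program’s own length. A minimal sketch in Haskell, where `longString` is a name I made up:

```haskell
-- Toy version of the "upper bound" idea: a short generator for a long
-- string shows K(string) <= length of the generator + O(1).

-- 2000 characters of output...
longString :: String
longString = concat (replicate 1000 "ab")

-- ...produced by one short line above, so the Kolmogorov complexity of
-- longString is at most that line's length plus an interpreter-dependent
-- constant -- far less than 2000.
main :: IO ()
main = putStrLn longString
```

The contest version is the same idea: each entrant’s program length is an upper bound, and the shortest entry is the best bound anyone has found so far.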

I see, thanks.

Then the claim of this being completely “pattern free” is not true. It is probably the case that the way we humans process sounds—look for patterns in them—is based on some fixed mechanism (something like a finite-state machine) that tracks the distance between notes. In other words, we look only for very specific patterns and can’t use “mod”, only simpler functions based on the difference between notes… not sure, just thinking out loud.

I’d be curious to see the Kolmogorov complexity of a piece by e.g. Bach though.

Its Kolmogorov complexity is indeed very low. It is based on Costas arrays, for which there are generating algorithms. Indeed, such an algorithm is given in the video: 1·3 = 3, 3·3 = 9, 9·3 = 27, …

The 88-element array mentioned can be generated by a single Haskell expression:

map (\x -> mod (3^x) 89) [1..88]

which compresses the information in the full list:

[3,9,27,81,65,17,51,64,14,42,37,22,66,20,60,2,6,18,54, 73,41,34,13,39,28,84,74,44,43,40,31,4,12,36,19,57,82, 68,26,78,56,79,59,88,86,80,62,8,24,72,38,25,75,47,52, 67,23,69,29,87,83,71,35,16,48,55,76,50,61,5,15,45,46, 49,58,85,77,53,70,32,7,21,63,11,33,10,30,1]

Since a small program generates it, its Kolmogorov complexity is low.
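As a sanity check (my own sketch, not from the original comment), the expression above can be run and verified in a few lines of Haskell; the names `welch` and `isPermutation` are mine. Since 3 is a primitive root mod 89, the sequence should be a permutation of 1..88:

```haskell
import Data.List (sort)

-- Welch construction of a Costas array: successive powers of a
-- primitive root (here 3) modulo a prime (here 89).
-- Integer (not Int) is needed, since 3^88 overflows machine words.
welch :: [Integer]
welch = map (\x -> (3 ^ x) `mod` 89) [1 .. 88]

-- Because 3 is a primitive root mod 89, the 88 values are all distinct:
-- the sequence is a permutation of 1..88.
isPermutation :: Bool
isPermutation = sort welch == [1 .. 88]

main :: IO ()
main = do
  print (take 6 welch)  -- matches the start of the list above: [3,9,27,81,65,17]
  print isPermutation   -- True
```

The last entry is 1, as Fermat’s little theorem predicts: 3^88 ≡ 1 (mod 89).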

http://tedxtalks.ted.com/video/TEDxMIAMI-Scott-Rickard-The-Wor

The step that opens up a variety of productive methods is a change of perspective: from imagining physical systems as driven by our explanatory models, to seeing that we would need models that behave by themselves in order to emulate real complex systems that behave by themselves.

That change in perspective amounts to a new paradigm, I think, and to the idea of “using math to help discover real complex systems from their behavior, rather than to represent them.” Since that is unfamiliar, it might take a good deal of exploratory puzzlement to get started.

It becomes more than an idle curiosity when you realize that complex systems display constantly accumulating change in organization, everywhere at once, in response to both their internal and external environments. So while they “exist” as observed entities, after a fashion, they also display a kind of strictly accumulative evolutionary process.

That changes the rules of science: one must attempt to study what an observer does not see about how a system’s internal organization is changing. One has to switch from studying how systems are “controlled” to studying what they are “learning”. Of course, in reality complex systems are never exactly ‘controlled’ nor exactly ‘learning’, but we bend those and other words to fit the reality if we want our research to be productive.

Chaitin (http://www.umcs.maine.edu/~chaitin/ait2.html) showed that’s true for a variant of LISP that has a “readBit” instruction and whose programs are followed by a string of bits. If you leave out that ability, then the halting probability for LISP is not random; the fact that you can tell where a program ends just by counting parentheses means the encoding is not optimal. Cris Calude and I showed (http://arxiv.org/abs/cs.CC/0606033) that the halting probability for paren-balanced LISP is “asymptotically random”, i.e. it is (1 − epsilon)-random for any epsilon > 0.
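To illustrate the parenthesis-counting point (my own sketch, not code from either paper): in a paren-balanced language, the end of a program can be found purely by tracking nesting depth, so program boundaries carry redundant structure and the encoding cannot be optimally dense. The function name `programEnd` is mine:

```haskell
-- Find where a paren-balanced program ends, just by counting
-- parentheses: return the index one past the close that balances
-- the first open paren, or Nothing if the input never balances.
programEnd :: String -> Maybe Int
programEnd = go 0 0
  where
    go _ _ [] = Nothing
    go i depth (c : cs)
      | c == '(' = go (i + 1) (depth + 1) cs
      | c == ')' && depth == 1 = Just (i + 1)  -- outermost paren closed
      | c == ')' = go (i + 1) (depth - 1) cs
      | otherwise = go (i + 1) depth cs

main :: IO ()
main = print (programEnd "(lambda (x) x) trailing input")  -- Just 14
```

Because every valid program announces its own end this way, no program is a prefix of a longer one, but in a very regular pattern that wastes bits relative to an optimal prefix-free code.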

Thanks, David. I’ve added that reference to my summary of this whole discussion on my website.
