In Parts 3 and 4, I showed some work of Cheyne Weis on the ‘derivative form’ of the Kuramoto–Sivashinsky equation, namely

$$u_t + u_{xx} + u_{xxxx} + u u_x = 0$$

Steve Huntsman’s picture of a solution above gives you a good feel for how this works.

Now let’s turn to the ‘integral form’, namely

$$h_t + h_{xx} + h_{xxxx} + \tfrac{1}{2} h_x^2 = 0$$

This has rather different behavior, though it’s closely related, since if $h$ is any solution of the integral form then

$$u = h_x$$

is a solution of the derivative form.
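
To check this, differentiate the integral form with respect to $x$ and substitute $u = h_x$:

```latex
\partial_x\!\left( h_t + h_{xx} + h_{xxxx} + \tfrac{1}{2} h_x^2 \right)
  = u_t + u_{xx} + u_{xxxx} + u\,u_x = 0,
\qquad \text{since } \partial_x\!\left(\tfrac{1}{2} h_x^2\right) = h_x h_{xx} = u\,u_x .
```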

Cheyne drew a solution of the integral form:

You’ll immediately see the most prominent feature: it slopes down! I’ll show later that the average of $h$ over space can never increase with time, and it decreases unless $h$ is constant as a function of space. By contrast, we saw in Part 2 that the average of $u$ over space never changes with time.

However, we can subtract off the average of $h$ over space to eliminate this dramatic but rather boring effect. The result looks like this:
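
In code this subtraction is a one-line detrending step. A minimal sketch, assuming the solution is stored as a 2d array `h[t, x]`; the array names and toy data here are my assumptions, not from the post:

```python
import numpy as np

# Toy stand-in for a solution h[t, x]: a random field whose mean drifts in time.
rng = np.random.default_rng(1)
h = np.cumsum(rng.standard_normal((40, 128)), axis=0)

# Subtract the spatial average at each time to remove the overall downward slope.
h_detrended = h - h.mean(axis=1, keepdims=True)

# Every time slice of the detrended field now has zero spatial mean.
assert np.allclose(h_detrended.mean(axis=1), 0.0)
```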

Now it’s very easy to see the ‘stripes’ I’m so obsessed with: they are the ridges in these pictures. You can see how as time increases from left to right these stripes are born and merge, but never die or split.

But how can we mathematically define these stripes, to make it possible to state precise conjectures about them? We could try defining them to be points where $h$ is locally maximized as a function of $x$ at any time $t$. With this definition, Cheyne gets stripes like this:
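
This definition of stripe is easy to implement. A minimal sketch, assuming the solution is a 2d array `h[t, x]` with `x` periodic; the names and the toy ridge are my assumptions, not Cheyne’s actual code:

```python
import numpy as np

def stripe_mask(h):
    """Mark points where h is a strict local maximum in x at each fixed t,
    comparing against neighbors with periodic wraparound."""
    left = np.roll(h, 1, axis=1)
    right = np.roll(h, -1, axis=1)
    return (h > left) & (h > right)

# Toy example: a single ridge drifting in time.
t = np.linspace(0, 1, 50)[:, None]
x = np.linspace(0, 2 * np.pi, 200, endpoint=False)[None, :]
h = np.exp(np.cos(x - t))

mask = stripe_mask(h)
assert (mask.sum(axis=1) == 1).all()  # exactly one stripe point per time slice
```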

The previous picture shows up in the lower right hand corner of this one.

These stripes look pretty good, but you’ll see some gaps where they momentarily disappear and then reappear. I don’t think these invalidate my conjecture that stripes never ‘die’. I just think this definition of stripe is not quite right. (Of course I would think that, wouldn’t I? I want the conjecture to be true!)

Cheyne thought that maybe overlaying maxima in time would help:

This fills in some gaps, but there are still stripes that momentarily die, only to be shortly reborn. It might be good to define stripes to be points where this function ($h$ minus its average over space) exceeds a certain cutoff.

Let’s conclude by proving that the average of $h$ over space can never increase with time. To prove this, just take the time derivative of the integral of $h$ over space, and show it’s $\le 0$. Remember that we’re assuming $h$ is periodic in $x$ with period $L$, so ‘space’ is the interval $[0, L]$ with its endpoints identified to form a circle. So, we get

$$\frac{d}{dt} \int_0^L h \, dx = \int_0^L h_t \, dx = -\int_0^L \left( h_{xx} + h_{xxxx} + \tfrac{1}{2} h_x^2 \right) dx = -\frac{1}{2} \int_0^L h_x^2 \, dx \le 0$$

since the integrals of $h_{xx}$ and $h_{xxxx}$ over the circle vanish by periodicity. This is $\le 0$, as desired. Moreover, it’s zero iff $h_x$ vanishes identically, that is, iff $h$ is constant as a function on space!
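
The facts this computation rests on are easy to check numerically with spectral derivatives: the integrals of the $h_{xx}$ and $h_{xxxx}$ terms vanish on the circle, and what remains is $-\frac{1}{2}\int_0^L h_x^2 \, dx \le 0$. A minimal sketch; the helper function and random test profile are my own, not from the post:

```python
import numpy as np

L, N = 32.0, 256
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)  # angular wavenumbers on the circle

# Random smooth periodic test profile h (band-limited).
rng = np.random.default_rng(0)
h = np.real(np.fft.ifft(rng.standard_normal(N) * (np.abs(k) < 2.0)))

def deriv(f, n):
    """n-th spatial derivative of a periodic function, via the FFT."""
    return np.real(np.fft.ifft((1j * k) ** n * np.fft.fft(f)))

# Integrals of h_xx and h_xxxx over the circle vanish by periodicity.
assert abs(np.sum(deriv(h, 2)) * dx) < 1e-10
assert abs(np.sum(deriv(h, 4)) * dx) < 1e-10

# So d/dt of the spatial integral of h reduces to -(1/2) * integral of h_x^2 <= 0.
h_t = -(deriv(h, 2) + deriv(h, 4) + 0.5 * deriv(h, 1) ** 2)
lhs = np.sum(h_t) * dx
assert lhs <= 0
assert abs(lhs + 0.5 * np.sum(deriv(h, 1) ** 2) * dx) < 1e-10
```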

This entry was posted on Sunday, October 24th, 2021 at 1:53 am and is filed under mathematics.

3 Responses to The Kuramoto–Sivashinsky Equation (Part 5)



For fun, I made a plot of u^2:

https://ibb.co/QMPkBxB

In this picture, the “particles” look more like “particle-antiparticle” bound states. The blue stripes are the zeros in u where the sign of u is changing from positive to negative. If you zoom in on a bright merger event, it looks like the innermost particle and antiparticle annihilate, and the outermost particle and antiparticle recombine into a new bound state. The whole picture looks like a vacuum breaking down and forming a condensate.

A (maybe crazy) thought: To the extent that the K-S equation can make these excitations “for free,” it is a bit like they are massless excitations. Are they Goldstone modes for some broken symmetry? The K-S equation is Burgers’ equation plus some higher derivative terms. Burgers’ equation on the circle has (I believe) an infinite Diff(S^1) symmetry called particle relabeling symmetry. Do we get Goldstones from (explicitly) breaking Diff(S^1)?

I have been making pictures a bit like Cheyne Weis’s. I am interested in the branching patterns. The way I see it, the gaps are due to having little bumps on the sides of big bumps. Suppose there is a big bump like this:

0 1 2 3 4 5 4 3 2 1 0

and a small bump like 0 1 1+d 0, where d is small. Combining them, we might get

0 2 3+d 3 4 5 4 3 2 1 0

If d is just above zero, the 3+d is a maximum. If it is just below zero, it disappears. This is not a numerical problem: refining the x and t resolution doesn’t help. On the other hand, looking at the gradient near the point of disappearance does tell you where to find the big bump. So I think you can join things up that way, with lines of constant t.
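
The disappearing shoulder maximum described above can be reproduced in a few lines; this toy check is my own illustration of the comment’s point, not the commenter’s code:

```python
def local_maxima(vals):
    """Indices i where vals[i] is a strict local maximum (interior points only)."""
    return [i for i in range(1, len(vals) - 1) if vals[i - 1] < vals[i] > vals[i + 1]]

def profile(d):
    # big bump 0 1 2 3 4 5 4 3 2 1 0 with the small bump 0 1 1+d 0
    # of height 1+d added on its left shoulder
    return [0, 2, 3 + d, 3, 4, 5, 4, 3, 2, 1, 0]

assert local_maxima(profile(+0.1)) == [2, 5]  # shoulder bump and main peak
assert local_maxima(profile(-0.1)) == [5]     # the shoulder maximum has vanished
```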

I haven’t implemented this, except by printing out images and joining some gaps by hand. It is clear that the branching pattern is very unbalanced. When two subtrees join, they each have a number of tips, say i and n-i. In simple models of branching processes, the probabilities of 1:(n-1), 2:(n-2), … (n-1):1 are all equal to 1/(n-1). The Kuramoto–Sivashinsky branching pattern favours the extremes. In particular, there are lots of 1:(n-1) and (n-1):1. I would like someone to prove a theorem about this!

Something else I noticed when trying the integral version with large L. If you look at h as a function of x for fixed t, at low resolution so that the bumps and dips are lost, the result looks (to me) like a Brownian bridge (https://en.wikipedia.org/wiki/Brownian_bridge). I have a glimmer of an idea about why that happens. If h is a solution, then so is h + c for any constant c. So in a range which is big compared to the bumps, but small compared to L, there is nothing to stop h drifting away, except the gentle long-range tug that comes from h having to be periodic.