## The Kuramoto–Sivashinsky Equation (Part 5)

In Parts 3 and 4, I showed some work of Cheyne Weis on the ‘derivative form’ of the Kuramoto–Sivashinsky equation, namely

$u_t + u_{xx} + u_{xxxx} + u u_x = 0$

Steve Huntsman’s picture of a solution above gives you a good feel for how this works.

Now let’s turn to the ‘integral form’, namely

$h_t + h_{xx} + h_{xxxx} + \frac{1}{2} (h_x)^2 = 0$

This has rather different behavior, though it’s closely related, since if $h$ is any solution of the integral form then

$u = h_x$

is a solution of the derivative form.
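This is easy to check symbolically: differentiate the integral form with respect to $x$ and substitute $u = h_x$, and you get exactly the derivative form. Here is a quick sketch of that check in SymPy (purely illustrative, not part of Cheyne's code):

```python
import sympy as sp

t, x = sp.symbols('t x')
h = sp.Function('h')(t, x)

# integral form: h_t + h_xx + h_xxxx + (1/2)(h_x)^2 = 0
integral_form = (sp.diff(h, t) + sp.diff(h, x, 2)
                 + sp.diff(h, x, 4) + sp.Rational(1, 2) * sp.diff(h, x)**2)

# differentiate the whole equation in x ...
derived = sp.diff(integral_form, x)

# ... and compare with the derivative form written in terms of u = h_x
u = sp.diff(h, x)
derivative_form = sp.diff(u, t) + sp.diff(u, x, 2) + sp.diff(u, x, 4) + u * sp.diff(u, x)

print(sp.simplify(derived - derivative_form))  # 0
```

So every solution $h$ of the integral form gives a solution $u = h_x$ of the derivative form, as claimed.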

Cheyne drew a solution of the integral form:

You’ll immediately see the most prominent feature: it slopes down! I’ll show later that the average of $h$ over space can never increase with time, and it decreases unless $h$ is constant as a function of space. By contrast, we saw in Part 2 that the average of $u$ over space never changes with time.

However, we can subtract off the average of $h$ over space to eliminate this dramatic but rather boring effect. The result looks like this:

Now it’s very easy to see the ‘stripes’ I’m so obsessed with: they are the ridges in these pictures. You can see how as time increases from left to right these stripes are born and merge, but never die or split.

But how can we mathematically define these stripes, to make it possible to state precise conjectures about them? We could try defining them to be points where $u$ is locally maximized as a function of $x$ at any time $t.$ With this definition, Cheyne gets stripes like this:

The previous picture shows up in the lower right hand corner of this one.

These stripes look pretty good, but you’ll see some gaps where they momentarily disappear and then reappear. I don’t think these invalidate my conjecture that stripes never ‘die’. I just think this definition of stripe is not quite right. (Of course I would think that, wouldn’t I? I want the conjecture to be true!)
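In case it helps to make this definition concrete, here is one way the ‘local maximum in $x$ at each fixed time’ idea might be sketched in code. The array layout and the toy data are my own assumptions, not Cheyne's setup:

```python
import numpy as np

def stripe_points(u):
    """Boolean mask marking strict local maxima of u in x at each fixed time.

    u[i, j] is the solution at time t_i and grid point x_j, with x periodic.
    """
    left = np.roll(u, 1, axis=1)    # periodic neighbor to the left in x
    right = np.roll(u, -1, axis=1)  # periodic neighbor to the right in x
    return (u > left) & (u > right)

# Toy example: one bump per time slice, so exactly one stripe point per row.
t = np.linspace(0, 1, 4)
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x)[None, :] + 0 * t[:, None]

mask = stripe_points(u)
print(mask.sum(axis=1))  # [1 1 1 1]
```

With real data the mask traces out curves in the $(t,x)$ plane, and the gaps mentioned above show up as rows where a maximum briefly fails the strict inequality.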

Cheyne thought that maybe overlaying maxima in time would help:

This fills in some gaps, but there are still stripes that momentarily die, only to be shortly reborn. It might be good to define stripes to be points where this function ($u$ minus its average over space) exceeds a certain cutoff.

Let’s conclude by proving that the average of $h$ over space can never increase with time. To prove this, just take the time derivative of the integral of $h$ over space, and show it’s $\le 0.$ Remember that we’re assuming $h(t,x)$ is periodic in $x$ with period $L$, so ‘space’ is the interval $[0,L]$ with its endpoints identified to form a circle. So, we get

$\begin{array}{ccl} \displaystyle{ \frac{d}{d t} \int_0^L h(t,x) \, dx } &=& \displaystyle{ \int_0^L h_t(t,x) \, dx } \\ \\ &=& \displaystyle{ -\int_0^L \left( h_{xx} + h_{xxxx} + \frac{1}{2} (h_x)^2 \right) \, dx } \\ \\ &=& \displaystyle{ -\left( h_x + h_{xxx} \right) \Big|_0^L - \frac{1}{2} \int_0^L (h_x)^2 \, dx } \\ \\ &=& \displaystyle{ -\frac{1}{2} \int_0^L (h_x)^2 \, dx } \end{array}$

This is $\le 0,$ as desired. Moreover, it’s zero iff $h$ is constant as a function of space!
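This identity can also be sanity-checked numerically using spectral derivatives on a periodic grid. The grid size and test profile below are arbitrary choices of mine, just a throwaway sketch:

```python
import numpy as np

L, N = 32.0, 256
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # spectral wavenumbers

def ddx(f, order=1):
    """Spectral derivative of a periodic array."""
    return np.real(np.fft.ifft((1j * k) ** order * np.fft.fft(f)))

# an arbitrary smooth periodic profile h(x)
h = np.sin(2 * np.pi * x / L) + 0.3 * np.cos(8 * np.pi * x / L)

# h_t from the integral form of the equation
h_t = -(ddx(h, 2) + ddx(h, 4) + 0.5 * ddx(h) ** 2)

dx = L / N
lhs = np.sum(h_t) * dx                    # d/dt of the spatial integral of h
rhs = -0.5 * np.sum(ddx(h) ** 2) * dx     # the claimed value
print(lhs, rhs)  # the two agree, and both are negative
```

The boundary terms vanish automatically here, since spectral derivatives of a periodic function integrate to zero over a full period.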

### 3 Responses to The Kuramoto–Sivashinsky Equation (Part 5)

1. Bob says:

For fun, I made a plot of u^2:

https://ibb.co/QMPkBxB

In this picture, the “particles” look more like “particle-antiparticle” bound states. The blue stripes are the zeros in u where the sign of u is changing from positive to negative. If you zoom in on a bright merger event, it looks like the innermost particle and antiparticle annihilate, and the outermost particle and antiparticle recombine into a new bound state. The whole picture looks like a vacuum breaking down and forming a condensate.

2. Bob says:

A (maybe crazy) thought: To the extent that the K-S equation can make these excitations “for free,” it is a bit like they are massless excitations. Are they Goldstone modes for some broken symmetry? The K-S equation is Burgers’ equation + some higher derivative terms. Burgers’ equation on the circle has (I believe) an infinite Diff(S^1) symmetry called particle relabeling symmetry. Do we get Goldstones from (explicitly) breaking Diff(S^1)?

3. Graham Jones says:

I have been making pictures a bit like Cheyne Weis’s. I am interested in the branching patterns. The way I see it, the gaps are due to having little bumps on the sides of big bumps. Suppose there is a big bump like this:

0 1 2 3 4 5 4 3 2 1 0

and a small bump like 0 1 1+d 0, where d is small. Combining them, we might get

0 2 3+d 3 4 5 4 3 2 1 0

If d is just above zero, the 3+d is a maximum. If it is just below zero, it disappears. This is not a numerical problem: refining the x and t resolution doesn’t help. On the other hand, looking at the gradient near the point of disappearance does tell you where to find the big bump. So I think you can join things up that way, with lines of constant t.
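Graham’s small-bump example is easy to check with a few lines of code. This is just a sketch of his toy profiles above, not his actual implementation:

```python
def profile(d):
    """Graham's big bump with a small bump of height 1+d added on its left side."""
    return [0, 2, 3 + d, 3, 4, 5, 4, 3, 2, 1, 0]

def local_max_indices(vals):
    """Indices of strict interior local maxima."""
    return [i for i in range(1, len(vals) - 1)
            if vals[i] > vals[i - 1] and vals[i] > vals[i + 1]]

print(local_max_indices(profile(+0.1)))  # [2, 5]: the 3+d point is a maximum
print(local_max_indices(profile(-0.1)))  # [5]: it disappears when d < 0
```

So an arbitrarily small change in d makes a maximum appear or vanish, which is exactly why refining the resolution can’t close these gaps.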

I haven’t implemented this, except by printing out images and joining some gaps by hand. It is clear that the branching pattern is very unbalanced. When two subtrees join, they each have a number of tips, say i and n-i. In simple models of branching processes, the probabilities of 1:(n-1), 2:(n-2), … (n-1):1 are all equal to 1/(n-1). The Kuramoto–Sivashinsky branching pattern favours the extremes. In particular, there are lots of 1:(n-1) and (n-1):1. I would like someone to prove a theorem about this!

Here is something else I noticed when trying the integral version with large L. If you look at $h(x,t)$ for fixed $t$, and at low resolution so that the bumps and dips are lost, the result looks (to me) like a Brownian bridge (https://en.wikipedia.org/wiki/Brownian_bridge). I have a glimmer of an idea about why that happens. If $h(x,t)$ is a solution, then so is $h(x,t)+c$ for any constant $c$. So in a range which is big compared to the bumps, but small compared to L, there is nothing to stop $h(x,t)$ drifting away, except the gentle long range tug that comes from having to be periodic.
