Adaptive time-stepping and white-noise forcing

Last month I implemented the Bogacki-Shampine Runge-Kutta 3(2) method so that I could do some low-accuracy, but efficient, time stepping in some of my code. The method has adaptive time-stepping, where the size of the time step is adjusted to control the error. In particular, the method computes two estimates of the solution at each step: a second-order estimate with local error of order dt^3, and a third-order estimate with local error of order dt^4. The difference between the two estimates gives you an estimate of the size of the error. This means that if you decide that you want an error no larger than 1 part in 1000, you can estimate how big a time step you can take.
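
To make that concrete, here’s a minimal sketch of how the embedded pair produces an error estimate and a new step size. The coefficients are the standard Bogacki-Shampine tableau; the tolerance, safety factor, and loop structure are illustrative choices, not a transcription of my actual implementation.

```python
import numpy as np

def bs32_step(f, t, y, dt):
    # One Bogacki-Shampine 3(2) step: a third-order solution plus an
    # error estimate from the embedded second-order solution.
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.75 * dt, y + 0.75 * dt * k2)
    y3 = y + dt * (2.0 * k1 + 3.0 * k2 + 4.0 * k3) / 9.0        # local error O(dt^4)
    k4 = f(t + dt, y3)                                          # FSAL stage
    y2 = y + dt * (7.0 * k1 / 24 + k2 / 4 + k3 / 3 + k4 / 8)    # local error O(dt^3)
    return y3, np.max(np.abs(y3 - y2))                          # solution, error estimate

def integrate(f, y, t, t_end, dt, tol=1e-3):
    # Accept any step whose error estimate is below tol, then rescale dt
    # using the err ~ dt^3 scaling of the embedded pair.
    while t < t_end:
        dt = min(dt, t_end - t)
        y_new, err = bs32_step(f, t, y, dt)
        if err <= tol:
            t, y = t + dt, y_new                # accept; otherwise retry
        # 0.9 is a conventional safety factor on the rescaled step
        dt *= 0.9 * (tol / max(err, 1e-16)) ** (1.0 / 3.0)
    return y
```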

I find that the method works extremely well and generally speeds up my code; however, I did run into one major problem. I’m doing a forced-dissipative quasigeostrophic turbulence problem where the forcing is narrow-banded in wavenumber and the phases are randomized at each time step. More intuitively, you can think of the system as modeling the surface of the ocean. The narrow-banded wavenumber forcing means that I’m pounding on the surface of the ocean with fists of a particular size (choosing the wavenumber chooses the size of the fist). The randomized phases mean that punching the surface at one point in time has no bearing on where I punch the surface the next time. Narrow-banded wavenumber forcing is fairly physical, as you can imagine that there are certain processes that occur at preferred length scales. However, the randomized phasing isn’t particularly physical and, as it turns out, the adaptive time-stepping algorithm found a way to tell me that.
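
In spectral terms, the forcing looks something like the sketch below: pick an annulus of wavenumbers around a forcing wavenumber and give every mode in it a fresh uniform phase each time the forcing is evaluated. The doubly periodic grid, unit amplitude, and the parameters k_f and dk are illustrative assumptions, not my exact configuration.

```python
import numpy as np

def narrowband_forcing(n, k_f, dk, rng):
    # Forcing that is narrow-banded in wavenumber and white in time:
    # unit amplitude in the annulus |k - k_f| <= dk, with a fresh
    # uniform random phase for every mode on every call.
    k = np.fft.fftfreq(n, d=1.0 / n)                 # integer wavenumbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    mask = np.abs(np.hypot(KX, KY) - k_f) <= dk      # the "fist size"
    phases = rng.uniform(0.0, 2.0 * np.pi, (n, n))   # no memory between calls
    F_hat = np.where(mask, np.exp(1j * phases), 0.0)
    return np.fft.ifft2(F_hat).real                  # real-space forcing

rng = np.random.default_rng(0)
F = narrowband_forcing(256, k_f=20, dk=2, rng=rng)   # illustrative values
```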

When I started to use the adaptive time-stepping algorithm for this problem, I found that the algorithm shrank the time step by a couple of orders of magnitude below what I estimated the time step should be. This defeats the whole purpose of an adaptive time-stepping algorithm, since it’s supposed to increase the time step and shorten the computation time when possible. So what was happening? The algorithm was shrinking the time step to the point where the randomized forcing fell below the error threshold. If you think about it, my randomized-phase forcing is equivalent to adding uncorrelated white noise to the problem: in other words, error. So the time-stepping algorithm was reducing the time step to the point where dt, which multiplies all the terms in the equation, shrank the forcing below the error threshold.
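
Here’s a toy reproduction of that behavior, reusing bs32_step from the sketch above. It assumes the forcing decorrelates between right-hand-side evaluations, which is one way randomized phases can look to the error estimator; under that assumption the error estimate falls off like dt rather than dt^3, so the controller has to crush dt down to roughly the tolerance divided by the forcing amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_noisy(t, y):
    # Toy right-hand side: pure forcing, redrawn on every evaluation,
    # mimicking phases that decorrelate faster than a single step.
    return rng.standard_normal()

# Probe how the embedded error estimate scales with dt
# (bs32_step is defined in the earlier sketch).
for dt in (1e-1, 1e-2, 1e-3):
    errs = [bs32_step(f_noisy, 0.0, 0.0, dt)[1] for _ in range(500)]
    print(f"dt = {dt:.0e}   mean error estimate = {np.mean(errs):.2e}")
# The estimate scales like dt**1 instead of dt**3, so the controller
# must shrink dt until dt times the forcing amplitude is below tol.
```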

What this means to me is that randomizing the phases is unphysical. In reality, if I were to pound my fist on the surface of the ocean in one spot, the next time I hit the surface my fist would likely land nearby, not at some completely random location. My solution is to change the forcing to have Markovian phases, where the phases wander in random directions but don’t jump to entirely new locations at each time step.
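
One simple way to make the phases Markovian is to let them diffuse as a Brownian walk on the circle, so the forcing decorrelates over a finite time instead of instantly. This is just a sketch of that idea; sigma is a hypothetical decorrelation-rate parameter, not necessarily the value or the scheme I settled on.

```python
import numpy as np

def make_phase_walker(shape, sigma, rng):
    # Markovian phases: a Brownian walk on the circle. Each step the
    # phases move by a random increment of size sigma * sqrt(dt), so
    # the forcing decorrelates over a time of order 1 / sigma**2
    # rather than instantaneously.
    phases = rng.uniform(0.0, 2.0 * np.pi, shape)
    def step(dt):
        nonlocal phases
        phases = (phases + sigma * np.sqrt(dt) * rng.standard_normal(shape)) % (2.0 * np.pi)
        return phases
    return step

rng = np.random.default_rng(0)
step_phases = make_phase_walker((256, 256), sigma=5.0, rng=rng)  # sigma is hypothetical
phases = step_phases(dt=1e-2)   # phases drift; they do not jump
```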