Adaptive time-stepping and white-noise forcing

Last month I implemented the Bogacki-Shampine Runge-Kutta 3(2) method so that I could do some low-accuracy, but efficient, time stepping in some of my code. The method has adaptive time-stepping, where the size of the time step is adjusted to control the error. In particular, the method computes two estimates of the solution at each time step: one with a local error of order dt^3, and another with a local error of order dt^4. The difference between the two solution estimates gives you an estimate of the size of the error. This means that if you decide you want an error no larger than 1 part in 1000, you can estimate how big a time step you should take.

I find that the method works extremely well and generally speeds up my code; however, I did run into one major problem. I’m doing a forced-dissipative quasigeostrophic turbulence problem where the forcing is narrow-banded in wavenumber, and the phases are randomized at each time step. More intuitively, you can think of the system as modeling the surface of the ocean. The narrow-banded wavenumber forcing means that I’m pounding on the surface of the ocean with fists of a particular size (choosing the wavenumber chooses the size of the fist). The randomized phases mean that punching the surface at one point in time has no bearing on where I punch the surface the next time. Narrow-banded wavenumber forcing is fairly physical, as you can imagine that there are certain processes that occur at preferred length scales. However, the randomized phasing isn’t particularly physical and, as it turns out, the adaptive time-stepping algorithm found a way to tell me that.

When I started to use the adaptive time-stepping algorithm for this problem, I found that the algorithm decided to shrink the time step to a couple of orders of magnitude below what I estimated the time step should be. This really defeats the whole purpose of an adaptive time-stepping algorithm, since it’s supposed to be able to increase the time step and shorten the computation time when possible. So what was happening? It was shrinking the time step to the point where the randomized forcing fell below the error threshold. If you think about it, my randomized-phase forcing is equivalent to adding uncorrelated white noise to the problem—in other words, error. So the time-stepping algorithm was reducing the time step until dt, which multiplies all the terms in the equation, shrank the forcing below the error threshold.

What this means to me is that randomizing the phases is unphysical. In reality, if I were to pound my fist on top of the ocean in one spot, the next place I hit the surface of the ocean with my fist is likely to be nearby, and not some other completely random location. My solution to this is to change the forcing to have Markovian phases, where the phases may move about in random directions, but they don’t jump to entirely new locations at each time step.

Freakonomics: save the earth, drive your car

In a fairly ridiculous post, Freakonomics bloggers argue that public transit in America can often be less efficient than driving.

Now, the description of their conclusions as I have put it contains most of the information needed to understand the caveats—namely, that there are some pretty inefficient transit networks in the US. But the conclusion as Dubner and Morris wrote it essentially suggests that public transit needs to be looked at skeptically because it often isn’t much better than driving a car. What they really mean to say is that transit generally works well in dense areas where ridership can be fairly high, and that in the US we often have public transit for other reasons (social justice issues, area equity, etc.). That’s a fairly trivial conclusion, so instead they worded things rather dramatically.

Update: Silly me, I should have read Jarrett Walker’s rebuttal, and you should too. Looks like I nailed his primary point too.

Apple Fail #2: Selection

Since 10.8 (or was it 10.7?) you can get yourself into a situation where you can’t select the last item in a column. Yes, that’s right, Apple’s updated UI is so awesome, and so perfect, that you can’t possibly need the last item in a list. Check out the snapshot below,

See that last item ‘XQuartz’? Because the bottom scroll bar appeared over it, I now have no way to select it. If I leave my mouse hovering over it, the bottom scroll bar never disappears. You have to either key down, or move the mouse away in a manner that causes the scroll bar to vanish, and then try again. It’s ridiculous.

Ironically, as I’m typing this post in MarsEdit, the same problem is presenting itself again. The magic appearing scrollbars are blocking out the bottom line of text in the post, which is exactly where I’m typing. It’s so awesome, that I’ll upload that as well,


The bottom line here is that this is completely unacceptable. Apple didn’t use to make this kind of stupid UI mistake, did they?

The GOP bubble suggests that we should worry about their policies

I like this provocative post from Matt Yglesias, where he argues that the inability of the Republican party to predict their own loss correlates with poor policymaking. The connection from that direction is a bit weak to me, but, as a scientist, I’ve always thought the logic extends from this idea found at the end of the piece:

On the right, the idea of academic expertise is held in low esteem. Conservatives accurately perceive that academia is hostile to nationalism and religious traditionalism and thus become much more prone to become out of touch with academic knowledge or to reject valid academic insights even on other topics.

I’m one of those academics he describes in that sentence, so I suppose it isn’t surprising that I find little to like about the Republican party. However, I don’t think it’s so much “nationalism and religious traditionalism” as it is simply the Republican rejection of science. Paul Krugman expressed a similar sentiment when he admitted to rooting for Nate Silver simply on professional grounds, e.g., that science applied to polling leads to good predictions. To me, it’s pretty consistent that rejecting science leads to both bad policy and bad poll reading.

I should also add that I make a, perhaps false, distinction between Republicans and Conservatives that makes me object to Matt’s use of the word Conservatives in the above quote. I consider Republican ideals to be a sort of ‘average’ over the many ideas presented by current politicians and people who self-identify as Republicans. I apply the same sort of thinking to Democrats and, in both cases, I find very little logic behind their platforms, as should probably be expected given my definition. On the other hand, I consider Conservatives and conservatism a way of reasoning that stems from the idea that government should remain small. Conservatism, with my definition, has much to like and certainly doesn’t include a rejection of science.

Why do people quit their jobs after having an affair?

CIA director Petraeus quit his job after an affair came to light and a Lockheed Martin executive was fired after having an affair with a subordinate. In the second case, that’s a clear abuse of power and being fired at least makes some sense. But in the case of David Petraeus, his affair was with his biographer.

I really don’t understand what having an affair with your biographer has to do with running the CIA. It’s such a random connection. You may as well declare that ‘Oh, I had an affair—I guess that means I can’t go to Starbucks anymore.’ That seems as logical to me as ‘Oh, I had an affair—I guess I had better quit my job.’ I didn’t read the whole story, so maybe I’m missing something.

Explicit racism and political party affiliation

My jaw hit the floor after reading this AP article (via Andrew Sullivan) summarizing the results of a study on racism in America. Specifically, the paragraph stating that 79% of Republicans are explicitly racist, compared to 32% of Democrats.

Consistent with past research (Sniderman & Carmines, 1997; Tesler & Sears, 2010b), explicit racism was more common among Republicans than among Democrats in all years. In 2008, the proportion of people expressing anti‐Black attitudes was 31% among Democrats, 49% among independents, and 71% among Republicans, highly significant differences (p<.001). In 2012, the proportion of people expressing anti‐Black attitudes was 32% among Democrats, 48% among independents, and 79% among Republicans, again highly significant differences (p<.001).

Um wow. That is to say that 4 in 5 Republicans respond to questions in an explicitly racist manner.

The implicit racism tests are more sophisticated and use the Affect Misattribution Procedure to identify our unconscious attitudes on racism. The findings in this case suggest that as a society we still have a long way to go,

Implicit anti‐Black attitudes manifested almost the identical pattern but with smaller differences between the parties, higher apparent levels of racism among Democrats, and lower apparent levels of racism among Republicans. In 2008, the proportion of people expressing anti‐Black attitudes was 46% among Democrats, 48% among independents, and 53% among Republicans, statistically significant differences (p=.03). In 2012, the proportion of people expressing anti‐Black attitudes was 55% among Democrats, 49% among independents, and 64% among Republicans, again highly significant differences (p<.001).

These are obviously measuring two different things, but I think it’s pretty fair to say that being explicitly racist is much worse than being implicitly racist.

If I understand the test correctly, the person identified as implicitly racist is labeled as such because they associate a black person with unpleasantness. This isn’t necessarily a bad thing. If the implicit racist lives in a world where there is a strong socioeconomic divide along racial lines, then their response may simply reflect that the black people they see are living in relatively unpleasant conditions. I think one could argue that in some cases it might actually be good to recognize this socioeconomic divide, if it does exist. Of course, in some cases people will react this way for other, less charitable reasons. I think this ambiguity explains why the difference between Democrats and Republicans erodes somewhat with the implicit test.

This is in contrast to the explicit racist who actually believes that the black person is, or somehow should be, inferior and says as much. So, it’s definitely worse to be explicitly racist.

Horrible and depressing.

The non-divide between Romney and Obama on foreign policy

Watching the presidential debate on foreign policy, I think that Romney is working hard to convey a large difference between himself and Obama on the different issues—whether a genuine difference exists or not. There certainly are some differences between the two candidates, much of which might be attributable to Obama having more experience. However, the upshot of this, from my perspective as an Obama supporter, is that Romney ultimately wouldn’t be that bad if he were to end up as president. That he generally agrees with Obama, but merely lacks experience, is actually pretty comforting.

Promises and incumbents

It’s interesting that in last night’s presidential debate Mitt Romney appeared to make substantially more promises about what he will do if elected president than Barack Obama made. I could be wrong on this, but it seems like when Obama was campaigning and debating in 2008 he made plenty of promises of what he would do, some of which he broke and which are plaguing him now.

In other words, what I’m seeing is that the challengers in these elections make plenty of promises, while the incumbents are much more cautious. The irony here is that if the incumbent is re-elected, it doesn’t hurt him four years later if he breaks his promises (since he won’t be running again), while the opposite is true of the challenger.

I’m making this generalization from very few data points, so it could be bunk, although at least in this case it seems to be true.