It’s from a blog called Under the SymmeTree.

I do agree. Not being a scientist but a (retired) engineer, I admit to bringing up something that is perhaps tangential. From an equation that explained to me a well-studied phenomenon in psychology called probability learning, I think I see a way to apply this equation, originally for the lab animals’ behavior, to guide investors’ or governments’ spending if and when ENSOs get worse and worse. My application of the equation wouldn’t care at all about the details of particular models, and it doesn’t answer which model explains historical data better. The idea is just to give investors and governments some guidance for their spending behavior in real time, as a future unfolds in which the probability of ENSOs might well change over comparatively short periods of time.

Although I do think it would be interesting to use the equation to compare a number of differently initialized Hopf bifurcation models with 1/f noise against what actually happens in the future. The algorithm might pick one or another of the differently initialized Hopf models as time goes on. And in a separate run, their performance as the future unfolds could be compared to the same application of the equation to the models of the “big guys.” To me it looks like a competition between simple and complex models.

As such, I think the only time you’d use them is when you’re coming up with a new classifier technique: you compare against the base model (on your “validation set”) and discard one of the two models. (Actually, if you care about the *kinds* of error (false positives, false negatives, etc.) you might find that different models work better for different regimes, and then you’re off doing ROC analysis, as briefly discussed here. However, in that case you’d still only have one active model for each combination of error values.)
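For concreteness, the ROC bookkeeping mentioned above can be sketched in a few lines of Python (the scores and labels below are made-up illustration data, not anything from the ENSO discussion): sweep the decision threshold down through the classifier’s scores and record the (false-positive rate, true-positive rate) pair at each step.

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) pairs as the decision threshold sweeps down
    through the classifier scores.  labels are 1 (positive) / 0 (negative)."""
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    # visit examples from highest score to lowest: each step lowers the threshold
    for _, y in sorted(zip(scores, labels), reverse=True):
        if y == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# a perfect ranker: every positive scored above every negative,
# so the curve passes through the ideal corner (FPR 0, TPR 1)
pts = roc_points([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])
```

Comparing two models then amounts to comparing their point sets (or areas under the curves) in the regime of error trade-offs you actually care about.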

That’s related to what I think Tim was getting at; I’ll have to think some more about what you proposed as an idea in its own right.

Since 1/f noise is found in some types of meteorological data, has anybody thought about using 1/f noise to model ENSO with a Hopf bifurcation plus noise? Since 1/f noise is said to model a “long-term memory effect,” could it also be relevant to the current discussion in the El Nino project, part 6, between Steve Wenner and HyperG about Bayesian and frequentist interpretations of probability?
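One way such a toy model could be sketched in Python (all parameter values here are invented for illustration, not fitted to anything): generate approximate 1/f noise by spectrally shaping white noise, then use it to drive the Hopf normal form, whose noisy limit cycle gives a crude ENSO-like oscillation.

```python
import numpy as np

def pink_noise(n, rng):
    """Approximate 1/f noise: scale a white-noise spectrum by 1/sqrt(f)
    so the power spectrum falls off as 1/f."""
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                  # avoid dividing by zero at DC
    shaped = spectrum / np.sqrt(freqs)
    shaped[0] = 0.0                      # drop the DC component
    noise = np.fft.irfft(shaped, n)
    return noise / noise.std()

def noisy_hopf(mu=0.1, omega=0.15, eps=0.05, n=5000, dt=1.0, seed=0):
    """Euler integration of the Hopf normal form
        dz/dt = (mu + i*omega) z - |z|^2 z + eps * xi(t)
    driven by 1/f noise xi; returns Re z as the observable."""
    rng = np.random.default_rng(seed)
    xi = pink_noise(n, rng)
    z = 0.1 + 0.0j
    x = np.empty(n)
    for k in range(n):
        z += ((mu + 1j * omega) * z - abs(z) ** 2 * z) * dt + eps * xi[k] * dt
        x[k] = z.real
    return x

series = noisy_hopf()
```

For mu > 0 the deterministic part settles onto a limit cycle of radius sqrt(mu); the 1/f forcing then jitters both amplitude and phase with long-memory correlations, which is the combination being asked about.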

— In the above post Tim van Beek wrote “a model can have different uses…[for example] as a null hypothesis for other, more sophisticated models.”

Instead of a single null hypothesis I wonder if an algorithm based on multiple null hypotheses would be useful. Don’t know if that’s standard but here goes.

This is an idea about an optimal capital budgeting process based on software prediction of ENSOs (using an algorithm based on multiple null hypotheses).

Why this kind of approach to software specification? Because ENSOs have big economic costs and so we might be able to cost-justify developing some software for this kind of optimal budgeting process.

Banks should be interested because, on behalf of their shareholders, they would be able to bet their money on better ENSO predictions. Governments should be interested because, on behalf of the people under their governance, they would be able to make better expenditures e.g. on appropriate corn storage and so on.

The idea comes from looking at Steve’s spreadsheet, specifically the table for “Training,” and then thinking of all those models from the “big guys” that John showed us as, in effect, null hypotheses. For what follows I assume that each null hypothesis on the list either predicts or does not predict an upcoming ENSO each year.

To make it easier (for me anyway), imagine a cartoon about a bank with a group of investment bankers, in three acts. Each investment banker is given just one dollar per year, from a finite pool of dollars, to invest based on predictions of an ENSO. After all, it’s just a cartoon.

Act one focuses on one of the investment bankers. As the curtain opens it is capital budgeting time and he is considering all of the null hypotheses.

Scene one: he focuses on one of the null hypotheses. It predicts that next year there will be an ENSO. The banker bets his one dollar on shorting a company that would be hurt by the predicted ENSO.

Scene two: the predicted ENSO does not occur. The banker puts a mark on a blackboard for this particular null hypothesis labelled “buyer’s regret,” because he regrets having listened to its prediction.

Scene three: time passes. Once again we see the banker focusing on the very same null hypothesis. It predicts an ENSO for next year. This time the banker takes his dollar elsewhere.

Scene four: as predicted, the ENSO occurs. The banker puts a mark on another blackboard, next to the buyer’s-regret blackboard for this null hypothesis, labelled “regret at lost opportunity.”

Scene five: time moves on like pages turning in a book. We see the banker putting more and more marks on each blackboard, keeping track of the two kinds of regret associated with this null hypothesis.

Final scene of act one: the narrator tells us how the banker has balanced the counts. Through his choices the banker has evenly balanced the counts on the two blackboards assigned to the null hypothesis in question: buyer’s regret balances regret at lost opportunity. Each mark of buyer’s regret pushes the investment banker away from betting on this null hypothesis, while each mark of lost opportunity pushes him toward it. He is pushed toward and away from the null hypothesis by these two opposing entropic forces, and equilibrium occurs when they balance each other.
Note: the bankers in this cartoon bet only on whether or not a particular null hypothesis is right in its prediction, regardless of whether the prediction is for or against an ENSO. (This would be the infamous probability learning algorithm if all the null hypotheses worked like the feeding spots in the probability learning experiment. Of course they don’t, but for now this is just a cartoon.)
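The balance point in the cartoon can actually be written down: if the banker backs a hypothesis a fraction p of the time and it is right a fraction a of the time, buyer’s regret accrues at rate p(1 − a) and lost-opportunity regret at rate (1 − p)a, and the two balance exactly when p = a — the probability-matching behavior seen in the animal experiments. A toy Python simulation of the single banker in act one (the function name, step size, and seeds are all my own invented choices, purely illustrative):

```python
import random

def banker(accuracy, rounds=50000, eta=0.001, seed=2):
    """One banker, one null hypothesis.  Each period he backs the
    hypothesis with probability p.  A mark of buyer's regret (backed it,
    it was wrong) nudges p down; a mark of lost opportunity (skipped it,
    it was right) nudges p up.  The mean drift per round is
    eta * ((1-p)*a - p*(1-a)), which vanishes at p = a:
    probability matching."""
    rng = random.Random(seed)
    p = 0.5
    for _ in range(rounds):
        backed = rng.random() < p
        right = rng.random() < accuracy
        if backed and not right:
            p = max(0.0, p - eta)    # buyer's regret pushes away
        elif not backed and right:
            p = min(1.0, p + eta)    # lost opportunity pulls toward
    return p
```

With accuracy 0.8 the betting fraction settles near 0.8 rather than the “rational” all-or-nothing choice — exactly what probability-learning experiments report for the lab animals.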

Act two. Scene one: we see the entire group of investors for the bank, each given a dollar bill per budgeting period, each balancing these two forces of regret for each null hypothesis in the collection that John gave us. That is, each banker has two blackboards for each of the null hypotheses; it’s a lot of blackboards. Scene two: in a flurry of activity the bankers run from one kind of blackboard to the other for each null hypothesis. Finally some form emerges, and we see things settle out: for each null hypothesis there is a group of bankers, each group’s size in proportion to that null hypothesis’s prediction accuracy. (It’s like the school of fish settling into a Nash equilibrium in a previous comment on biology.) Scene three: we focus on the original solitary banker and see the thought bubble in his mind: “I’d like to try out another one of those hypotheses, but I’m afraid of leaving my group here. It’s stable. Who knows, if I leave they might not let me back in, and those other groups might not let me in. Think I’ll just stay here. I’m afraid to do anything else.”

Act three: the narrator speaks. “Socrates left us with the idea that self-knowledge is the basis of all other knowledge. We may never be able to know the perfect model, but we do know how all of these null hypotheses can make us experience regret. The self-knowledge in this case is knowing our own experience of regret as a kind of thirst or hunger not satisfied. It’s the basis of the algorithm.”

(curtain closes)

I think we might get a chance to see some, at some point!

Who is “we”? Museum wards, artists, students, the British upper class?

In the Math building in Göttingen there were (at least some years ago) instruments visibly on display. The exhibition was in glass boxes and was open whenever the math building was open. The collection also holds a reconstruction, by Mühlendyck, of the Abdank-Abakanowicz integraph built by Coradi.

Hubert Airy

Nature, vol. 4, 1871, pp. 310–313 (part I) and 370–372 (part II)

“It was a happy chance that directed my fingers, in an idle mood, one day in March of last year, to the top of a stiff twig that sprang from the stool of an old acacia, and rose to a height of about three feet, where it had been lopped by the gardener’s knife. Pulling the twig aside, and letting it fly back by its own elasticity, I noticed the path which its top traced in the air. … On the present occasion I could see that the twig began at once to deviate from the plane of its first vibration, and to describe an elliptic path, the ellipse growing wider and shorter till it was nearly circular, then still wider and still shorter, till its width exceeded its length, and it was again elliptic, but the long axis now occupied nearly the position of what was the short axis before”.

Moving from the garden to the workshop (his bedroom – it would have had a high ceiling) he improvised, risking half a hundredweight of lead going through his bedroom floor and struggling to make a reliable pen. Finally: “It chanced, however, that the adjustment for the proportion 2 : 3 was beautifully accurate. I shall never forget the feeling of delight which I experienced while watching the marvellous fidelity with which the pen point traced the curve appropriate for that proportion”.
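The curve Airy saw for the proportion 2 : 3 is a Lissajous figure — two perpendicular oscillations with frequencies in the ratio 2 : 3. A short Python sketch (the phase value is an arbitrary choice of mine, just to get a pleasant closed figure):

```python
import numpy as np

def lissajous(p=2, q=3, phase=np.pi / 2, n=2000):
    """Points on the closed curve x = sin(p*t + phase), y = sin(q*t):
    the figure a compound pendulum with frequency ratio p:q traces
    on paper."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    return np.sin(p * t + phase), np.sin(q * t)

x, y = lissajous()
```

Because 2 and 3 are commensurable the path closes after one full period; an irrational frequency ratio would instead gradually fill the bounding rectangle, which is why Airy’s accurate 2 : 3 adjustment rewarded him with a clean repeating curve.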

I think Anita would recognise that feeling. Airy senior would have appreciated the Mathematica code too.

https://archive.org/stream/nature41871lock#page/310/mode/2up

Marvellous expansive writing that would not stand a chance today, especially in Nature!

Warning: reading old issues of Nature can be addictive.

Hubert Airy (1838-1903), a physician, was a son of Sir George Airy (1801-1892), mathematician and Astronomer Royal (1835-1881).

(My link called the file “ElNinoTemps.xslx”, but it’s “ElNinoTemps.xlsx”.)
