Thursday, April 14, 2011

Quiet time

GCMs are models

I'd like to bring together some things I expound from time to time about GCMs and predictions. It's a response to questions like: why didn't GCMs predict the pause? Or why can't they get the temperature right in Alice Springs?

GCMs are actually models. Suppose you were designing the Titanic. You might make a scale model which, with suitably scaled dimensions (Reynolds number etc.), would be a good model indeed. It would respond to various forcings (propeller thrust, wind, wave motion) just like the real boat. You would test it with various scenarios: hurricanes, maybe listing, maybe even icebergs. It can tell you many useful things. But it won't tell you whether the Titanic will hit an iceberg. It just doesn't have that sort of information.

So it is with GCMs. They too will tell you how the Earth's climate will respond to forcings. You can subject them to scenarios. But they won't predict weather. They aren't initialized to do that. And, famously, weather is chaotic: you can't actually predict it for very long from initial conditions. If models are doing their job, they will be chaotic too. You can't use them to solve an initial value problem.
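To see what "chaotic" means in practice, here is a minimal sketch in R (a toy, nothing to do with any GCM's internals) using the classic Lorenz-63 system: two runs that start almost identically soon bear no resemblance to each other.

# Minimal sketch of sensitivity to initial conditions, using the Lorenz-63
# toy system with a crude Euler step (illustration only, not GCM code).
lorenz <- function(s, sigma = 10, rho = 28, beta = 8/3) {
  c(sigma * (s[2] - s[1]),
    s[1] * (rho - s[3]) - s[2],
    s[1] * s[2] - beta * s[3])
}
step <- function(s, dt = 0.01) s + dt * lorenz(s)

a <- c(1, 1, 1)          # one initial state
b <- a + c(1e-6, 0, 0)   # a nearly identical one
for (i in 1:3000) { a <- step(a); b <- step(b) }
sqrt(sum((a - b)^2))     # the tiny initial difference has grown enormously

The same divergence happens between a model run and the real atmosphere, however carefully the run is started.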

How GCMs are used


So I'd better say a bit more about how GCMs are traditionally used - then I'll get onto a recent alternative, which is part of the motive for this post. Because GCMs can't reliably work from initial conditions, they are usually started well back in time from the period of interest. This is also common in CFD (computational fluid dynamics). Flows in CFD are also chaotic, and are not usually solved as initial value problems. As with GCMs, a powerful reason is that an initial state just isn't known.



So people "wind back". The idea is to start with conditions that aren't expected to correspond to a measured state. Rather, they strive for physical consistency. If you are modelling a near-incompressible fluid (explicitly), you'll want to make sure that the density is what it should be, and that the velocity field is divergence-free. Otherwise, explosion.
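As a toy illustration of that divergence-free requirement (just a few lines of R on a small grid, not anything taken from a real GCM or CFD code): build a 2D velocity field from a stream function, which guarantees zero divergence, and check it with central differences.

# Toy illustration: a 2D velocity field built from the stream function
# psi = sin(pi*x)*sin(pi*y) is divergence-free by construction.
n <- 50; h <- 1 / (n - 1)
x <- seq(0, 1, length.out = n); y <- seq(0, 1, length.out = n)
u <- outer( pi * sin(pi * x), cos(pi * y))   # u =  d(psi)/dy
v <- outer(-pi * cos(pi * x), sin(pi * y))   # v = -d(psi)/dx
i <- 2:(n - 1)                               # interior grid points only
div <- (u[i + 1, i] - u[i - 1, i]) / (2 * h) +   # du/dx, central difference
       (v[i, i + 1] - v[i, i - 1]) / (2 * h)     # dv/dy, central difference
max(abs(div))   # essentially zero - a field like this is safe to start from

A starting field that failed this check is the sort of thing that produces the "explosion" mentioned above.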


In fact, even that won't be quite right, but the artificial initial effects wash out over time, and the flow comes into some sort of balance with the forcings. Then you can look at variations.


So that is why a GCM, in its normal running, can't predict an El Nino. Good ones do El Ninos well, but with no synchronicity with events on Earth. A good illustration is this video from GFDL:


Note that the months are numbered, but no year is given. It's a prediction of an El Nino, but not for any particular time. Though the months are significant - El Nino is weakly coupled to the annual cycle.


Here is another SST movie from GFDL:


Again it shows all sorts of interesting motions. These are not predictions of actual events, and they are not the consequence of any initial conditions. They are the results of forcings, fluid properties and topography. But they are actually telling you a lot about real-world currents (and SST).

Decadal forecasting


This is the recent alternative that I mentioned. People are trying to get useful information for years in advance from initial conditions. Meehl wrote a paper in 2009 that was essentially a prospectus, and a review earlier this year, subtitled "an update from the trenches". Are we there yet? No, and I think success is still uncertain. For details, I can only suggest reading the paper, but here is a summary quote:
"It remains an important question as to whether or not decadal climate predictions will end up providing useful information to a wide group of stakeholders. Indications now are that temperature, with a greater signal-to-noise ratio, shows the most promise, with precipitation being more challenging. These two quantities are typically the ones that have been addressed so far in the literature. Since sources of skill are time dependent, it is important to emphasize that for the first 5 or so years of a decadal prediction, skill could come from the initial state, and after that skill arises because of the external forcing, with some regions having potentially greater skill than others. Further quantification with other variables needs to be done and applied in reliability studies, which are just now beginning, in order to demonstrate useful- ness of decadal climate predictions."


So it sounds like five years max, at the moment. We'll see.

Saturday, April 9, 2011

TempLS Ver 2.1 release

This is the first release for a while. The structure is still much like ver 2, and the math basis is the same. The main differences are:
  • The code has been internally reorganized. Most variables have been gathered into R-style lists, and more code has been moved into functions, to clarify the underlying structure of the program.
  • The new work on area-based weighting has been incorporated.
  • Other work has been done with weighting. In particular, subsets of stations can be weighted differently relative to each other (a generic sketch of the idea follows this list).
  • Hooks have been added so that the adaptations for the Antarctic studies can be included.
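
To show the generic idea of that relative weighting in a few lines of R (an illustration of the idea only, not TempLS's actual code or variable names):

inv <- data.frame(id = 1:5, rural = c(TRUE, FALSE, TRUE, TRUE, FALSE))  # toy station inventory
wts <- rep(1, nrow(inv))              # start from equal station weights
wts[inv$rural] <- 2 * wts[inv$rural]  # up-weight one subset relative to the other
wts <- wts / sum(wts)                 # renormalize
wts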


The preprocessor which merges and converts data files available on the web to a standard TempLS format is unchanged. Data and inventory files from Ver 2 will still work.

A reminder of how TempLS is used: in an R environment, a jobname is defined, and a .r file with that name is created which defines some variables relevant to the problem at hand. This may have several consecutive runs defined. The variables are variants of supplied defaults.

Then source("TempLSv2.1.r") solves and writes data and jpg files to a directory jobname.

The files you need are in a zip file, TempLSv2.1.zip. This 16 MB file contains preprocessed data sets, so you won't need to use the preprocessor. I've had to separate the GSOD data; GSOD.zip is another 7 MB.

If you just want to look at the code and information (everything readable), there is a 170 KB TempLS2.1_small.zip. A ReadMe.txt file is included. There is also a partly updated manual from V2.0, most of which is still applicable. More details are below, and past files on TempLS can be found on the Moyhu index. I'll describe below the things that are new.

Monday, April 4, 2011

Blogger's spam filter

In about September last year, Blogger, the Google outfit that hosts this site, introduced a new spam filter. Until that time I had been very happy with them. I still am, in most respects. But the spam filter is making dialogue on the site impossible. It is supposed to learn from the liberations that I make, but it seems to be just getting worse. I have not yet been blest with a single item of real spam, but about one in three genuine comments goes to the bin for no obvious reason at all.

The problem is compounded because, since I'm on Australian time, they can sit in the spam bin for hours before I can fix it.

I've done what I can to raise the problem with Blogger. They don't offer direct communication, but delegate feedback to a forum. The response is that, no, you don't get a choice here; your sacrifice is for the general good. We seem to be conscripted into an experiment whereby Google uses our rescue activity to improve its spam database.

So I'm looking at Wordpress. Blogger has just announced a revamp, but from their aggressive attitude about the filter, I don't see much hope. I'll wait a few days longer, and listen to advice on whether Wordpress is likely to be better, but it's looking inevitable.