An old perennial in the climate wars is the adjustment of land temperature data. Stations are subject to various changes, such as moves, which lead to sustained jumps that are not due to climate. For almost any climate analysis that matters, these station records are taken to be representative of some region, so it is important to adjust for the effect of such events. GHCN therefore publishes an additional set of adjusted temperatures. These are called homogenised, the idea being that, as far as can be achieved, temperatures from different times are as if measured under like conditions. I have written about this frequently, e.g.
here,
here and
here.
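To make the idea concrete, here is a minimal sketch in Python of the kind of problem adjustment addresses. It is not GHCN's algorithm; the move date, the size of the step, and the ten-year comparison windows are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual anomalies: a gentle warming trend plus noise,
# with a (hypothetical) station move in 1960 that introduces a -1.0 C step.
years = np.arange(1900, 2000)
true_climate = 0.01 * (years - years[0])            # underlying trend, C
raw = true_climate + rng.normal(0, 0.2, years.size)
move_year = 1960
raw[years >= move_year] -= 1.0                      # artificial jump from the move

# Simple correction: estimate the jump from the means on either side of the
# (known) move date, then remove it from the later segment.
before = raw[years < move_year][-10:].mean()
after = raw[years >= move_year][:10].mean()
step = after - before
adjusted = raw.copy()
adjusted[years >= move_year] -= step

print(f"estimated step:  {step:+.2f} C")
print(f"raw trend:       {np.polyfit(years, raw, 1)[0] * 100:+.2f} C/century")
print(f"adjusted trend:  {np.polyfit(years, adjusted, 1)[0] * 100:+.2f} C/century")
```

In practice the date of the break is usually not known in advance, which is what the pairwise detection discussed further down is for.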
The contrarian tactic is to find some station that has been changed and beat the drum about rewriting history, or some such. It is usually one where the trend has changed from negative to positive. Since adjustment does change values, this can easily happen. I made a Google Maps gadget
here, which lets you see how the various GHCN stations are affected, and posted histograms
here. This blog started its life following a classic
2009 WUWT sally here, based on Darwin. That was probably the most publicised case.
There have been others, and their names are bandied around in skeptic circles as if they were Agincourt and Bannockburn. Jennifer Marohasy has, for some reason, an irrepressible bee in her bonnet about Rutherglen, and I think we'll be hearing more of it soon; I have a post on that in the pipeline. One possible response is to analyse individual cases to show why the adjustments happened. An early example came from David Wratt of NIWA, on
Wellington, showing that the key adjustment coincided with a site move involving a big altitude shift (see the sketch after this paragraph). I tried
here to clear up Amberley. It's a frustrating task, because there is no acknowledgement - they just go on to something else. And sometimes there is no clear outcome, as with Rutherglen. Reykjavik, often cited, does seem to be a case where the algorithm flagged a genuine climatic change as an inhomogeneity.
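To see why an altitude shift matters, here is a back-of-envelope calculation. The numbers are assumptions for illustration (a move of roughly 120 m and a standard lapse rate of about 6.5 °C per km), not the actual NIWA metadata:

```python
# Rough arithmetic for a site move to higher ground (hypothetical numbers).
lapse_rate = 6.5 / 1000       # C per metre, a typical environmental lapse rate
altitude_change = 120         # metres higher, assumed for illustration
expected_offset = -lapse_rate * altitude_change
print(f"expected step from the move: {expected_offset:.2f} C")
# About -0.8 C: an uncorrected downward step of that size would
# noticeably depress the station's long-term trend.
```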
The search for metadata reasons is against the spirit of homogenisation as applied. The idea of the pairwise homogenisation algorithm (PHA) used by NOAA is that it should be independent of metadata and rely solely on numerical analysis. There are good reasons for this. Metadata means human intervention, with possible bias, and it also inhibits reproducibility. Homogenisation is needed because the inhomogeneities may have a bias. Global averaging is very good at suppressing noise (see
here and
here), but it is vulnerable to bias. So identifying and removing possibly biased events is worthwhile. The process comes with errors, which contribute noise, but that is a good trade-off. It may also introduce a different bias; however, because PHA is automatic, it can be tested for that on synthetic data.
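The core numerical idea can be shown in a few lines. The sketch below is not NOAA's PHA, which compares each station with many neighbours and attributes breaks among them; it only illustrates the basic move of forming a difference series with a neighbour, in which the shared climate signal cancels and a sustained shift stands out, with no metadata involved. The stations, dates, and break size are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pairwise comparison: a target station and a well-correlated neighbour
# share the same regional climate signal, so that signal cancels in the
# difference series.  An abrupt, sustained shift in the difference then
# points to a non-climatic change at one of the two stations.
years = np.arange(1950, 2010)
regional = np.cumsum(rng.normal(0, 0.1, years.size))   # shared climate signal

target = regional + rng.normal(0, 0.15, years.size)
neighbour = regional + rng.normal(0, 0.15, years.size)
target[years >= 1985] += 0.8                            # non-climatic jump

diff = target - neighbour

# Crude breakpoint search: find the split that gives the largest mean shift
# between the two halves of the difference series (t-like statistic).
best_year, best_stat = None, 0.0
for k in range(5, years.size - 5):
    a, b = diff[:k], diff[k:]
    stat = abs(a.mean() - b.mean()) / np.sqrt(a.var(ddof=1)/len(a) + b.var(ddof=1)/len(b))
    if stat > best_stat:
        best_year, best_stat = years[k], stat

print(f"most likely breakpoint: {best_year} (shift statistic {best_stat:.1f})")
```

The real algorithm repeats this over many station pairs and uses the pattern of which pairs show a break to attribute it to a particular station; the point here is just that the detection uses the numbers alone.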
So, with that preliminary, we come to Cape Town. There have been rumblings about this from Philip Lloyd at WUWT, most recently
here. Sou dealt with it
here, and Tamino touched on it
here, and an earlier occurrence
here. It turns out that it can be completely resolved with metadata, as I explain at WUWT
here. It's quite interesting, and I have found out more, which I'll describe below the jump.