This is a follow-up post to the previous post on the pending paper:
"Assessing the consistency between short-term global temperature trends in observations and climate model projections"
by Patrick Michaels, Chip Knappenberger, John Christy, Chad Herman, Lucia Liljegren and James Annan
I'm calling it the Knappenberger study because the only hard information I have is Chip's talk at the ICCC meeting. But James Annan has confirmed that Chip's plots, if not the language, are from the paper.
A fallacy is likely because, as I showed in the previous post, the picture presented there looks considerably different after just four months of new readings. Scientific truth should at least be durable enough to outlast the publication process.
The major fallacy
Chip's talk did not provide an explicit measure of the statistical significance of their claim of non-warming, despite hints that this was the aim. The main message we're meant to take away, according to James, is:
"the obs are near the bottom end of the model range"
And that's certainly what the plots suggest - the indices are scraping against that big black 95% level. This is explicit in Chip's slide 11:
"In the HadCRUT, RSS, and UAH observed datasets, the current trends of length 8, 12, and 13 years are expected from the models to occur with a probability of less than 1 in 20. "
But here's the fallacy - that 95% range is not a measure of the expected spread of the observations. It expresses the likelihood that a model output will be that far from the central measure of this particular selection of models. It measures computational variability, and may include some measure of the spread of model bias. But it includes nothing of the variability of actual measured weather.
The GISS etc indices of course include measurement uncertainty, which the models don't have. But they also include lots of physical effects which, as is well known, the models can't predict - eg volcanoes, ENSO etc. There haven't been big volcanoes lately, but small ones have an effect too. And that's the main reason why this particular graph looks wobbly as new data arrives. That weather variability is not in the 95% range, and it's big.
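To get a feel for how much that missing weather term can matter, here is a rough sketch (my own illustrative numbers, not taken from the paper) that simulates ENSO-style AR(1) monthly noise, asks how much scatter the noise alone puts into 8-year trends, and adds it to an assumed model-only spread:

```python
import numpy as np

# Illustrative sketch only - made-up numbers, not from the paper.
# How much scatter does AR(1) "weather" noise alone put into 8-year trends,
# and how does that compare with an assumed model-only spread?
rng = np.random.default_rng(0)

n_years = 8
sd_model = 0.10   # assumed spread of 8-yr trends across model runs (degC/decade)
sd_innov = 0.12   # assumed s.d. of monthly weather-noise innovations (degC)
phi = 0.6         # assumed month-to-month autocorrelation of the noise

def trend_per_decade(y):
    """OLS trend of a monthly series, returned in degC per decade."""
    t = np.arange(len(y)) / 12.0      # time in years
    return np.polyfit(t, y, 1)[0] * 10.0

n_sim = 5000
n_months = n_years * 12
trends = np.empty(n_sim)
for i in range(n_sim):
    e = rng.normal(0.0, sd_innov, n_months)
    y = np.empty(n_months)
    y[0] = e[0]
    for k in range(1, n_months):
        y[k] = phi * y[k - 1] + e[k]
    trends[i] = trend_per_decade(y)

sd_weather = trends.std()
sd_combined = np.hypot(sd_model, sd_weather)  # variances of independent sources add

print(f"model-only spread of 8-yr trends:    {sd_model:.3f} degC/decade")
print(f"weather-noise spread of 8-yr trends: {sd_weather:.3f} degC/decade")
print(f"combined spread:                     {sd_combined:.3f} degC/decade")
```

With these made-up parameters the weather term comes out comparable in size to the model-only spread; the last line is just the point that variances of independent contributions add.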
Sources of deviation
I posted back in Feb on testing GCMs with observed temperature series. Three major sources of likely discrepancy were identified:
- Noise in measured weather
- Noise in modelling - unpredictable fluctuations
- Uncertainty from model selection
I showed plots which separated the various effects. But the bottom line is that the measured trends and the model population means both have variance, and to compare them statistically, you have to take account of combined variance (as in a t-test for means).
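Here is a minimal sketch of that combined-variance comparison, in the spirit of the t-test analogy; the observed trend, its standard error, and the model-trend values are placeholder numbers for the example, not anything from the paper:

```python
import numpy as np
from scipy import stats

# Sketch of a combined-variance comparison (placeholder numbers, not the
# paper's data). obs_trend/obs_se: an observed 8-yr trend and its standard
# error from weather noise plus measurement uncertainty, degC/decade.
obs_trend, obs_se = 0.05, 0.12
model_trends = np.array([0.18, 0.22, 0.15, 0.25, 0.20, 0.17, 0.28, 0.21])

model_mean = model_trends.mean()
model_sd = model_trends.std(ddof=1)   # spread of trends across runs

# Too narrow: judge the observation against the model spread alone.
z_model_only = (obs_trend - model_mean) / model_sd

# Combined: the gap between two uncertain quantities carries both variances.
z_combined = (obs_trend - model_mean) / np.hypot(obs_se, model_sd)

for label, z in [("model spread only", z_model_only),
                 ("combined variance", z_combined)]:
    p = 2 * stats.norm.sf(abs(z))
    print(f"{label}: z = {z:.2f}, two-sided p ~ {p:.3f}")
```

The point is entirely in the denominator: judging the gap against the model spread alone makes it look far more significant than the combined variance warrants.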
I railed against Lucia's falsifications of "IPCC projections" a couple of years ago. A big issue was that Lucia was then taking account of weather noise, but not model uncertainty. The result is that something that was then "falsified" is no longer false - the total variability had been underestimated. The same effect is being seen here in reverse (model noise, but no weather noise).
Estimating model uncertainty
I don't know in detail how the probability levels on Chip's slides were calculated. But it's hard, because model runs don't form a defined population subject to random fluctuations. They are chosen, and with fuzzy criteria. Individual runs have fluctuations that you can estimate, but there's no reason to suppose that across models they form a homogeneous population.
That is significant when it comes to interpreting the 95% levels that are quoted. As often in statistical analysis, there's no real observation of the tail frequencies. Instead, central moments are calculated from observation, and tail probabilities quoted as if the distribution is normal.
Normality is hard to verify, and even if verified for the central part of the distribution, it's still a leap to apply that to the tail. The unspoken basis for that leap is some variant of the central limit theorem. If getting into the tail requires the conjunction of a number of independent happenings, then it's a reasonable guess.
But if the occurrence of a tail value depends on simple selection (of model run), then even if the scatter looks centrally bell-shaped, as in Chip's slide 5, the reason for thinking the tail fades away as quickly as a normal distribution would isn't really there. The slide does note, correctly, that the points on the histogram are not independent.
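As an illustration of how much a quoted tail probability leans on the shape assumption (a toy example of my own, nothing to do with the paper's data), fit a mean and spread to a modest sample, then compare the normal tail with a heavier-tailed alternative scaled to the same spread, and with the raw empirical count:

```python
import numpy as np
from scipy import stats

# Toy example (not the paper's data): same fitted mean and spread,
# noticeably different tail probabilities depending on the assumed shape.
rng = np.random.default_rng(1)
sample = rng.standard_t(df=4, size=200)   # stand-in for an ensemble of model trends

mu, sd = sample.mean(), sample.std(ddof=1)
x = mu - 2.5 * sd                         # a value well out in the lower tail

# Normal assumption, using only the fitted central moments.
p_normal = stats.norm.cdf(x, loc=mu, scale=sd)

# Heavier-tailed alternative: a t distribution rescaled to the same s.d.
df = 4
scale_t = sd / np.sqrt(df / (df - 2))
p_heavy = stats.t.cdf(x, df, loc=mu, scale=scale_t)

# Raw empirical count - usually little or no data this far out.
p_empirical = (sample <= x).mean()

print(f"assuming a normal tail:   {p_normal:.4f}")
print(f"assuming a t(df=4) tail:  {p_heavy:.4f}")
print(f"empirical fraction <= x:  {p_empirical:.4f}")
```

The central fit is identical in each case; all that differs is the shape assumed for the tail, and that assumption is exactly what a hand-picked selection of model runs can't verify.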