## Thursday, May 27, 2010

### Fallacy in the Knappenberger et al study

This is a follow-up post to the previous post on the pending paper:

“Assessing the consistency between short-term global temperature trends in observations and climate model projections"

by Patrick Michaels, Chip Knappenberger, John Christy, Chad Herman, Lucia Liljegren and James Annan

I'm calling it the Knappenberger study because the only hard information I have is  Chip's talk at the ICCC meeting. But James Annan has confirmed that Chip's plots, if not the language, are from the paper.

I say fallacy because, as I showed in the previous post, the picture presented there looks considerably different after just four months of new readings. Scientific truth should at least be durable enough to outlast the publication process.

#### The major fallacy

Chip's talk did not provide an explicit measure of the statistical significance of their claim of non-warming, despite hints that this was the aim. The main message we're meant to take, according to James, is
"the obs are near the bottom end of the model range"
And that's certainly what the plots suggest - the indices are scraping against that big black 95% level. This is explicit in Chip's slide 11:
"In the HadCRUT, RSS, and UAH observed datasets, the current trends of length 8, 12, and 13 years are expected from the models to occur with a probability of less than 1 in 20. "

But here's the fallacy - that 95% range is not a measure of expected spread of the observations. It expresses the likelihood that a model output will be that far from the central measure of this particular selection of models. It measures computational variability and may include some measure of spread of model bias. But it includes nothing of the variability of actual measured weather.

The GISS etc indices of course include measurement uncertainty, which the models don't have. But they also include lots of physical effects which it is well-known that the models can't predict - eg volcanoes, ENSO etc. There haven't been big vocanoes lately, but small ones have an effect too. And that's the main reason why this particular graph looks wobbly as new data arrives. Weather variability is not there, and it's big.
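The point can be illustrated with a toy Monte Carlo (a Python sketch with made-up noise levels, not the paper's method): short-term trends computed from smooth model-like series spread far less than trends from series carrying realistic month-to-month weather noise.

```python
import numpy as np

rng = np.random.default_rng(0)
months = 8 * 12                      # an 8-year trend window
t = np.arange(months) / 120.0        # time in decades
true_trend = 0.2                     # assumed forced trend, C/decade

def trend(y):
    """OLS trend in C/decade."""
    return np.polyfit(t, y, 1)[0]

# "Model-like" runs: forced trend plus only small scatter
model_trends = [trend(true_trend * t + rng.normal(0, 0.05, months))
                for _ in range(1000)]
# "Obs-like" runs: same trend plus larger weather noise (ENSO etc.)
obs_trends = [trend(true_trend * t + rng.normal(0, 0.15, months))
              for _ in range(1000)]

# The obs-like trend spread is roughly three times wider
print(np.std(model_trends), np.std(obs_trends))
```

With these (invented) noise levels the observed-style spread is about three times that of the model-style spread, so a 95% band fitted to the latter says little about where a real-world trend should fall.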

## Tuesday, May 25, 2010

### What a difference four months makes!

Deep Climate, at Tamino's, noted that one of the interesting talks at the ICCC meeting was by Chip Knappenberger. It foreshadows a paper submitted to GRL by an eclectic group of authors:
“Assessing the consistency between short-term global temperature trends in observations and climate model projections"
by Patrick Michaels, John Christy, Chad Herman, Lucia Liljegren and James Annan

I presume Chip is an author too, although it isn't entirely clear. Anyway, he shows some plots, apparently from the paper, of temperature trends measured back from the present over periods of five to fifteen years. These are compared with model predictions, with the general idea of suggesting that the models have been overpredicting warming. In fact, the talk pauses a couple of times to examine the phrase, written in very large font:

### Global warming has stopped (or at least greatly slowed) and this is fast becoming a problem.

Present for this paper means end 2009. So I thought it might be interesting to update with four months more data.
(Updated discussion below)

Here are the two relevant slides from the talk. They split into land instrumental and satellite:

And here are my updates, moving the starting point forward four months. I haven't seriously tried to calculate the probability bounds - they just roughly follow the original for visual comparison.
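For the record, the trend-vs-period curves are just trailing least-squares slopes over windows of each length. A minimal Python sketch (synthetic data standing in for the monthly index anomalies):

```python
import numpy as np

def trailing_trends(anom, min_years=5, max_years=15):
    """OLS trend (C/decade) over each trailing window ending at the
    last month of the monthly anomaly series `anom`."""
    out = {}
    for yrs in range(min_years, max_years + 1):
        n = yrs * 12
        y = anom[-n:]
        t = np.arange(n) / 120.0     # time in decades
        out[yrs] = np.polyfit(t, y, 1)[0]
    return out

# Synthetic illustration: 0.15 C/decade warming plus noise
rng = np.random.default_rng(1)
months = 15 * 12
series = 0.15 * np.arange(months) / 120.0 + rng.normal(0, 0.1, months)
print(trailing_trends(series))
```

Moving the endpoint forward four months means appending four values and recomputing; for the shortest windows those four months carry real weight, which is why the curves move so much.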

Following Carrot Eater's suggestion, the plot is animated, with the higher value using the more recent data.

Update:
The new data shows that:
1. GISS trend is positive in the range
2. NCDC is mixed
3. UAH is quite positive
Not really a basis for concluding that warming has stopped. And after a few more months of warmth....?

#### Looking forward

To see what the plots might look like by end 2010 (when this paper might appear), I calculated the same trend diagram assuming that each coming month, for each index, was as warm as April 2010. Here are the plots, with the old trends shown as thinner curves:
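The projection is simple to reproduce: pad the monthly series with the April 2010 value and recompute the trailing trends. A Python sketch with stand-in data:

```python
import numpy as np

# Stand-in for a monthly anomaly series ending April 2010
rng = np.random.default_rng(2)
anom = rng.normal(0.4, 0.1, 13 * 12)
apr2010 = anom[-1]

# Assume May-December 2010 (8 months) are each as warm as April 2010
extended = np.concatenate([anom, np.full(8, apr2010)])

def trend(y):
    """OLS trend in C/decade over the whole window."""
    t = np.arange(len(y)) / 120.0
    return np.polyfit(t, y, 1)[0]

# 8-year trailing trend before and after the assumed-warm extension
print(trend(anom[-96:]), trend(extended[-96:]))
```

The same recomputation is then done for each window length from five to fifteen years, giving the thinner-vs-thicker curve comparison in the plots.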

As you'll see, not only are all the trend curves decidedly positive (warming), but they move close to the central value of the models.

## Sunday, May 9, 2010

### The Greenhouse Effect and the Adiabatic Lapse Rate

This post is prompted by recent posts by Steve Goddard on WUWT about the GHE and the lapse rate on Venus. They muddle the effects, in a way that is quite often seen in the blogosphere. The meme is that surface warming is due to the lapse rate and not to the GHE. Often on WUWT this comes down to even more simplified assertions that warming is due to atmospheric pressure.

The fact is that the dry adiabatic lapse rate, and the mechanism that creates it, are an intrinsic part of the greenhouse effect that causes warming at the surface. More below the jump.
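For reference, the dry adiabatic lapse rate itself comes straight from hydrostatic balance and the first law applied to an adiabatically rising parcel: Γ = g/c_p. A two-line check:

```python
# Dry adiabatic lapse rate: Gamma = g / c_p
g = 9.81                  # gravitational acceleration, m/s^2
cp = 1004.0               # specific heat of dry air at constant pressure, J/(kg K)
gamma = g / cp * 1000.0   # convert K/m to K/km
print(round(gamma, 2))    # about 9.77 K/km
```

The point of the post is that this gradient sets how temperature falls with height, but it is the greenhouse effect that sets the altitude from which radiation escapes to space; the two together, not pressure alone, determine the surface temperature.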

## Saturday, May 8, 2010

### Scale of spatial correlation

I've been trying to follow up on the discussion here of whether the Hansen/Lebedeff claim of correlation of temperatures (long-term) over 1200 km is reasonable. I mentioned kriging as one line to follow.

Kriging is a method for interpolating from a rather random distribution of spatial information. The original application was to mining and borehole information.

You have a spread of readings and would like to know something about the mineral field in between; you'd probably like to know where to drill next. You want a weighted formula which takes account of the fact that the further away the readings are, the more likely they are to be influenced by just random noise. You want to balance the desire for a lot of readings with the need to value closer information more highly.

In R there's a kriging routine in the package geoR. But it seems to be oriented to the distribution of a single variable, whereas we typically have a time series at each point. In mineral exploration, a reasonable analogy is a core profile, and there must be stuff for that.

Or I could try just using trend as the single variable. A problem there is that we don't have a uniform trend period.

Kriging depends on estimating spatial correlation, and this is often done by fitting the parameters of a variogram. This rather comes back to the weighted average idea climate people use. For example, GISS uses a linear taper weight function, held at zero beyond the bounding radius, which could be regarded as the parameter.
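That GISS taper is easy to state; here is a one-line Python version (1200 km being the Hansen/Lebedeff bounding radius):

```python
def giss_weight(d_km, radius_km=1200.0):
    """GISS-style linear taper: weight falls linearly from 1 at the
    station to 0 at the bounding radius, and is zero beyond it."""
    return max(0.0, 1.0 - d_km / radius_km)

print(giss_weight(0), giss_weight(600), giss_weight(1500))  # 1.0 0.5 0.0
```

Treating the radius as the fitted parameter is what the exploratory analysis below does, by varying it and watching the results.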

There are a number of commonly used functions for variogram fitting, of a generally Gaussian shape. The conical taper, with its discontinuous derivative, is not one of them, and with good reason: the fitting algorithms usually involve minimising with derivatives.

Anyway, I thought I'd at least do some exploratory analysis varying the radius. Details below the jump.

## Saturday, May 1, 2010

### Just 60 stations?

Eric Steig at Jeff Id's site said that you should be able to capture global trends with just 60 well-chosen sites. Discussion ensued, and Steve Hempell suggested that this should be done on some of the other codes that are around. So I've given it a try, using V1.4 of TempLS.

I looked at stations from the GHCN set that were rural, had data extending into 2010, and had more than 90 years of data in total. The selection command in TempLS was
"LongRur" = tv$endyr>2009 & tv$length>90 & tv$urban == "A",
That yielded 61 stations.
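For readers not running R, the selection is just a conjunction of three filters. A Python sketch over a made-up inventory, with field names mirroring the TempLS ones:

```python
# Hypothetical inventory rows (the real run uses the GHCN station table)
tv = [
    {"endyr": 2010, "length": 95,  "urban": "A"},
    {"endyr": 2005, "length": 120, "urban": "A"},
    {"endyr": 2012, "length": 88,  "urban": "A"},
    {"endyr": 2011, "length": 100, "urban": "A"},
]

# Same conditions as the TempLS selection: ends after 2009,
# more than 90 years of data, rural ("A")
long_rur = [s for s in tv
            if s["endyr"] > 2009 and s["length"] > 90 and s["urban"] == "A"]
print(len(long_rur))  # 2 stations pass in this toy inventory
```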

Update: This topic was revisited  here

Results and comparisons below the jump.

A summary of this series of posts is here.