## Tuesday, September 13, 2011

### FFT, impulse response, clouds, GMST

There has been a lot of interest in a thread at CA which started out looking at correlations between surface temperature and a figure that notionally (and unreliably) expresses the part of TOA radiant energy flux attributable to clouds. There is a contention, favored by Spencer and Lindzen, that ST may be affecting clouds - a kind of feedback, though it is sometimes spoken of as a forcing.

There's lots to be said about whether these notions are true, but in this post I want to talk about a kind of analysis proposed by commenter Bart at that site.

Instead of looking at correlations, he proposed in effect to find an impulse response function which will transform temperature (T) into that quantity ΔR_cloud (or dR) by convolution:
$dR(t) = \int_{-\infty}^\infty h(t-\tau) T(\tau) d\tau$

You can think of this as being like applying a smoothing filter, except that it doesn't smooth - it turns T into dR. The basic idea is that Fourier transformation turns that convolution into a product. Then you can work out the FT $\hat{h}$ from
$\hat{h} = \hat{dR}/ \hat{T}$
invert the FT, and it's done. Along the way there are lots of complex numbers, but the answer comes out real.
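The post's actual calculations were done in R; here is a minimal numpy sketch of the same frequency-domain division, on synthetic series (all names and values here are hypothetical, just to show the algebra):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
T = rng.standard_normal(n)                 # stand-in for the temperature series
h_true = np.zeros(n)
h_true[:5] = [0.5, 0.3, 0.1, 0.05, 0.02]   # a short "true" impulse response

# dR generated by circular convolution of h_true with T
dR = np.real(np.fft.ifft(np.fft.fft(h_true) * np.fft.fft(T)))

# Recover h by division in the frequency domain, then invert the FT
h_hat = np.fft.fft(dR) / np.fft.fft(T)
h = np.real(np.fft.ifft(h_hat))            # imaginary parts are only roundoff
```

The recovered h matches h_true to roundoff, because the division exactly undoes the convolution - which is also why, as discussed later, recovering dR from h proves nothing by itself.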

#### The papers

Discussion has been stirred by a recent paper, SB11 (Spencer and Braswell 2011), discussed here, here and here, and a
response, D11, by Dessler. But the CA thread was about an earlier paper, D10, by Dessler (discussed here). The links here will lead you to other papers in the series.

#### The data

The data file is flux.csv here. It covers about 10 years. Col 9 is the surface temperature (HadCRUT3), col 5 is TOA flux measured by CERES, and col 8 is a notional clear-sky value. The cloud effect dR is obtained by subtracting col 8 from col 5. Dessler points out that this is biased, because it in effect extrapolates from clear patches of sky to the whole atmosphere, and the air there is much drier than the global average, so there is less water vapor IR absorption. He used reanalysis data which corrects for this (but may bring other problems).

#### Causality

The idea of impulse response comes from a known causal relation. You hit a gong with a hammer - you get a response (afterwards). But fitting an impulse response by FFT does not require any kind of causality. You could replace dR by births in Utah.

So convolving with h will pull in temp numbers from the past and from the future. It could well be that past values of dR affect present T - in fact, this is the conventional view. It's not actually clear, once you've found h (as you always can), how to decide what it means.

#### FFT, truncation and periodicity

This became a subtopic at CA. I've expressed the concept of impulse response with an infinite integral. But there are two dual problems. You can't compute over an infinite range. And you have to have a finite sample width.

While the relation we explore goes on indefinitely, we only see it for a short time. In FT terms, it helps to pretend that we really have indefinite data, but we've multiplied by a gate function to get the window that we see. The reason that it helps is that the gate function multiplication turns up as a convolution in the frequency domain, which explains a lot of things. And you can choose the form of gate function. Instead of a sharp termination, you can taper. This means some loss of data utility, but avoids spurious effects in the medium/high frequency range.
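The effect of the gate function can be seen in a small numpy sketch (hypothetical numbers, not the post's data): a sinusoid that doesn't complete a whole number of cycles in the window leaks badly under a sharp gate, much less under a Hann taper.

```python
import numpy as np

n = 256
t = np.arange(n)
# A sinusoid whose frequency falls between DFT bins - the worst case for leakage
x = np.sin(2 * np.pi * 10.5 * t / n)

rect = np.abs(np.fft.rfft(x))                  # sharp gate: heavy spectral leakage
hann = np.abs(np.fft.rfft(x * np.hanning(n)))  # tapered gate: leakage suppressed

# Energy far from the signal frequency (bins 30 and up) is the spurious part
far_rect = rect[30:].max()
far_hann = hann[30:].max()
```

The tapered spectrum's far-field sidelobes are orders of magnitude smaller - that's the "avoids spurious effects in the medium/high frequency range" trade, paid for with some loss of data near the window edges.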

FFT is a way of implementing the discrete Fourier transform (DFT). This neatly deals with both sampling and range issues by implementing the FT as a Fourier series. Here's how I think about that.

You're trying to do something on an infinite line in space (denoting time). But you only see a stretch of 500 km. Suppose you regard it as part of a circle. The Equator is a familiar one. So you put your data on that and sample over longitudes.

But you'll think of this as a line segment, measured by longitude. And you need an IDL (Date Line). You'll probably put this just to one side of the data period.
(This analogy would work better if we started Lon at 0 at the IDL,  ie Lon+180)

Then you say that the rest of the data on the equator is zero (padding). You divide into discrete longitude intervals, say 8192. And you sample frequency by starting with a sinusoid whose period is the Earth's circumference, and then its 8192 harmonics.
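In code the padding step is trivial - here's a numpy sketch (the 120-month series and its values are made up for illustration; the post pads to 8192 points):

```python
import numpy as np

data = np.sin(np.arange(120) * 0.3)   # ~10 years of monthly data, for illustration
N = 8192                              # the full "equator"
padded = np.zeros(N)
padded[:len(data)] = data             # the rest of the circle is zero

spec = np.fft.fft(padded)             # one coefficient per harmonic of the base period
```

The spectrum now has 8192 coefficients, harmonics of a fundamental whose period is the whole circle, even though the data occupies only a small arc of it.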

The impulse response will also be expressed on the equator, and will wrap around. And the convolution integral will be evaluated wrapped around as well.

It's a bit harder to think of frequencies as being periodic. We go through the range from a wavelength of 40,000 km down to 5 km. And 5km is about our sampling length. Surely that 5km wave is very different to the 40,000 km and couldn't be adjacent in a periodic measure?

The answer is beats - or strobing. The trig functions are the underlying theory, but numerically all that counts is what is sampled. Those highest frequencies almost match the sampling frequency, but not quite. The difference is what is sampled, and it is indeed a low negative frequency.
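The wrap-around of frequencies is easy to verify numerically (a small sketch with hypothetical sizes): the highest harmonic, sampled, is indistinguishable from a slow wave running backwards.

```python
import numpy as np

N = 64
n = np.arange(N)
# Harmonic 63 of 64: almost one full cycle per sample
high = np.exp(2j * np.pi * 63 * n / N)
# At the sample points it equals frequency -1: one slow cycle, backwards
low = np.exp(-2j * np.pi * 1 * n / N)
```

The two arrays are identical at every sample, which is exactly the beat/strobing argument: only the sampled values count, and there harmonic N-1 *is* frequency -1.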

#### The impulse response

Let's now look at the impulse response. The first thing you'll notice is that it's very noisy, and isn't really localised. Why?

You might expect it to be localised, because over most of its range it should produce a zero response on convolution, and zero values would do that.

But in fact it just needs to be orthogonal to the T data. That's just one vector in an 8192-dimensional space. h can wriggle as much as it likes, subject only to that very modest constraint.

So we'll need to smooth it to see anything. Smoothing generally violates that orthogonality, so h won't work exactly as an impulse response afterward. Could the smoothing respect the orthogonality? Yes, more sophisticated filters would help; here it's just triangular. So here are the plots. In terms of the equator notion, I've shifted the date line to the other side of the world from where we have data, and I'm looking at a central window in the plot. Smoothed on the right with a 10-month triangle filter.

The smoothed version was smoothed across the "date line". The behaviour is somewhat different on each side, which I'd like to understand better. So these are the impulse responses that will be used for convolutions.
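For concreteness, a triangle filter of the kind used here can be sketched in numpy like so (the noisy series below is synthetic, standing in for the raw h):

```python
import numpy as np

def triangle_filter(width):
    # Symmetric triangular weights, normalised to sum to 1
    w = np.concatenate([np.arange(1, width // 2 + 1),
                        np.arange(width // 2, 0, -1)]).astype(float)
    return w / w.sum()

rng = np.random.default_rng(1)
h = rng.standard_normal(200)          # a noisy stand-in for the impulse response
w = triangle_filter(10)               # the 10-month triangle
h_smooth = np.convolve(h, w, mode="same")
```

The smoothed series has much less wiggle, but (as noted above) np.convolve with mode="same" is a linear, not circular, smoother, so applied near the "date line" it would behave differently again.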

#### Bart's h taper

I don't want to make too much of this, as I don't think it's responsible for any seriously incorrect results. But it doesn't help, and it muddies interpretation of the results. In Equator terms, suppose we did have data for 500 km to the E of the IDL. Then Bart said, OK, the impulse response doesn't mean anything far away, so we'll taper it off over S America. But his taper continued right around to the IDL. The roll-off was a Hann filter, which is quite good. But the revised smoothed plot now looks like:

#### Regenerating with convolution

As commenter Steve noted at CA, the test is how well these work when actually convolved with T.

There are some subtleties here in the conventions used with convolve functions. Basically, which way around h is to be used, and taking account of the fact that we are on a circle. R has a convolve function, and you need the options type="circular" and conj=F.
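The Python analogue of that circular convolution (a sketch, not the post's actual R code - `circular_convolve` is a name made up here) is a product of FFTs, and it shows why the unaltered h reproduces dR exactly:

```python
import numpy as np

def circular_convolve(h, T):
    # Wrap-around convolution: the analogue of R's
    # convolve(h, T, type="circular", conj=FALSE)
    return np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(T)))

rng = np.random.default_rng(2)
T = rng.standard_normal(64)           # synthetic stand-ins for the real series
dR = rng.standard_normal(64)
h = np.real(np.fft.ifft(np.fft.fft(dR) / np.fft.fft(T)))

recon = circular_convolve(h, T)       # reproduces dR to roundoff
```

Note the conj=FALSE/forward-product convention: with conj=TRUE R computes a correlation instead, which flips h around and gives the wrong answer here.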

But then, the original unaltered h does give exactly dR. That proves nothing except that the algebra worked. The smoothed version is rather different, and to me says that we need better smoothing. And the Bart truncation makes a significant difference - probably more than smoothing.

In these plots, look to the legends for the colors. Because the red reconstructed dR is exact (to plotting accuracy) I've widened the black dR so it doesn't totally disappear under the red.

#### Regression coefficients

The Dessler-style analysis does a regression to say that dR is proportional to T, and the proportionality becomes identified as a feedback factor. Here that is as if the h were compressed into a big spike, and you looked at the area underneath. That's effectively what Bart is doing when he does another FFT and looks at the zero value. He got -9.4 W/m2/°C. If it weren't for the taper, that FFT would be redundant; he could have looked at the f=0 value of the $\hat{h}$ derived originally. Without the taper, that returns -12.22 W/m2/°C. As he points out, decidedly negative. That's in the direction of the SB and LC interpretations, though it doesn't necessarily contradict Dessler.

But that gives an important clue to what these numbers mean. The data has mean zero. The low frequency end of the FT is actually a moment-generating function. It returns the moments of the data set, and at VLF, that means the first - the regression slope.

So this number -12.22 W/m2/°C is actually found by doing time regressions of dR and T over the 10-yr window. If you do this, you get:

OLS gradient of temperature: 0.000412 °C/month
OLS gradient of dR: -0.005039 W/m2/month
Divide the dR slope by the T slope and (using the unrounded slopes) you recover, exactly,
-0.005039/0.000412 = -12.22 W/m2/°C

But you'd get such a number from any two series. It doesn't imply feedback on its own. To show that, you'd need several points exhibiting an association of change in dR with change in T.
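To see the point, take two synthetic series that share nothing but a trend (the slopes below are borrowed from the numbers above purely for illustration; the noise is invented):

```python
import numpy as np

rng = np.random.default_rng(3)
months = np.arange(120.0)              # a 10-year monthly window
# Two physically unrelated series, each with a small linear trend plus noise
T = 0.000412 * months + 0.005 * rng.standard_normal(120)
dR = -0.005039 * months + 0.05 * rng.standard_normal(120)

slope_T = np.polyfit(months, T, 1)[0]
slope_dR = np.polyfit(months, dR, 1)[0]
ratio = slope_dR / slope_T             # a "feedback-like" number with no physics in it
```

The ratio comes out near -12 W/m2/°C, just as above, even though by construction nothing in dR responds to T.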

#### Enough for now

There's lots more to say - this is interesting material. I'd like to talk about different ways of dealing with the original data (Hann tapering). I'd like to see what can be said about the mid-range frequencies, where some evidence might be found. Maybe even get into Laplace transforms, which do allow you to take account of causality.

This stuff can generate a huge number of graphs. And I've been dabbling with Javascript tricks to make that more user-friendly. But that's for another day.

1. Nick:

Thanks for posting this, and for cleaning up and reposting your conversation with Bart (and others) at CA, at
http://moyhu.blogspot.com/p/spencers-data.html

The mathematics here are beyond me, I'm afraid -- most field geologists are somewhat math-impaired. But it does look like signal-processing techniques have been neglected in climate science so far. Very interesting stuff.

Cheers -- Pete Tillman
Professional geologist, amateur climatologist

2. Peter,
Thanks for that. I'm hoping to get time to do a line-by-line analysis of the code.