There has been questioning at CA as to whether this whole approach is the right way to analyse feedback between these variables, on the grounds that it rests on a confused notion of causality. I agree with that. But here the aim is more limited.
Here's the problem. We have a variable dR (CRF) and a model for its dependence on T: \[dR(t) = \int_{-\infty}^\infty h(t-\tau)\, T(\tau)\, d\tau\] Just a convolution. It can be solved with the Fourier transform (FT), which converts convolution into a product: \[FT(dR) = FT(h) \cdot FT(T)\] and so \[h = iFT(FT(dR)/FT(T)),\] iFT being the inverse FT. Of course, all this is implemented with the discrete Fourier transform (DFT), and in practice with the FFT.
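As a sketch of that recipe (in Python/numpy for convenience rather than R, with synthetic data; h_true, T and all names here are my own stand-ins, not Bart's):

```python
import numpy as np

# Synthetic stand-ins: a short "true" response and a random input series
rng = np.random.default_rng(0)
N = 128
T = rng.normal(size=N)
h_true = np.zeros(N)
h_true[:4] = [0.5, 0.3, 0.15, 0.05]
dR = np.real(np.fft.ifft(np.fft.fft(h_true) * np.fft.fft(T)))

# The recipe: transform, divide, transform back
h_est = np.real(np.fft.ifft(np.fft.fft(dR) / np.fft.fft(T)))
print(np.max(np.abs(h_est - h_true)))     # tiny: h is recovered
```

When the data really are related by a convolution (and nothing else), the division recovers h to near machine precision.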
I'll analyse a simplified version of Bart's code translated to R. Here it is:
data=read.csv("flux.csv",skip=0);
data[is.na(data)]=0;
temp=data[,9];
dR=data[,5]-data[,8];
# temp and dR are the data vectors, length 124 months
N=length(dR);
dT=1/12;
Nsamp = 8192;
Npad=Nsamp-N;
X=fft(c(temp,rep(0,Npad))); # pad temp to 8192 points and FFT
Y=fft(c(dR,rep(0,Npad)));
# Inverse FFT to get the impulse response
h = Re(fft(Y/X,inv=T))/dT/Nsamp;
# now h is tapered;
Nc=Nsamp/8;
w=c(rep(1,Nc),(1+cos(pi*(1:Nc-1)/(Nc-1)))/2,rep(0,6*Nc));
H=dT*fft(h*w); # Fourier transform of tapered h;
So X and Y are duly FFT'd, with lots of padding. There are 8192 frequencies, all multiples of a base frequency of 1/8192 RPM (revs per month), and 8192 months in the time range.
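One thing that padded, circular world implies (a tiny Python/numpy sketch with toy vectors of my own): multiplying DFTs gives circular convolution, so anything that runs past the end of the record wraps around to the start.

```python
import numpy as np

N = 8                                     # tiny toy length for visibility
a = np.array([1.0, 1, 1, 1, 0, 0, 0, 0])  # length-4 pulse, padded to N
b = np.array([1.0, 1, 1, 1, 1, 1, 0, 0])  # length-6 pulse, padded to N

circ = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
lin = np.convolve(a[:4], b[:6])           # linear convolution, length 9 > N

# The sample that overhangs the end (lin[8]) wraps onto sample 0:
print(circ[0], lin[0] + lin[8])
```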
Periodicity
Now the first thing to remember with the DFT is that everything is periodic, with period 8192 units. That is because everything is expressed as a linear combination of harmonics of that base frequency. It applies to everything that emerges from an FFT or an iFFT. Being periodic, it is best visualized on a circle.

Impulse response
So back to the code. Down to the calculation of h, as an inverse FFT of Y/X, it's straightforward. And h is indeed the vector which convolves with T to give dR to near machine accuracy. To emphasise the periodicity, I'll show two periods. It looks like this:

The y-axis is scaled for the smoothed h (30 month triangular filter). The unsmoothed curve is scaled down by a factor of 40. Now I'll show a window about t=0:
There's clearly a lot of noise. The reason is basically that temp contains only 104 units of real information, and the large number of basis vectors orthogonal to it are used to fit the padding.
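That behaviour is easy to reproduce with synthetic data (a Python/numpy sketch; the series and lengths are my own stand-ins, not the real flux data): the padded deconvolution gives a noisy h that looks little like the underlying response, yet still reproduces dR essentially exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 124, 1024                    # data length and padded length (stand-ins)
T = rng.normal(size=N)
h_true = np.exp(-np.arange(N) / 3.0)
dR = np.real(np.fft.ifft(np.fft.fft(h_true) * np.fft.fft(T)))  # synthetic dR

X = np.fft.fft(np.concatenate([T, np.zeros(M - N)]))
Y = np.fft.fft(np.concatenate([dR, np.zeros(M - N)]))
h_est = np.real(np.fft.ifft(Y / X))

# h_est is noisy and differs visibly from h_true ...
err_h = np.max(np.abs(h_est[:N] - h_true))
# ... yet by construction it reproduces the padded dR essentially exactly
dR_back = np.real(np.fft.ifft(np.fft.fft(h_est) * X))
err_dR = np.max(np.abs(dR_back[:N] - dR))
print(err_h, err_dR)
```

The "fit the padding" point is visible here: the extra degrees of freedom go into making the padded product work out, not into estimating a clean response.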
Both sides matter
Now Bart says that the negative-t part of h is pure noise and can be discarded. That's not true. To show this, just run the problem with the data in reverse order; the problem has exactly the same form. What does the smoothed h then look like?

The same as before, but exactly reversed. Is the negative-t part now noise? It's exactly the same numbers as before, now on the positive side.
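This symmetry can be checked directly (a Python/numpy sketch with arbitrary synthetic series, my own construction): deconvolving the reversed series gives exactly the index-reversed h.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256
T = rng.normal(size=N)                 # arbitrary synthetic series
dR = rng.normal(size=N)

def deconv(y, x):
    return np.real(np.fft.ifft(np.fft.fft(y) / np.fft.fft(x)))

h_fwd = deconv(dR, T)
h_rev = deconv(dR[::-1], T[::-1])      # same problem, time running backwards

# h_rev is exactly h_fwd index-reversed (with the DFT's fixed point at 0)
flip = np.roll(h_fwd[::-1], 1)
print(np.max(np.abs(h_rev - flip)))
```

So whatever sat at negative t before now sits at positive t, number for number; nothing about the negative-t side marks it as noise.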
Very strange taper
So Bart wants to taper h. He creates a taper w which looks like this. Again it's periodic (there's no other domain), so I'll show 2 periods:
Max w is 1; the y axis is scaled for h, which is shown in the background.
So now there are two problems. One is that the taper cuts off a part of h that matters: the negative-t part. The result is that the tapered h no longer convolves with T to give dR. In fact, it gives this, after a little smoothing (10 month triangular):
The red curve from h exactly matches; the gold curve from tapered h does not. That's serious, because reproducing dR is what h is supposed to do. If it reproduces something else, then any results apply to that other quantity, not to dR.
Frequency response of taper
A less serious, but even more obvious, issue is the taper itself. When you cut data, you in effect multiply by a gate function, which is 0 where you cut and 1 elsewhere. The effect in the frequency domain is to convolve with the FT of the gate. You want to make as little difference as possible in the medium frequency ranges, so that FT should be narrow, like a delta function. But an actual gate (rectangular pulse) tapers off slowly in the frequency domain, as inverse frequency, often described as 6 dB/octave.

The rule is that the rate of roll-off (in dB/octave) is proportional to the order of the first derivative that does not exist (somewhere). For a gate that order is 1, since the function itself is discontinuous. For a Hann taper, which is a period of cos^2, the second derivative exists but is discontinuous at the join, so the first non-existent derivative is the third, and the roll-off is 18 dB/octave.
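A quick numerical check of those roll-off rates (Python/numpy; the window length and octave bands are my own illustrative choices):

```python
import numpy as np

M = 64                                    # window length, purely illustrative
n = np.arange(M)
gate = np.ones(M)
hann = 0.5 * (1 - np.cos(2 * np.pi * n / M))    # periodic Hann window

pad = 4096                                # zero-pad to resolve the sidelobes
G = np.abs(np.fft.rfft(gate, pad))
H = np.abs(np.fft.rfft(hann, pad))

# Drop in peak sidelobe level across one octave, well past the main lobes
gate_db = 20 * np.log10(G[256:512].max() / G[512:1024].max())
hann_db = 20 * np.log10(H[256:512].max() / H[512:1024].max())
print(gate_db, hann_db)   # the Hann falls far faster than the gate
```

The gate's sidelobes fall at roughly 6 dB over the octave, while the Hann's fall several times faster, consistent with the derivative rule above.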
Now Bart is using half a Hann taper, with an adjoining flat stretch. That would be fine, but it is spoilt by the cut at zero. In the frequency domain, his taper looks like this (magnitude); the red line has slope 6 dB/octave:
As a taper, it performs very poorly; it should attenuate at 18 dB/octave or better. Of course, that in itself is not so serious, since the taper could be omitted. But it indicates the wrongness of the thinking.
For completeness, here is the R graphics code that I used:
# For display, I stretch the variables to cover two periods.
# Then I select subsections of the range using xlim
f1=c(1:15,15:1); f1=f1/sum(f1);
q=Mod(fft(w)); q=c(q,q);
w=c(w,w);h=c(h,h);
hs=filter(h,f1,circ=T);
hm=mean(h[1024:7168]); hh=h-hm;
f1=c(1:5,5:1); f1=f1/sum(f1);
cum=cumsum(hh)*dT; cum=filter(cum-cum[Nsamp/2],f1);
x=(1-Nsamp):Nsamp;
wh=w*h; wm=w*(max(wh)/2)
xl=c(1,Nsamp,-300,300,-800,800);
graphics.off();
cl=c("black","grey","red","green")
png("post/f1.png",width=1000);
plot(x,h/40,type="l",col=cl[2],ylim=c(-2.5,1.5),xlab="Month",ylab="Response W/m2/C",
main="Impulse response CFR to temp");
lines(x,hs,col=cl[3]);
legend("topright",c("h","h smoothed"),text.col=cl[2:3])
dev.off()
png("post/f2.png",width=1000);
plot(x,h/40,type="l",xlim=xl[3:4],ylim=c(-2.5,1.5),col=cl[2],xlab="Month",ylab="Response W/m2/C",
main="Impulse response CFR to temp");
lines(x,hs,col=cl[3]);
legend("topright",c("h","h smoothed"),text.col=cl[2:3])
dev.off()
png("post/f3.png", width=1000);
plot(x,wh,type="l",col=cl[2],xlab="Month",ylab="Response W/m2/C",
main="Tapered impulse response CFR to temp, and taper");
lines(x,wm,col=cl[3]);
legend("topright",c("Tapered h","taper"),text.col=cl[2:3])
dev.off()
png("post/f4.png",width=1000);
plot(x/Nsamp,q,type="l",xlim=c(1,800)/Nsamp,log="xy",col=cl[1],xlab="Freq RPM",ylab="Mag FFT(taper)",
main="Magnitude of FFT of taper");
lines(x/Nsamp,1000/x,col=cl[3]);
dev.off()
I'm not sure I see the point in using FFTs for this problem (and in my field we are unreasonably predisposed in favour of FFTs in the first place).
FFTs were very useful for deconvolution when computing power was very limited. They're still handy on massive datasets or high dimensional data.
But for a small 1-d dataset like this, instead of messing around with filters, you might as well pick a set of basis functions with the sort of behaviour you want (e.g. smoothness, directionality in time), construct the convoluting function from the basis functions, and solve the system of equations for the coefficients of the basis functions.
Kevin C
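For what it's worth, here is a minimal sketch of that basis-function approach (Python/numpy; the exponential basis, decay times, lag cutoff and noise level are my own assumptions, not Kevin C's specification):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 124                                     # same length as the monthly data
T = rng.normal(size=N)
h_true = 0.6 * np.exp(-np.arange(N) / 4.0)  # assumed "true" response
dR = np.convolve(T, h_true)[:N] + 0.01 * rng.normal(size=N)

taus = [2.0, 4.0, 8.0]                      # assumed basis decay times
lags = np.arange(40)                        # truncate the response at 40 lags
B = np.array([np.exp(-lags / tau) for tau in taus]).T

# Design matrix: column j is T convolved with basis function j
A = np.column_stack([np.convolve(T, B[:, j])[:N] for j in range(len(taus))])
coef, *_ = np.linalg.lstsq(A, dR, rcond=None)
h_fit = B @ coef                            # smooth, one-sided fitted response
```

The smoothness and one-sidedness are built into the basis from the start, so no after-the-fact tapering is needed.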
Kevin,
It's true that nowadays one could simply solve directly any representation you choose for the convoluting function (GF).
For directionality you'd probably choose exponentials, thus recreating a Laplace transform. There are genuine difficulties about inverting Laplace transforms, for which Pade approximants are a partial answer.
Personally, I think the problem here is using methods designed for signals going down wires. We don't have wires here. Causality is rarely clearcut.
Nick,we use the same methods in acoustics and they work just fine.
We call h(tau) the impulse response function and its FT the "transfer function", and it's OK for h(tau) to be nonzero for negative tau. Systems that are linear and passive have a clear interpretation in terms of h(tau) vanishing for tau < 0, as long as the variables have a clear and definite cause-and-effect relationship.
Nonlinear, active systems (e.g., systems that have a power source, and I'm pretty sure the Sun qualifies there) can have negative delays, even if the system is physically causal and one variable is clearly dependent on the other.
Here it's even less clear-cut, because there's no reason a priori to expect either of the two variables you're relating to be the cause of the other. It seems to me they both covary depending on external stimuli. I would avoid terms like "forcing" and "feedback" here. It's often a distinction without a difference, and the real operational question is "what can you learn from a model?", not "how do you describe the model in words?"
Carrick,
Well, I hope you don't use this taper caper.
A problem is that it does imply exclusive dependence, so that changes in the output (dR) are held to be due to the input (T). That's what the convolution formula says. And long-term relations are what people want to know about, so they look at the low frequency limit. That's why this sort of analysis comes up with the measure of sensitivity being just the ratio of trends.
But the problem here is that temperature doesn't have much trend over the last decade, and CRF does. So the assumption that T is causing the CRF trend means the dependence must be very sensitive. But then the trend of T can change sign, and CRF doesn't. It all gets unrealistic. Of course, the sensible response to a situation where CRF trends and T doesn't is to allow that the CRF trend may be caused by something else.
Nick no worries there. I think your criticisms are spot on.
Just to follow up my prior comment: so that it's clear I wasn't just talking about what is possible, I actually have acoustic data (as do others in this particular area, otoacoustic emissions) where there is a known cause-effect relationship (stimulus signal, measured response) in an active, nonlinear system (the cochlea), and where a negative delay can sometimes be observed.
My only real point is that it isn't just electrical signals, of course, where this can be used. (Since our acoustic systems have direct electrical analogs, this shouldn't be surprising.)
You definitely shouldn't truncate negative frequencies, and even if you know there's causality and a linear, passive system, you can get "negative" into negative frequency bins. In this case, you can filter h(t) before transforming it back to T(f) ... the transfer function, which is usually what we're interested in, by e.g., filtering out delays that are not realizable (e.g., if I know the maximum delay in the system is 40-ms, and the minimum is 0-ms, depending on the window size, I might keep only the range -5 ms -- 45 ms before transforming back). The issue in this case is the width of the window function associated with your taper and window length.
One thing that we commonly look at before looking at h(tau) is the signal coherence. If it's very low, your measured h(tau) is meaningless, even when you know a cause-effect relationship exists.
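A minimal illustration of that coherence check (Python, using scipy.signal.coherence; the signals are synthetic, my own construction):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(4)
n = 4096
x = rng.normal(size=n)
y_linked = np.convolve(x, [0.5, 0.3, 0.2], mode="same")  # linearly driven by x
y_noise = rng.normal(size=n)                             # unrelated to x

f, c_linked = coherence(x, y_linked, nperseg=256)
f, c_noise = coherence(x, y_noise, nperseg=256)
print(c_linked.mean(), c_noise.mean())    # near 1 versus near 0
```

For the unrelated pair, the deconvolution machinery would still hand you an h(tau); the low coherence is what tells you it means nothing.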
Carrick,
ReplyDeleteI asked for Bart's reaction on CA about your last comment. He declined:
> But, I think I’ve said all that can be said, and people who understand the analysis and procedures will come to the proper conclusions. Those who do not, and do not want to, never will.
A bit before that, Mark T answered this way:
> Unbelievable. The level of ignorance in this thread and over at Nick’s is simply amazing. Carrick doesn’t know what he’s talking about and, based on some of what Nick has said, I wonder if he thinks the same as Carrick.
http://climateaudit.org/2011/09/08/more-on-dessler-2010/#comment-304024
Would you care to comment?
I will, but it's going to be useless. Other than name-calling, blank stares, and closed minds, I don't expect anybody to carry on a conversation over there.
...carry on...
Thanks willard! Now I get attacked by the denizens over there!
Following up on that thread: I had a pretty poorly written paragraph that led to confusion. Here is a stab at making that paragraph make more sense:
You definitely shouldn’t truncate negative delays, and even if you know there’s causality and a linear, passive system, you can get nonzero values into negative delays. In this case, you can filter h(t) before transforming it back to T(f) … the transfer function, which is usually what we’re interested in, by e.g., filtering out delays that are not realizable (e.g., if I know the maximum delay in the system is 40-ms, and the minimum is 0-ms, depending on the window size, I might keep only the range -5 ms — 45 ms before transforming back). The issue in this case is the width of the window function associated with your taper and window length.
[Note: in the topic sentence I was referring to frequencies rather than delays, which is nonsensical, since I was addressing the issue of impulse response functions and whether one should zero out their negative delays.]
Carrick,
Sorry for having attracted such flames on you.
It could have been easy to see through your mistaken formulation, which might very well be a boon, as it helps underline the bad faith.
As a hi-fi fan, I appreciate your explanations, albeit intuitively. I also duly note that Bart's analysis seems both to presume causality and to succeed in deriving it. At least we now know that time does not fleet backwards in our reality.
Quite frankly, I did not know you were on a hypocritical streak, so I might ask Mark T about this growing list of his.
Thank you for showing magnanimity.
All the best,
w
willard, if I may be frank, I think Bart's problem is that he's hung up on what he knows, and isn't willing to concede when he doesn't. This happens to be an area that I've spent 20+ years working on, have numerous peer-reviewed publications in, and have collected hundreds of GB of data on. And even more to the point, I've discussed this stuff (which I admit is highly nonintuitive) with perhaps a dozen very bright colleagues strategically positioned around the US at some very fine institutes. This isn't an army of one in this case.
So I might just have some idea what I'm talking about, though I can see how a guy who is used to the spherical-chicken approximation of impulse response functions might get flummoxed by it. There is a point where you sort of have a sense that if you speak, it's going to be like you're talking in Martian. This was one of those cases, which is why I was reluctant to put in my $1.20 up till now.
Unfortunately I initially made it worse by screwing up an explanation. Though with every error there is opportunity; the same applies here.
I still haven't sorted out if Bart gets causality because he assumes it, or because it is really there. That's bad on me, but I have other problems I'm working on simultaneously that I find much more interesting and engaging, like what is the true sensitivity of my microphones, how linear are they, why does one approach for calibration work and another doesn't so well, and stuff like that. (We've just got some GREAT data using my sensors and we're all pumped about it at my lab.)
It is very possible that "differing frequency bands" fixes the ills in Bart's analysis. It'd be interesting to see this properly applied and tested. I'd be mildly surprised if his large negative DC gains survive full scrutiny.
"I'd be mildly surprised if his large negative DC gains survive full scrutiny."
I don't think they have survived. I showed some calcs here. Without the taper, the dc gain is almost exactly the ratio of the trend of CRF over the period to the trend of T. With the taper, it still tracks fairly well. I think that's already a dodgy number to hang your hat on.
But the temp trend is in the denominator, and it's notoriously close to zero and fluctuating during that decade. If it goes negative, the dc gain goes big positive, as it does if you start the calc later in 2000.
Bart says that we mustn't even think of dropping data. But even without dropping any, one could go back in time. We're short of CRF data, but not temp. The trend in CRF is strong over the decade, and is not going to change sign with a few extra months of data. But if you go back to 1998, the temp trend turns negative again. The -9.4 W/m2/K just depends on what month your data happens to start.
I'm posting some followups here to a comment I made at ClimateAudit. Based on the dangling tag at the end, I may have fubared that thread.
Since I was able to comment on this question by Amac, I'll assume I managed to break Steve McIntyre's posting system, rather than getting on his nerves enough to get banned. ;-)
I'm going to restrict my comments to "negative delays" seen in measurements of the impulse response function computed via the following method:
1) input a broadband signal X(t)
2) record the system response Y(t) to X(t)
3) Compute the ratio T(f) = FT(Y)/FT(X) where FT(...) is the forward Fourier transform from time t to frequency f.
This quantity goes by the rubric "transfer function".
4) Inverse Fourier transform this to the time domain; call this new quantity h(tau) = IFT(FT(Y)/FT(X)) = IFT(T(f)).
This quantity h(tau), where "tau" is conventionally known as the delay or lag, goes by the rubric "impulse response function". The true impulse response function is the response of a system to an infinitely narrow impulse (e.g., a delta function).
Under certain circumstances h(tau) can be equated with the true impulse response function. In other circumstances, it means something different. I've laid the conditions out on the CA thread; those who have actually sat down and proved that h(tau) is an impulse response function also understand the assumptions that go into making this connection. In particular, if the system is linear and passive, h(tau) will always be the same as the "true" impulse response function, and since all physical systems obey physical (information) causality, then assuming in addition that you have measured T(f) from 0 to infinity (for real-valued quantities; otherwise you need f from -infinity to +infinity), you will never observe negative delays in the system.
Additive noise gives you nonzero signals in the negative delay spectrum—and in general in frequencies which are outside the true response band of the system—and Bart correctly notes these can be neglected.
In my software package for analyzing impulse response functions, I start with frequency domain measurements and convert them into the time domain in order to separate out the various delay components (basically filtering component by component), but I also separately apply a boxcar filter from -5 to +40 ms, since for these measurements anything outside that range isn't realizable in that system. That is a method of smoothing the signal that allows us to remove "episodic" noise (e.g., noise spikes that appear at just one frequency measurement).
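Here is a toy version of that measurement chain (Python/numpy; the record length, noise level, delay window and the pure-delay "system" are invented stand-ins, not Carrick's actual setup):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8192
x = rng.normal(size=n)                     # 1) broadband input signal X(t)
h_sys = np.zeros(n)
h_sys[5] = 1.0                             # toy system: a pure 5-sample delay
y = np.real(np.fft.ifft(np.fft.fft(h_sys) * np.fft.fft(x)))
y += 0.1 * rng.normal(size=n)              # 2) measured response Y(t), noisy

Tf = np.fft.fft(y) / np.fft.fft(x)         # 3) transfer function T(f)
h = np.real(np.fft.ifft(Tf))               # 4) impulse response h(tau)

# Boxcar in the delay domain: keep only -5 .. +40 samples (negative delays
# sit at the end of the array because of the DFT's circular indexing)
keep = np.zeros(n)
keep[:41] = 1.0
keep[-5:] = 1.0
Tf_clean = np.fft.fft(h * keep)            # back to a cleaned transfer function
```

Note the boxcar retains a small negative-delay range rather than cutting exactly at zero, for the reasons discussed above.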
This is the second part of my comment on negative delays and their interpretation.
Here is an example of a paper that studies an active system with a negative group delay.
Link to key figure:
figure.
Note that the measured signal is advanced (negative delay) relative to the generated signal.
Negative delays are real and consequences of the behavior of active systems that have a transfer function which has an "advancing" phase.
He didn't show it, but it would be interesting to see how his system responds to a pulse. You won't (of course) get a response prior to the generated signal. Information causality is not violated here, just an assumption that relates continuous signals to impulsive ones.
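For a concrete toy of that "advancing phase" (my own example, not from the paper): even a perfectly causal two-tap FIR filter can show negative group delay at low frequency.

```python
import numpy as np

h = np.array([1.5, -0.5])                 # causal two-tap FIR (toy example)
w = np.linspace(1e-4, 0.1, 50)            # low angular frequencies (rad/sample)
Hw = h[0] + h[1] * np.exp(-1j * w)        # frequency response
phase = np.unwrap(np.angle(Hw))
group_delay = -np.gradient(phase, w)      # in samples; negative = advance

print(group_delay[0])                     # about -0.5 samples
```

The filter only ever uses present and past samples, yet the envelope of a narrowband signal comes out slightly advanced; no information causality is violated.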
OK, this is the last comment I wanted to make on this. It sounds like we're starting to get other people, like MarkT, admitting that negative delays are physical in h(tau).
I think we can call this "progress".
Carrick,
As promised, I submitted my questions to MarkT:
http://climateaudit.org/2011/09/08/more-on-dessler-2010/#comment-304578
To save you a click, here is the comment:
***
Mark T,
You said something to Carrick that seems to deserve due diligence:
> Hypocrisy seems to be your forte these days, quite frankly. The list of examples of such behavior is increasing.
First question. I’m not aware of such list. Do you still have it? Auditors might be interested to take a look. Full disclosure might be the best policy on this matter.
Second question. I note that you used words like “hypocrisy” to qualify Carrick’s behaviour, and also Nick’s behaviour hereunder. I am unsure why the behaviour of Nick or Carrick shan’t be characterized instead as “dogged persistence” or “tremendous tenacity”, expressions which, I believe, connote a virtue for an auditor to endorse.
Here is my question: by which criteria are these epithets judged? It would be important for people to understand which behaviour is considered virtuous and which behaviour is considered less so.
Thank you for your consideration and for constructively moving forward the discussion,
w
PS: Impartial auditors like TerryMN could take a look at these two questions too.
***
Let's hope he saved a copy of that list and that he has a good criterion to evaluate hypocrisy.
All the best,
w
willard, I too look forward to the answer to these questions, so I might be able to further improve myself as a person.
;-)
Thanks for all that data and presentation work, Carrick. It really adds substance to the picture.
Thanks, Nick!
Wow, that is indeed an eye-opening bit of work, Carrick!
Kevin C