Sunday, January 25, 2015

Trends, breakpoints and derivatives - part 2

In part 1, I discussed how trends work as a derivative estimate for noisy data. They give the minimum-variance estimator for a prescribed number of data points, but leave quite a lot of high-frequency noise, which can cause confusion. I also gave some of the Savitzky-style theory for calculating derivative operators, and introduced the Welch taper, which I'll use for better smoothing. I've chosen Welch (a parabola) because it is simple, about as good as any, and arises naturally when integrating (summing) the trend coefficients by parts.

I gave theory for the operators previously. The basic plan here is to apply them, particularly the second derivative (acceleration), to see if they help clarify break points and the general pattern of temperatures. The better smoothing might seem contrary to detecting breakpoints, since it smooths them. But it actually helps to avoid spurious cases. I'll show here just the analysis of GISS Land/Ocean.

I'll start with the spectrum of acceleration below. As I said in Part 1, you can actually get much the same results by differencing the smooth (twice for accel), or smoothing the difference. But the combined operator shows best what is happening in the frequency domain.
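That equivalence is easy to demonstrate: because convolution is associative, smoothing with a Welch taper and then differencing twice gives the same answer as applying one combined operator. A minimal sketch (the window length and data here are arbitrary illustrations of mine, not the code behind the post):

```python
import numpy as np

def welch(width):
    """Normalized Welch (parabolic) smoothing taper."""
    u = np.linspace(-1, 1, width)
    w = 1 - u**2
    return w / w.sum()

w = welch(25)
d2 = np.array([1.0, -2.0, 1.0])        # second difference
combined = np.convolve(w, d2)          # one combined acceleration operator

x = np.random.default_rng(0).normal(size=200)   # stand-in noisy series
smooth_then_diff = np.convolve(np.convolve(x, w, "valid"), d2, "valid")
one_operator = np.convolve(x, combined, "valid")
print(np.allclose(smooth_then_diff, one_operator))   # True
```

Either route gives the same series; it is the combined kernel whose frequency response is discussed next.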

Spectrum of acceleration

Here is a plot of the spectra for acceleration, as with trend in part 1.

Some points:
  • Each of the operators is now quadratic at low frequencies, as double differentiation requires. As the frequency (1/width) = 10/Cen is approached, the response again starts to taper. This is the effect of the smoothing at higher frequencies.
  • Each operator then has pronounced band-pass character, slightly more so than with trend. This will show in their behaviour.
  • You can still see the increasing order of roll-off, though each is slower than the corresponding trend spectrum.
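The quadratic low-frequency behaviour can be checked numerically. In this sketch (the window length and FFT padding are arbitrary choices of mine, not values from the post), doubling a small frequency roughly quadruples the response of the combined Welch-plus-second-difference operator:

```python
import numpy as np

def welch(width):
    u = np.linspace(-1, 1, width)
    w = 1 - u**2
    return w / w.sum()

# combined operator: Welch smoother convolved with a second difference
kern = np.convolve(welch(120), [1.0, -2.0, 1.0])
H = np.abs(np.fft.rfft(kern, 4096))    # sampled frequency response

# f^2 behaviour at low frequency: doubling f roughly quadruples |H|
i = 4
print(H[2 * i] / H[i])    # close to 4
```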

Gradient plots

The active plot below shows gradients with 10, 20 and 30 year filters, on 13 different datasets. Each plot shows the three different tapers ("Regress" (red) is just OLS). You can use the buttons at the top to change data set or filter length.


The plot you see first here is GISS Land/Ocean monthly, 30 year filters. The filter is centered, so you see an estimate of the derivative at the year marked on the axis. There is no padding, so the plot stops at 2000. Some notes:
  • The trend is mostly positive (warming).
  • As the smoothing increases, there is more pronounced amplification around the filter period (30 yrs). Inevitably, most of that is noise. But it happens even with the OLS trend.
  • There is no radical change as smoothing increases, but the blue curve strips away high frequency detail, which probably had little meaning.
  • What remains are the familiar features - warming 1910 to near 1940, then a hiatus, then warming from about 1975 on, with a max trend (not a pause) at about 2000. Some sign of deceleration there, although it could be just the amplification of the 30 yr band.
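For reference, the "Regress" (OLS) curve is just the slope of a least-squares line in a centred sliding window. A toy sketch (my own example data and window length, not the post's):

```python
import numpy as np

def sliding_trend(y, width):
    """OLS slope in each centred window of `width` points."""
    t = np.arange(width) - (width - 1) / 2.0
    coef = t / (t * t).sum()                 # OLS slope weights
    return np.convolve(y, coef[::-1], "valid")

rng = np.random.default_rng(1)
y = 0.01 * np.arange(360) + rng.normal(0, 0.1, 360)   # linear warming + noise
g = sliding_trend(y, 120)                             # 120-point (10-yr) window
print(float(np.median(g)))    # close to the true slope, 0.01
```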

Acceleration plots

Now we are estimating the second derivative, which should be mostly the derivative of the above. This will be clearer with the W2 blue curve. The main things to look for are spikes (+ or -) indicating break points, where the derivative changes.

  • The spikes aren't very pronounced. There is a conflict between wanting to remove HF noise and preserving the spike. So the smoothest line shows smoothish spikes, but that is actually the meaningful part. It isn't really better without smoothing. So here we see 1910 and 1940 as the most prominent features, with a reasonable peak around 1972 (it's now really hard to pin down a year, as it should be). At this resolution, there is no sign of a peak at 2000.
  • Going to shorter periods doesn't really reveal more. There is just more noise at about the periodicity of the filter length.
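As a sanity check on the break-point idea, here is a toy sketch (assumed kernel and synthetic data, my own illustration): a piecewise-linear series whose slope changes at one point produces an acceleration spike centred on the break.

```python
import numpy as np

def welch(width):
    u = np.linspace(-1, 1, width)
    w = 1 - u**2
    return w / w.sum()

# flat, then warming at 0.02 per step: a single break at sample 300
y = np.concatenate([np.zeros(300), 0.02 * np.arange(300)])

kern = np.convolve(welch(121), [1.0, -2.0, 1.0])   # smoothed acceleration
acc = np.convolve(y, kern, "valid")
peak = int(np.argmax(acc)) + len(kern) // 2        # re-centre the index
print(peak)    # the spike sits at the break, near sample 300
```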

More about the datasets

  • HadCRUT - HADCRUT 4 Land/Ocean
  • GISSlo - GISS Land/Ocean
  • NOAAlo - NOAA Land/Ocean
  • UAH5.6 - UAH Lower Troposphere
  • RSS.MSU - RSS Lower Troposphere
  • TempLSgrid - Land/Ocean
  • BESTlo - Land/Ocean
  • C.Wkrig - Cowtan and Way kriging Land/Ocean
  • TempLSmesh - Land/Ocean
  • BESTla - Land Only
  • GISS.Ts - Met stations
  • CRUTEM - Land Only
  • NOAAla - Land Only
  • HADSST3 - Sea Surface
  • NOAAsst - Sea Surface


  1. Really interesting. Can you do this to the PDO and AMO indexes?

    1. Yes, it is just a time series method. I'm currently looking at updating the climate plotter data - I'll see if I can add it as a general facility.

  2. This pair of articles is very informative.

    It is odd however that you have not pointed out one of the most significant features of all these filters: the negative lobes.

    The first negative lobe appears to have a magnitude of about 50% of the main peak. That is a huge problem: 50% leakage would be bad enough, but actually inverting the signal is potentially disastrous if there is significant signal in that part of the spectrum.

    This actually looks considerably worse than a straight running mean:

    (Note that is a magnitude plot and, as here, every other lobe is negative for RM.)

    1. Greg,
      Yes, it is. That is why I am trying to reduce noise with Welch filters.

      It is necessarily worse than the mean. It is a differentiating filter. That means among other things that the big central peak of the spectrum is missing, so the lobes have to look bigger.

    2. ps Greg,
      I have a new post up on this.

    3. Thanks Nick,

      what are the magnitude and frequency of the main peak and of the peak in the negative lobe? Eye-balling your graph I'd guess about 11/century and 22/century.

      Do you have an analytic expression for the sliding-trend 'filter'? That's the red line, right?

      I'd like to plot this against gaussian-derivative that I detailed in your new post.

    4. Greg,
      The analytic form of the spectrum is (sin(x) - x*cos(x))/x², up to a constant multiplier,
      where x=f*π/10, f=freq in 1/Cen
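[An added numerical check of that shape (my sketch; the test frequency is arbitrary): the OLS trend kernel has weights proportional to t, and on the interval [-1,1] the magnitude of its Fourier transform is 2|sin(w) - w*cos(w)|/w².]

```python
import numpy as np

t = np.linspace(-1, 1, 20001)
w = 3.7                                   # arbitrary test frequency
ft = t * np.exp(-1j * w * t)              # trend weights times e^{-iwt}
dt = t[1] - t[0]
numeric = abs((ft.sum() - 0.5 * (ft[0] + ft[-1])) * dt)   # trapezoid rule
analytic = 2 * abs(np.sin(w) - w * np.cos(w)) / w**2
print(bool(np.isclose(numeric, analytic, atol=1e-6)))     # True
```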

    5. Thanks Nick,

      that works for n=10 but if I put n=20 it blows up at zero. I think there's another n missing somewhere.

    6. Greg,
      I'm working through the multiplier; π²/6 might not be right.
      There is interesting theory. The three curves that I plotted are spherical Bessel functions of order 1,2,3. The following R code duplicates my original plot (that you showed at CA) except for the y-scaling:

      bess = function(x,n) sqrt(pi/(2*x))*besselJ(x, n+0.5)  # spherical Bessel j_n
      x = seq(0.01, 30, 0.01); y = NULL; a = 1               # a: arbitrary y-scale
      for(i in 1:3) y = cbind(y, a*bess(x,i))
      plot(range(x), range(y), type="n"); for(i in 1:3) lines(x, y[,i], col=i+1)

    7. Thanks for the code. I extended that index to 4 and it starts to look like a useful response curve: only a moderate neg. lobe and much faster roll-off than GD.

      That makes it a good choice where G is too slow.

      Can your method be extended to produce something with that freq resp?

    8. He, he !! I exaggerated "much faster" . In fact I have a pretty good hunch that this is converging to gaussian as the order increases. All roads lead to Rome. Central limit theorem ?

      Fourth order may still be a useful tool since it does have a zero in the FD. That can be a notable selling point in many situations. For the price of a small neg. lobe this could be good.

      It seems to settle a lot better than a crudely cropped 2-sig gaussian, too.

  3. Here is the response of the gaussian-derivative. Since the freq resp of a gaussian is itself a gaussian, the response of diff plus gauss is the linear ramp of diff times the gaussian, and the gaussian wins at high freq.
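[A sketch of that response, added for illustration (sigma and the frequency grid are my assumptions): the magnitude response of a gaussian-derivative is proportional to f*exp(-2π²f²σ²), so it rises linearly, peaks at f = 1/(2πσ), and decays with no negative lobes.]

```python
import numpy as np

sigma = 5.0                       # years, as in the 5y example below
f = np.linspace(0, 1.0, 2001)     # frequency in 1/years
H = f * np.exp(-2 * (np.pi * f * sigma) ** 2)   # |FT| up to a constant

print(bool(np.all(H >= 0)))       # True: no negative lobes
print(float(f[np.argmax(H)]))     # peak near 1/(2*pi*sigma), about 0.032
```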

    The example I plotted is for sigma=5y, which is a suitable replacement for the M&F 15y sliding-trend.

    It would be good to see this (with suitable frequency scaling) on the same plot as your graph in this post to allow direct comparison.

    Roll-off is slow and even but, most importantly, it does not have negative lobes. It looks like your blue line filter may have the merit of a much faster roll-off, but the neg. lobe, though smaller, is still a bit ugly.

    Presumably you could take the process further and get a still improved version.

    I would like to try a Lanczos-derivative, in the same way as GD. That should have minimum overshoot/ripple and faster roll-off than the blue line.

  4. One thing I should add is that Nick's transfer function is not dimensionless. This is the ratio of the trend amplitude to temperature amplitude.

    If you want to look at the dimensionless version, you need to divide by 1/f (the value at f=0 is undefined as is appropriate).

    Also, you have to look at the noise spectrum when looking at the relative magnitude of the signal that passes through the first lobe as compared to the pass band of the filter.

    Since this is also 1/f to some power, in practice, the amount of energy in the first band is much less than the 50% that Greg is quoting.

  5. You seem to assume that temperature is random: integrated white noise. If it were, we would not all be arguing about "forcings".

    There were two major "forcing" events, El Chichon and Mt Pinatubo, that inserted a very significant circa-10y signal in the latter half of the 20th c. Inverting part of that is definitely not desirable.

    There are some errors in Nick's formula as given, since it only works for n=10. I would not start doing dimensional analysis until he's fixed it.

    The transfer should not be undefined at zero; the difference operation will take out any constant term. It is correctly zero and should tend to zero in a well-defined manner.

    1. Greg,
      I'm not sure what to fix. Could you be more specific?

      The formula (sin(x)-x*cos(x))/x² tends to x/3 as x->0, and is well defined at 0. You can check that by power series expansion:
      x + x³/6 + ... − (x + x³/2 + ...)
      It doesn't depend on n=10

    2. Oops
      x − x³/6 + ... − (x − x³/2 + ...) = x³/3 + ...

    3. OK, the formula you give just above is the derivative of the sinc fn.


      so my first question is: why the diff of sinc? We are considering diff in the TD; that is not equivalent to diff in the FD.

      If you are doing a running mean of diff, or diff of RM, in the TD, you are convolving a two-pt diff kernel with a rectangular pulse, so in the FD you multiply:

      FT(diff) * FT(pulse) = f . sinc(f) = f*sin(f)/f = sin(f) !!

      ie running mean of diff has ZERO roll-off and periodic pass-bands (with the nice feature of inverting every other band).
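[That zero-roll-off claim is easy to verify numerically; an added sketch with an assumed 15-point window: the impulse response of diff-then-running-mean has magnitude response (2/n)|sin(πfn)|, a sine with a flat envelope.]

```python
import numpy as np

n = 15                                            # running-mean window length
kern = np.convolve(np.ones(n) / n, [1.0, -1.0])   # diff then running mean
H = np.abs(np.fft.rfft(kern, 8192))
f = np.fft.rfftfreq(8192)

predicted = (2.0 / n) * np.abs(np.sin(np.pi * f * n))   # sine, flat envelope
print(bool(np.allclose(H, predicted, atol=1e-6)))       # True
```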

      Now if you invoke Carrick's 1/f-type spectrum for the signal being analysed, you may get close to your red line as the spectrum of the output.

      I should note in passing that this noise model is totally integrated white noise and does not apply to climate. Climate is not totally random, as looking at any spectrum of real climate data will show (not just temps). There are not infinitely strong low frequencies. Any real data levels off as you approach zero freq.

      If it is a truly random component it may not matter too much inverting it. If it is the strong volcanic signal that is one of the major features of both models and climate it "matters", in spades.

      Maybe this is the difference between RM of diff and a sliding trend, which is a min-variance condition, not min deviation.

      I've been trying to check what you posted but I'm having trouble reconciling your integrals with integration by parts. You seem to assume certain things that are probably obvious to you but not necessarily to the reader.

      I'll post on the relevant thread about that.

    4. A quick test doing diff and RM on an impulse confirms the freq resp is sin(f) with zeroes every 15y, data inverted between 15 and 30y :(

      No good.

      Also look at graphs of 15y trends in M&F paper. Clearly loads of sub 15y variability in the results. Spiky as hell.

    5. "Also look at graphs of 15y trends in M&F paper. Clearly loads of sub 15y variability in the results. Spiky as hell. "

      What's the deal with these tossed-off remarks? As if you are such an authority that you can casually project confidence in what you are asserting.

      I am still looking at factorizing the temperature time-series. The basic CSALT factors (CO2-SOI-Aerosols-LOD-TSI) generate this fit, where blue is the model

      In keeping with the point of the top-level post, I am only applying a 12-month sliding window filter to the GISS data. If you look closely you probably notice that fine detail occasionally pops through as shoulders and minor peaks and valleys in the model fit. This is not a coincidence as it reveals the impact of ENSO on the global temperature, even when strong El Nino or La Nina conditions do not prevail.

      Further, what is interesting is that the residual noise contains spectral components consisting of higher frequency harmonics of the 21-22 year Hale cycle, particularly N=3, N=4.
      I probably will take a look at this in more depth at the risk of being accused of over-fitting.

      After that the residual will probably lie somewhere between red and white noise.

  6. I likely will never use filters with that large a window. All I really need is a window of about 1 year to suppress the intra-annual noise. It is becoming more and more clear that a time-series such as the global GISS record is composed of ENSO, volcanic, CO2, etc. factors, and that one can use these signatures as model-based compensation or filtering. And for the long-term variations, the temperature compensation is empirically established by the LOD anomaly.

    So the approach is to apply the same small-window filter to both the SOI and the GISS time series and compose that way.
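[A toy sketch of that compose-after-identical-filtering idea, added for illustration; everything here is made-up data, not SOI or GISS:]

```python
import numpy as np

def annual_smooth(y, n=12):
    """Centred 12-month moving average (the small-window filter)."""
    return np.convolve(y, np.ones(n) / n, "valid")

rng = np.random.default_rng(2)
months = np.arange(240)
soi = np.sin(2 * np.pi * months / 45) + rng.normal(0, 0.3, 240)   # toy index
temp = 0.4 * soi + 0.002 * months + rng.normal(0, 0.3, 240)       # toy temps

# the same short filter applied to both series before comparing them
r = np.corrcoef(annual_smooth(soi), annual_smooth(temp))[0, 1]
print(r > 0.5)    # True: the shared signal survives the smoothing
```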

    Moreover, this approach points out how delusional the current hysteria is over manipulation of temperature data -- see Curry, WUWT, and the usual suspects. There is no possible way that manipulation of temperature and of a metric such as SOI can be aligned so perfectly as a result of a conspiratorial strategy.

    1. "All I really need is a window of about 1 year to suppress the intra-annual noise. "

      Bad news Webby, you can't possibly filter noise with "inter-annual" components using a window of "about a year", think about it.

      Your final para has fundamental errors of logic. Firstly that your CSALT thing means anything. It is basically an exercise in over-fitting using a now very large number of natural variables as possible explanatory variables.

      It is you that is being delusional, not your "usual suspects".

      Make yourself a similar number of AR1 time series and repeat the exercise. You will probably get a very similar level of fit. Then try a Monte Carlo on your fit to real data: chop out a few 10 or 15y segments and swap them over, then redo your fit and check the correlation. Do that 1000 times or so and do some stats. Find out which variables have any objective explanatory power.

      You may get a confirmed result on one or two. Volcanoes probably. Even that you are not regressing correctly, since the climate response is not in phase with the forcing, so you will not be correctly scaled. See my recent post at Curry's on that question.

      I hope those suggestions help you beef up your model.

    2. Bad news Goodmann, you have a reading comprehension problem. I clearly wrote "intra-annual", which you somehow read as "inter-annual".

      The CSALT model is the winner in all this and you are afraid of what it can do with absolutely no over-fitting.

  7. How about you learn some manners PUKE-ITE. ?

    Yes, I misread fine.

    Now as usual you ignore the very valid points I make about your ill-conceived model and how you need to test it and why your regressions are wrong, and simply declare it "the winner in all this".

    Very powerful argument that.

    You're right, I'm really, really afraid of the power of your model and what it is going to do to the world. LOL.

  8. I don't know what your problem is Goodmann, other than you have problems spelling your own name.

    So you want me to go through your "very valid points", eh?

    First, I don't do AR1: I model an Ornstein-Uhlenbeck random walk process. And AR1 isn't going to cut it.

    Next, what exactly is the problem with the well-known factors that contribute to natural variability? You call them explanatory variables. Yes, I have seen how you are thrashing and wailing about trying to make sense of the volcanic aerosols. My recommendation to you is to take a deep breath and just do it correctly. The result pops right out, just as the SOI factor pops out. Adding in the log(CO2), TSI, and LOD anomaly brings the correlation coefficient up to at least 0.98.

    That is essentially the starting point for further analysis.

    If you really want to ignore 98% of the jigsaw puzzle because you are beholden to some political agenda, that is your problem. It really is painful to watch you abnegators try to make things more difficult than they actually are, for the sole purpose of creating FUD.

    1. Well, Pukeite, you can do Ornstein-Uhlenbeck if you think it makes you sound smarter, but then just saying "And that isn't going to cut it" is hardly a response. You don't seem to realise that most of your 98% of the jigsaw puzzle is probably spurious correlation. You seem to be up to about a dozen variables now.

      Then you tell me to "do it correctly" whatever you think that means. You certainly know enough about ODEs and system behaviour to know that you should not be regressing a radiative forcing directly against temp.

      Correlating ENSO or SOI to global mean temperature is a joke since about half the planet shows negative correlations and everything in between with varying lags.

      You know as little about me as I do about you, so your pathetic last comment about my motivations has probably more to do with projection than anything else.

      Now if you wanted to show that your magic 98% is more than spurious correlation you know how to do so. But you won't will you, because you are scared of what you might find.

      You prefer sounding off, being insulting and trying to impress with your long words.

      You talk the talk but you won't walk the walk.

    2. Yeah, I am sure I am "up to about a dozen variables now", when I count 6 factors that comprise essentially fixed time-series.

      I really don't care for your FUD. Everything you say is the exact opposite to the truth.

      Everyone knows that ENSO cycles correlate strongly to warm and cool periods throughout the globe. That is not under discussion, and it is certainly not a "spurious correlation". Matching an erratic time-series that represents ocean oscillations to an erratic time series representing global temperature can not be jiggered.

      It is also well known that strong volcanic eruptions can cool the climate for a few years. This also can not be jiggered.

      Dish this data up on a silver platter and Goodmann still wouldn't know what to do with it.

    3. "It is also well known that strong volcanic eruptions can cool the climate for a few years."

      I did not say anything to the contrary; in fact I linked you to an article I've just done investigating this very thing. It can be "jiggered" if you do not account for the proper relationship between radiative forcing and surface temperature. This is what is most commonly done, as I documented in a list of references in that article. You appear to make the same mistake.

      One of the results is a spurious apparent warming several years after an eruption, after ENSO and volcanoes have supposedly been removed. This was noted in Santer 2014. I explained where this comes from.

      In doing your kind of scatter-gun multivariate analysis, this will get falsely attributed to another variable, likely CO2. That will not prevent you getting a high correlation coeff. if you have enough variables.

      BTW, dLOD is mostly transfer of momentum from solid earth to atmosphere, via evaporation and rainfall, so you are using a symptom of temperature change as a causative variable.

      But don't worry all that is just FUD, you're definitely onto a "winner" with a score like 98% . I haven't seen anything that impressive since Pratt's global warming to within a millikelvin farce.

    4. "
      It can be "jiggered" if you do not account for the proper relationship between radiative forcing and surface temperature.

      Actually it's just a first-order perturbation. Get used to using differentials.

      How can you remove a "spurious apparent warming" -- just because you don't like what you see?

      LOD anomaly is useful because it reveals the trendless long-term temperature variability. The significance is that the trend is removed. Just like the trend is missing in the SOI as well. This leaves the trend in the residual, and you can either attribute that to an effective log(CO2) or you are going to have to get very creative.

      But then when you say "I haven't seen anything that impressive since Pratt's global warming to within a millikelvin farce.", I realize that you have truly jumped the shark, as Pratt applied a smoothing filter that essentially removed all the fine detail that I am trying to model. And you apparently are now trying to do the same thing with these multi-year filters that you are playing around with.

      Are you really that deceitful? At least I am honest in not removing the interesting part of the time-series. Get serious and use the data as is with minimal filtering, just as I asserted in my first comment today.

      Or are you afraid of what you are going to find?