The standard remedy has been the one recommended by McCulloch (though long used before): an AR(1) regression model, usually implemented as the OLS trend with a Quenouille (1952) correction to the confidence intervals of the trend. This post introduces an extension of the Quenouille approximation to ARMA(1,1).
The Quenouille approximation is fine, but in 2011 Foster and Rahmstorf argued that AR(1) was inadequate, and that ARMA(1,1), with an extra parameter, better described the observed autocorrelation functions.
This has some practical consequences, because ARMA(1,1) was then adopted by the Skeptical Science trend calculator and is being cited as the basis for various numbers of "years without statistically significant warming". However, other such quotes, like the famous "none since 1995" quote of Phil Jones, are based on the more standard AR(1) model.
My trend viewer uses AR(1) with Quenouille. Since I'm regularly updating now, the inconsistency may be an issue. So I decided to further examine the choice of model.
Some linear algebra of regression and ARIMA

This builds on an earlier post showing how autoregressive models can be fitted exactly by solving polynomial equations. Suppose we have a vector y of temperatures over consecutive time periods (months), and a matrix X of regressors, the first column just 1's, and the second (and last) the times of each y. The 3-column matrix Y is just (y, X).
Then the OLS regression problem is to fit to the stochastic model
d = Y B = ε;  B = (1, -b),  b = (α, β)
The RHS ε is a vector of random variables, which are assumed iid (independent, identically distributed). b are the coefficients, d the residuals. The OLS fit is X* (Y B) = 0.
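As a concrete illustration of that orthogonality condition (a numpy sketch rather than the R used later in the post, with a synthetic y standing in for temperatures):

```python
import numpy as np

# Synthetic monthly series for illustration (a real y would be temperature anomalies)
rng = np.random.default_rng(0)
n = 120
t = np.arange(n, dtype=float)
y = 0.01 * t + rng.normal(scale=0.1, size=n)

X = np.column_stack([np.ones(n), t])   # first column 1's, second the times
b = np.linalg.solve(X.T @ X, X.T @ y)  # solves the normal equations X*(y - X b) = 0
d = y - X @ b                          # residuals
# The defining property of the OLS fit: residuals orthogonal to the regressors
assert np.allclose(X.T @ d, 0.0, atol=1e-8)
```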
An AR(1) model brings in a lag matrix L. Ly is just y with every element moved back one step. L is just a square matrix with 0 everywhere except for a 1 just below each point on the diagonal. The stochastic model is just (I - ρL)d = ε.
The operator on d has these properties: it is banded (just two diagonals), and Toeplitz. Toeplitz means that each row is almost a copy of the one above, shifted one place to the right (so just one new element on the left). Here it expresses stationarity - ie the relation between variables doesn't change between timesteps (except that time moves on a step).
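A minimal numpy sketch of the lag matrix and the AR(1) operator (illustrative values only):

```python
import numpy as np

n = 6
L = np.eye(n, k=-1)            # 1 just below each diagonal point, 0 elsewhere
d = np.arange(1.0, n + 1)      # stand-in for a residual vector

# L shifts everything back one step (a zero enters at the start)
assert np.allclose(L @ d, [0, 1, 2, 3, 4, 5])

rho = 0.6
eps = (np.eye(n) - rho * L) @ d   # the AR(1) stochastic model: (I - rho L) d = eps
# each eps[i] is d[i] - rho * d[i-1]
```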
A shift of two places is just expressed by the matrix L^2. The general ARMA model is
A d = M ε
where A and M are just low order polynomials in L. The notation ARMA(a,m) just lists the order of these polynomials. There are a+m parameters.
Again ε is assumed iid, so the correct fitting is the LS solution of M^-1 A d = 0. However, in terms of the coefficients b, this is little different from the OLS fit. The significant difference comes in the uncertainty attributed to the coefficients.
The OLS formula X* Y B = X* ε expresses the coefficients b as stochastic variables (the solution values correspond to mean ε = 0):

b = (X*X)^-1 X* ε + mean(b)

so, with K = Covar(ε),

Covar(b) = (X*X)^-1 X* K X (X*X)^-1
Continuing the idea that stationarity implies Toeplitz matrices, it is reasonable to expect that K can be modelled by a Toeplitz matrix, in which each row is a copy of the autocovariance, centered on the diagonal. And the covariance of b is then made up from the zero, first and second central moments of the acv. We could calculate it that way, but noise from the ragged ends is a problem. That's why ARMA models are used.
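A small numpy sketch of that sandwich formula, with K taken as the Toeplitz matrix of an assumed AR(1) autocovariance; the inflation of the trend standard error approaches Quenouille's √((1+ρ)/(1-ρ)) for a long series:

```python
import numpy as np

n, rho = 200, 0.6
t = np.arange(n, dtype=float)
X = np.column_stack([np.ones(n), t])

# Toeplitz K: each row is the (assumed AR(1)) autocovariance centered on the diagonal
i = np.arange(n)
K = rho ** np.abs(i[:, None] - i[None, :])

XtX_inv = np.linalg.inv(X.T @ X)
covb = XtX_inv @ X.T @ K @ X @ XtX_inv   # Covar(b) = (X*X)^-1 X*K X (X*X)^-1
se_trend = np.sqrt(covb[1, 1])
se_ols = np.sqrt(XtX_inv[1, 1])          # the iid (K = I) value

# For rho = 0.6 the ratio approaches sqrt((1+rho)/(1-rho)) = 2
ratio = se_trend / se_ols
```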
But in the ARMA models, we have a similar Toeplitz matrix, the model covariance K = σ² (A^-1 M)(A^-1 M)*, so it is reasonable to compare these with the acv. I'll normalise so central values are 1, which makes the acv an acf.
Here is a plot of the acf's of five major temperature indices from 1980 to present, using the R acf() command. It's an active plot - you can press the buttons for the case you want:
There are some marked features in these acf's, to an extent that I found surprising. The general pattern is that the part close to zero lag represents the structure of the randomness, while further away it is dominated by systematic variations - periodicities and other non-random deviations from linearity. There is a marked periodicity at about 45 months. It's in the data, and shows up in an FFT. Here it shows up as a sort of sinc function. Naturally the acf models can't follow it, nor should they. It would be better to take out some of these low frequency effects. I'll say more about that in a future post that looks at AIC considerations.
Now AR(1) fits a geometric progression, forcing a match to the first lag. As Foster and Rahmstorf say, this tends to underestimate, and ARMA(1,1) gives an extra degree of freedom which allows the tail to decay more slowly, matching what is observed. So here are the same acf's but with the two models added. The coefficients are calculated using the R arima() command with the CSS optimisation, and the model acf's produced by ARMAacf().
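The theoretical acf's can also be computed directly from the standard ARMA(1,1) formulas (a numpy sketch with illustrative parameters, rather than R's ARMAacf()); it shows how the MA term decouples the lag-1 value from the decay rate:

```python
import numpy as np

def arma11_acf(r, m, nlags):
    """Theoretical acf of (1 - r L) x = (1 + m L) eps (standard ARMA(1,1) formulas)."""
    rho1 = (1 + r * m) * (r + m) / (1 + 2 * r * m + m * m)  # lag-1 value
    acf = rho1 * r ** np.arange(0.0, nlags)                 # lags 1..nlags decay as r**k
    return np.concatenate([[1.0], acf])

r, m = 0.9, -0.4                   # illustrative coefficients only
arma = arma11_acf(r, m, 12)
ar1 = arma[1] ** np.arange(13.0)   # an AR(1) acf forced to match the same first lag

# The matched AR(1) tail falls off faster than the ARMA(1,1) tail
assert arma[12] > ar1[12]
```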
But now you can see that, while the ARMA(1,1) is a good match until the acf gets close to zero, it of course stays positive while the acf goes negative. The central moments of the acf include the negative parts, so it's reasonable to say that the ARMA(1,1) approximation overshoots, while the earlier undershoot of the AR(1) approximation starts to look better.
What about other models? Here is HADCRUT 4 modelled with AR(2) and AR(3) as well.
A family of Quenouille approximations
If you center the time sequence so that it has zero mean, then the trend variance is proportional to the zeroth moment (area) under the acf within the range. Since the ARMA model acf's generally tend rather rapidly to zero relative to the trend range, it's a good approximation to take the whole acf area. If ρ is the parameter of the AR(1) model, then the acf G(i,j) = ρ^|i-j|, and if you sum over all j for fixed i, the area is (1+ρ)/(1-ρ).
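A quick numeric check of that area, summing one row of G with the index well away from the ends:

```python
import numpy as np

rho, n = 0.6, 2001
i = n // 2                               # a row far from either end
row = rho ** np.abs(np.arange(n) - i)    # row i of G(i,j) = rho^|i-j|
# Summing over j gives Quenouille's factor (1+rho)/(1-rho) = 4 here
assert abs(row.sum() - (1 + rho) / (1 - rho)) < 1e-9
```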
For general ARMA with polynomials A(L), M(L), L = lag, there are corresponding approximation factors. I'll describe the general theory in a coming post (it involves complex analysis), but for ARMA(1,1), with M(L) = 1 + mL and A(L) = 1 - rL, the area is (1+r)(1+m)^2/((1-r)(1+2rm+m^2)), which reduces to Quenouille's (1+ρ)/(1-ρ) when m = 0.
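Summing the ARMA(1,1) acf numerically gives an area of (1+r)(1+m)^2/((1-r)(1+2rm+m^2)), reducing to Quenouille's (1+ρ)/(1-ρ) at m = 0; a numpy check (the parameter values are illustrative, chosen near the fitted coefficients):

```python
import numpy as np

r, m = 0.86, -0.38   # illustrative values, close to the fitted ARMA(1,1) coefficients
rho1 = (1 + r * m) * (r + m) / (1 + 2 * r * m + m * m)   # lag-1 acf
k = np.arange(1.0, 4000.0)
area = 1 + 2 * np.sum(rho1 * r ** (k - 1))               # acf summed over all lags
closed = (1 + r) * (1 + m) ** 2 / ((1 - r) * (1 + 2 * r * m + m * m))
assert abs(area - closed) < 1e-9
```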
Results

The coefficients are calculated using the R arima() command with the CSS optimisation. The period is from 1980 to June 2013. Trends are quoted in °C/century. Note that I give standard errors only - you need to multiply these by 2 to get a conventional confidence interval.
|OLS trend β|1.5662|1.5683|1.4842|1.4139|1.3039|
|OLS s.e. β|0.0656|0.0705|0.0621|0.0916|0.0900|
|AR(1) trend β|1.5707|1.5794|1.4925|1.4490|1.3269|
|AR(1) s.e. β|0.1452|0.1371|0.1331|0.2509|0.2372|
|AR(1) Quen s.e.|0.1448|0.1367|0.1327|0.2504|0.2371|
|ARMA(1,1) trend β|1.5801|1.5956|1.5038|1.4714|1.3474|
|ARMA(1,1) s.e. β|0.2126|0.1985|0.1911|0.3298|0.3393|
|ARMA(1,1) Quen s.e.|0.2115|0.1976|0.1901|0.3290|0.3387|
|ARMA(1,1) MA coef|-0.3785|-0.4007|-0.3690|-0.2677|-0.3329|
Conclusions and next steps

I think AR(1) probably underestimates the variability of the trend and ARMA(1,1) probably overestimates it. I think it is possible to use the area under the ACF directly without use of a model, but it will require tapering, which may introduce similar ambiguity.
One quantitative way of deciding when to stop adding model parameters is the Akaike information criterion. Up to ARMA(1,1) there is no sign of overfitting.
I'll expand on the topic of the extended Quenouille approximation in a future post. The mathematics is interesting, and the results are elegant. But it requires the model coefficients (as it does for AR(1)), so it sidesteps only part of the work. I described in an earlier autocorrelation post how these could be obtained by polynomial solution; I have a more systematic version which I'll post soon.