So the aims of this post are:
- To see how various levels of approximation reduce the variance
- To see graphically how predictability is removed from the residuals. The idea here is that if we can get to iid residuals in known locations, that distribution should be extendable to unknown locations, giving a clear basis for estimation of coverage uncertainty.
- To consider the implications for accurate estimation of the global average. If each approximation is itself integrable, then the residuals contribute a smaller error. However, unfortunately, they themselves also become harder to integrate, since smoothness is deliberately lost.
The sections below are:
- Temperatures and mesh.
- Statistical modelling.
- Absolute temperatures and anomalies.
- Spherical harmonics spatial analysis.
- LOESS type prediction.
- WebGL residuals.
- Review of aims.
Temperatures and mesh
I'll look at the GHCN/ERSST data for January 2017, as used in the TempLS calculation. It's the data I illustrated in the palettes post. The normals are actually those derived from spherical harmonics integration. Illustrations are based on the triangular mesh for TempLS, which is actually the convex hull of the stations.
Statistical modelling
I am following an ANOVA pattern here. It's not exactly ANOVA, because we don't have subsets with different treatments; I am looking at different predictors for the same data set. A statistical model for this data would divide it into an a priori predictable part and a remainder:
y = p + e
The remainder is identified with the residuals - what you get when you subtract the prediction. The first aim is to improve the prediction, as measured by the sum of squares, or variance, of the remainder. But I haven't yet called e the random component. That is because individual values may still be somewhat predictable from some pattern of the data (or residuals). The classic pattern here is autocorrelation. So that needs a further equation:
e = f + ε
where ε is an independently distributed random variable - no further prediction is possible. That is aspirational. The second equation may well involve lags or shifts, as in ARIMA models.
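A minimal sketch of that decomposition, with made-up function names (this is not the TempLS code, and a lag-1 autocorrelation is just one crude check of randomness), might look like:

```python
import numpy as np

def remainder(y, p):
    """e = y - p: what is left after subtracting the prediction p."""
    return np.asarray(y, dtype=float) - np.asarray(p, dtype=float)

def lag1_autocorr(e):
    """A crude check of how random the residuals are: a lag-1
    autocorrelation well away from zero means e is still partly
    predictable, i.e. a further model e = f + eps is warranted."""
    e = np.asarray(e, dtype=float)
    e = e - e.mean()
    return np.dot(e[:-1], e[1:]) / np.dot(e, e)
```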
The distinction I am making here is analogous to that between the fixed and random effects of ANOVA. In fact, I won't be talking about fixed effects, although the boundary can be fuzzy. BEST uses them - it tries to predict first on the basis of things like latitude. I (and most others) rely on observed normals for that part, which I think is more direct, but is deduced from the data rather than a priori. And in this post I go on to look at the spatial variation pattern as a predictor. NOAA, for example, does this with EOFs; I'll use fitted spherical harmonics here. But the things to keep track of are:
- How much variance has been explained so far? and
- How random are the residuals?
I should add that by sum of squares here, I will generally mean an area-weighted sum, using the weights used for the temperature average.
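For concreteness, here is a small sketch of that area-weighted sum of squares (hypothetical arrays; the weights would be the same mesh weights used in the average, and this is not the TempLS code):

```python
import numpy as np

def weighted_variance(x, w):
    """Area-weighted variance (sum of squares about the weighted mean),
    with w the same weights used for the temperature average."""
    w = np.asarray(w, dtype=float) / np.sum(w)
    m = np.sum(w * x)
    return np.sum(w * (x - m) ** 2)
```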
Absolute temperatures and anomalies
I have often written about why one should never average absolute temperatures, but rather anomalies (as have GISS and NOAA). I often put it in terms of sampling and homogeneity, but in a recent post I quantified it in terms of location sampling error. It also shows up in an SS analysis, and that seems like a good place to start. So the first plot I'll show below is just the plot of temperature in °C (for Jan 2017), and the second is the plot of normals that emerge from the TempLS analysis. The point is that they are pretty much identical. This implies two things:
- The plot doesn't tell us much about Jan 2017
- Much of the variation is due to the spatial variation of normals
|       | Variance Temp | Variance residuals |
|-------|---------------|--------------------|
| Total | 215.225       | 1.388              |
| %     | 100           | 0.64               |
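The % row of the table is of this general form (the numbers here are made up for illustration; the real calculation uses the full station set and the mesh weights):

```python
import numpy as np

# Hypothetical per-station values; wts stand in for the mesh area weights.
temps   = np.array([26.3, -12.1, 14.8,  2.5])
normals = np.array([25.9, -13.0, 15.1,  1.2])
wts     = np.array([ 1.0,   1.3,  0.9,  1.1])

def wvar(x, w):
    """Area-weighted variance about the weighted mean."""
    m = np.average(x, weights=w)
    return np.average((x - m) ** 2, weights=w)

anomalies = temps - normals            # residuals after subtracting normals
pct_remaining = 100 * wvar(anomalies, wts) / wvar(temps, wts)
```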
Anomalies have a special status, because we are familiar with the idea of deviations from normals, and use that as our conventional measure of temperature change. I have plotted them below as the third plot. And it's clear that there is a lot of spatial correlation. That is why anomalies can be successfully interpolated, and so integrated. And it is why the anomaly plot is much more informative than the first two plots, which just tell us about normals, rather than what happened in Jan 2017.
Anyway, the spatial correlation suggests that we can reduce the variance further by some spatial approximation. So to spherical harmonics.
Spherical harmonics spatial analysis
I have described spherical harmonics here and elsewhere. They are the sphere equivalent of Fourier functions. I use here harmonics of order up to 10, a total of 121 functions. The harmonics are fitted and the residuals found. The fit is shown in the fourth figure below, and the residuals in the fifth. The SH fit is what I show each month for TempLS. The results of fitting were:

|       | Variance Anomalies | Variance residuals after SH |
|-------|--------------------|-----------------------------|
| Total | 1.388              | 0.76                        |
| %     | 100                | 54.7                        |
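A rough sketch of this kind of fit (not the actual TempLS code; scipy's sph_harm and a plain weighted least squares solve are assumed here for illustration):

```python
import numpy as np
from scipy.special import sph_harm

def real_sh_basis(lon_deg, lat_deg, lmax=10):
    """Real spherical harmonics up to degree lmax, evaluated at the
    station locations: (lmax + 1)**2 = 121 basis functions for lmax = 10."""
    theta = np.radians(np.asarray(lon_deg)) % (2 * np.pi)   # azimuth
    phi = np.radians(90.0 - np.asarray(lat_deg))            # colatitude
    cols = []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(abs(m), l, theta, phi)
            if m < 0:
                cols.append(np.sqrt(2) * Y.imag)
            elif m == 0:
                cols.append(Y.real)
            else:
                cols.append(np.sqrt(2) * Y.real)
    return np.column_stack(cols)

def fit_sh(anom, lon, lat, wts, lmax=10):
    """Weighted least squares fit of the SH basis to the anomalies;
    returns the fitted values at the stations and the residuals."""
    B = real_sh_basis(lon, lat, lmax)
    sw = np.sqrt(np.asarray(wts, dtype=float))
    coef, *_ = np.linalg.lstsq(B * sw[:, None], anom * sw, rcond=None)
    fit = B @ coef
    return fit, anom - fit
```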
LOESS type prediction
LOESS uses a local weighting function to fit some low-order polynomial to the nearby points, and that then becomes the predictor for the center. The weights are usually radially symmetric, and I use an exponential. The rate of decrease is interesting: Hansen found that you can reasonably use exponentials that stretch out to 1200 km. But having taken out the low-frequency variation with spherical harmonics, I can get a slightly better result with a characteristic decay distance of about 60 km, which seems to be the optimum. Here is the result:

|       | Variance Anomalies after SH | Variance residuals after LOESS |
|-------|-----------------------------|--------------------------------|
| Total | 0.76                        | 0.59                           |
| %     | 100                         | 77.6                           |
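A minimal sketch of that kind of exponentially weighted local predictor (order zero, i.e. a weighted mean rather than a polynomial fit; the names and the distance helper are made up for illustration, not the TempLS code):

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def gc_dist_km(lat0, lon0, lat, lon):
    """Great-circle distance (km) from one point to arrays of points."""
    p0, p = np.radians(lat0), np.radians(lat)
    dlon = np.radians(lon - lon0)
    c = np.sin(p0) * np.sin(p) + np.cos(p0) * np.cos(p) * np.cos(dlon)
    return EARTH_RADIUS_KM * np.arccos(np.clip(c, -1.0, 1.0))

def loess_predict(i, lat, lon, resid, decay_km=60.0):
    """Predict the residual at station i from the other stations,
    using an exponential radial weight exp(-d / decay_km)."""
    d = gc_dist_km(lat[i], lon[i], lat, lon)
    w = np.exp(-d / decay_km)
    w[i] = 0.0                      # leave the point itself out
    return np.sum(w * resid) / np.sum(w)
```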