Wednesday, October 18, 2017

GISS September down 0.04°C from August.

GISS showed a small decrease, going from 0.84°C in August to 0.80°C in September (GISS report here). It was the fourth warmest September in the record. That decrease is very similar to the 0.06°C fall in TempLS mesh.

The overall pattern was similar to that in TempLS. Warm almost everywhere, especially across N America, S America and the Middle/Near East. Cool spots in W Europe and N central Siberia.

As usual here, I will compare the GISS and previous TempLS plots below the jump.

Sunday, October 8, 2017

September global surface temperature down 0.06°C

TempLS mesh anomaly (1961-90 base) was down from 0.673°C in August to 0.613°C in September. This compares with the smaller drop of 0.02°C in the NCEP/NCAR index, and a substantial rise (0.12) in the UAH LT satellite index. UAH is up a lot in the last two months.

Despite the fall, most of the world looked pretty warm. Warmth in Canada, S America, the Near East, and China. A cool spot in Siberia, but warmth in Antarctica, which contrasts with the earlier NCEP/NCAR report, which showed predominant cold there. Here is the temperature map:


Tuesday, October 3, 2017

September NCEP/NCAR global anomaly down 0.02°C from August

In the Moyhu NCEP/NCAR index, the monthly reanalysis average declined from 0.337°C in August to 0.317°C in September, 2017. It was again a very varied month; it looked as if it would come out quite warm until a steep dip about the 23rd; that cool spell then lasted until the end of the month.

The main feature was cold in Antarctica, so again we can expect this to be strongly reflected in GISS and TempLS, and less in NOAA and HADCRUT. Elsewhere, cold in Central Russia, but warm in the west; fairly warm around the Arctic.







Friday, September 29, 2017

Nested gridding, Hadcrut, and Cowtan/Way.

Update: I had made an error in coding for the HADCRUT/C&W example - see below. The agreement with C&W is now much improved.

In my previous post, I introduced the idea of hierarchical, or nested, gridding. In earlier posts, eg here and here, I had described using platonic solids as a basis for grids on the sphere that were reasonably uniform, and free of the pole singularities of latitude/longitude. I gave data files for icosahedral hexagon meshes of various degrees of resolution, usually proceeding by a factor of two in cell number or length scale. And in that previous post, I emphasised the simplicity of a scheme for working out which cell a point belonged to, by finding the nearest centre point. I foreshadowed the idea of embedding each such grid in a coarser parent, with grid averaging proceeding downward, and using the progressive information to supply estimates for empty cells.

The following graph from HADCRUT illustrates the problem. It shows July 2017 temperature anomalies on a 5°x5° grid, with colors for cells that have data, and white otherwise. They average the area colored, and omit the rest from the average. As I often argue, as a global estimate, this effectively replaces the rest by the average value. HADCRUT is aware of this, because they actually average by hemispheres, which means the infilling is done with the hemisphere average rather than the global one. As they point out, this has an important benefit in earlier years, when the majority of missing cells were in the SH, which was also behaving differently, so the hemisphere average is more appropriate than the global. On the right, I show the same figure, but this time with my crude coloring in (with Paint) of that hemisphere average. You can assess how appropriate the infill is:



A much-discussed paper by Cowtan and Way 2013 noted that this process led to bias: the infilled areas tended not to behave like the average, but were warming faster, so the warming was underestimated, particularly since 2000, because of the Arctic. They described a number of remedies, and I'll concentrate on the use of kriging, a fairly elaborate geostatistical interpolation method. When kriging was applied, trends based on HADCRUT data increased, coming more into line with other indices that do some degree of interpolation.

I think the right way to look at this is getting infilling right. HADCRUT was on the right track in using hemisphere averages, but it should be much more local. Every missing cell should be assigned the best estimate based on local data. This is in the spirit of spatial averaging. The cells are chosen as regions of proximity to a finite number of measurement points, and are assigned an average from those points because of the proximity. Proximity does not end at an artificial cell boundary.

In the previous post, I set up a grid averaging based on an inventory of about 11000 stations (including GHCN and ERSST), but integrated not temperature but a simple function, sin(latitude)^2, which should give 1/3. I used averaging omitting empty cells, and showed that at coarse resolution the correct value was closely approximated, but this degraded with refinement, because of the growing number of empty cells. I'll now complete that table using nested integration with the hexagonal grid. At each successive level, if a cell is empty, it is assigned the average value of the smallest cell from a previous integration that includes it. (I have fixed the which.min error here too; it made little difference.)

Level   Num cells   Simple average   Nested average
1       32          0.3292           0.3292
2       122         0.3311           0.3311
3       272         0.3275           0.3317
4       482         0.3256           0.3314
5       1082        0.3206           0.3317
6       1922        0.3167           0.332
7       3632        0.311            0.3313
8       7682        0.3096           0.3315


The simple average shows that there is an optimum: a grid fine enough to resolve the (small) variation, but coarse enough to have data in most cells. The function is smooth, so there is little penalty for being too coarse, but a larger one for being too fine, since the areas of empty cells coincide with the function peaks at the poles. The merit of the nested average is that it removes this downside. Further refinement may not help very much, but it does no harm, because a near-local value is always used.

The actual coding for nested averaging is quite simple, and I'll give a more complete example below.

HADCRUT and Cowtan/Way

Cowtan and Way thankfully released a very complete data set with their paper, so I'll redo their calculation (with kriging) with nested gridding and compare results. They used HADCRUT 4.1.1.0, with data ending at the end of 2012. Here is a plot of results from 1980, with nested integration of the HADCRUT gridded data at cell centres (but on a hex grid). I'm showing every even step as hier1-4, with hier4 being the highest resolution at 7682 cells. All anomalies are relative to 1961-90.


Update: I had made an error in coding for the HADCRUT/C&W example - see code. I had used which.min instead of which.max. This almost worked, because it placed locations in the cells on the opposite side of the globe, consistently. However, the result is now much more consistent with C&W. With refining, the integrals now approach from below, and also converge much more tightly.

The HADCRUT 4 published monthly average (V4.1.1.0) is given in red, and the Cowtan and Way Version 1 kriging in black. The nested integration makes even more difference than C&W, mainly in the period from 1995 to the early 2000s. As with C&W, it adheres closely to HADCRUT in earlier years, when presumably there isn't much bias associated with the missing data. C&W focussed on the effect on OLS trends, particularly since 1/1997. Here is a table, in °C/Century:

            Trend 1997-2012   Trend 1980-2012
HAD 4       0.462             1.57
Hier1       0.902             1.615
Hier2       0.929             1.635
Hier3       0.967             1.625
Hier4       0.956             1.624
C&W krig    0.97              1.689


Convergence to the C&W trend I calculate is very good. In their paper, for 1997-2012 C&W give a trend of 1.08 °C/Cen (Table III), which does not agree so well with the nested results. C&W used ARMA(1,1) rather than OLS, but the discrepancy seems too large for that. Update: Kevin Cowtan has explained the difference in a comment below.
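For definiteness, here is a minimal sketch (not the code used for the table above) of how such a trend can be computed: an OLS fit to a monthly anomaly series, with the slope scaled from °C/year to °C/century. It assumes a plain numeric vector of monthly anomalies; the commented usage line assumes the monthave array built in the code below, where year index 18:33 corresponds to 1997-2012.
# Sketch only: OLS trend of a monthly anomaly series, in degrees C per century
trend_per_century = function(anom, startyear) {
 t = startyear + (seq_along(anom) - 0.5)/12   # time in years at month centres
 coef(lm(anom ~ t))[2] * 100                  # slope in C/yr, scaled to C/century
}
# e.g. for the Hier4 (7682 cell) series over 1997-2012, using monthave from the code below:
# trend_per_century(c(monthave[, 18:33, 8]), 1997)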

Method and Code

This is the code for the integration of the monthly sequence. I'll omit the reading of the initial files and the graphics, and assume that we start with the HADCRUT 4.1.1.0 gridded 1980-2012 data reorganised into an array had[72,36,12,33] (lon,lat,month,year). The hexmin[[]] lists are as described and posted previously. The 4 columns of $cells are the cell centres and areas (on the sphere). The first section is just making pointer lists from the anomaly data into the grids, and from each grid into its parent. If you were doing this regularly, you would store the pointers and just re-use them as needed, since they depend only on location data. The results are the gridpointer and datapointer lists. The code takes a few seconds.
monthave=array(NA,c(12,33,8)) #array for monthly averages 
datapointer=gridpointer=cellarea=list(); g=0;
for(i in 1:8){ # making pointer lists for each grid level
 g0=g;  # previous g
 g=as.matrix(hexmin[[i]]$cells); ng=nrow(g);
 cellarea[[i]]=g[,4]
 if(i>1){ # pointers to coarser grid i-1
  gp=rep(0,ng)
  for(j in 1:ng)gp[j]=which.max(g0[,1:3]%*%g[j,1:3])
  gridpointer[[i]]=gp
 }
 y=inv; ny=nrow(y); dp=rep(0,ny) #  y is list of HAD grid centres in 3D cartesian
 for(j in 1:ny) dp[j]=which.max(g[,1:3]%*%y[j,])
 datapointer[[i]]=dp  # datapointers into grid i
}
Update: Note the use of which.max here, which is the key instruction locating points in cells. I had originally used which.min, which actually almost worked, because it places points on the opposite side of the globe, and symmetry nearly makes that OK. But not quite. Although the idea is to minimise the distance, that is implemented as maximising the scalar product.
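To spell out that step: for unit vectors g (a cell centre) and y (a point) on the sphere,

$$\|g-y\|^2 = \|g\|^2 + \|y\|^2 - 2\,g\cdot y = 2 - 2\,g\cdot y,$$

so the centre with the smallest chord distance to y (and hence the smallest angle) is exactly the one with the largest scalar product, which is what which.max picks out.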

The main data loop just loops over months, counting and adding the data in each cell (using datapointer) and forming a cell average. For empty cells it then inherits values from the parent grid's average vector, using gridpointer to find the match, so at each level ave is complete. There is an assumption that the coarsest level has no empty cells. The averages are then combined with area weighting (cellarea, from hexmin) for the monthly average. Then on to the next month. The result is the array monthave[month, year, level] of global averages.
for(I in 1:33)for(J in 1:12){ # looping over months in data from 1980
 if(J==1)print(Sys.time())
 ave=rep(NA,8)  # initialising
 #tab=data.frame(level=ave,Numcells=ave,average=ave)
 g=0
 for(K in 1:8){ # over resolution levels
  ave0=ave
  integrand=c(had[,,J,I+130])  # Set integrand to HAD 4 for the month (the +130 year offset implies had starts in 1850; drop it and use had[,,J,I] if had holds only 1980-2012 as described above)
  area=cellarea[[K]]; 
  cellsum=cellnum=rep(0,length(area))  # initialising
  dp=datapointer[[K]]
  for(i in 1:length(integrand)){  # loop over the HAD grid cells ("stations")
   ii=integrand[i]
   if(is.na(ii))next  # no data in cell
   j=dp[i]
   cellsum[j]=cellsum[j]+ii
   cellnum[j]=cellnum[j]+1
  }
  j=which(cellnum==0) # cells without data
  gp=gridpointer[[K]]
  if(K>1)for(i in j){cellnum[i]=1;cellsum[i]=ave0[gp[i]]}
  ave=cellsum/cellnum # cell averages
  Ave=sum(ave*area)/sum(area) # global average (area-weighted)
  if(is.na(Ave))stop("A cell inherits no data")
  monthave[J,I,K] = round(Ave,4) # weighted average
 }
}# end I,J

Data

Moyhuhexmin has the hex cell data and was given in the earlier post. I have put a new zipped ascii version here.



Thursday, September 28, 2017

Simple use of a complex grid - Earth temperature.

This is a follow-up to my last post, which refined ideas from an earlier post on using platonic solids as a basis for grids on the sphere that were reasonably uniform, and free of the pole singularities of latitude/longitude. I gave data files for use, as I did with an earlier post on a special case, the cubed sphere.

The geometry involved can be complicated, but a point I made in that last post was that users need never deal with the complexity. I gave a minimal set of data for grids of varying resolution, which basically recorded the mid-points of the cells, and their area. That is all you need to make use of them.

I should add that I don't think the hexagon method I recommend is a critical improvement over, say, the cubed sphere method. Both work well. But since this method of application is the same for any variant, just using cell centres and areas in the same way, there is no cost in using the optimal.

In this post, I'd like to demonstrate that with an example, with R code for definiteness. I'd also like to expand on the basic idea, which is that near-regular grids of any complexity have the Voronoi property: cells are the domain of points closest to their mid-points. That is why mid-point location is sufficient information. I can extend that to embedding grids in grids of lower resolution; I will recommend a method of hierarchical integration in which empty cells inherit estimates from the most refined grid that has information for their area. I think this is the most logical answer to the empty cell problem.

In the demonstration, I will take the inventory of stations that I use for TempLS. It has all GHCN V3 stations together with a selection of ERSST cells, treated as stations located at grid centres. It has 10997 locations. I will show how to bin these, and use the result to do a single integration of data on those points.

I start with calling the data for the inventory ("invo.sav") (posted in the dataset for cubed sphere above). Then I call the Moyhuhexmin data that I posted in the last post. I am going to do the integration over all 8 resolution levels, so I loop over variable K, collecting results in ave[]:
load("invo.sav")
load("Moyhuhexmin.sav")
ave=rep(NA,8) # initialising
for(K in 1:8){
 h=hexmin[[K]]  # dataframe for level K
 g=as.matrix(h$cells) # 3D coords of centres, and areas
 y=invo$z; n=nrow(y);  # invo$z are stations; 
 pointer=rep(0,n)
This is just gathering the information. g and y are the two sets of 3D cartesian coordinates on the sphere to work with. Next I locate y in the cells which have centre g:
 pointer=rep(0,n)
 for(i in 1:n) pointer[i]=which.max(g[,1:3]%*%y[i,]) # closest centre = largest scalar product (corrected from which.min)
If this were a standalone calculation, I wouldn't have done this as a separate loop. But the idea is that, once I have found the pointers, I would  store them as a property of the stations, and never have to do this again. Not that it is such a pain; although I discussed last time a multi-stage process, first identifying the face and then searching that subset, in fact with near 11000 nodes and 7682 cells (highest resolution), the time taken is still negligible - maybe 2 seconds on my PC.

Now to do an actual integration. I'll use a simple known function, where one would normally use temperature anomalies assigned to a subset of stations y. I'll use the y-coordinate in my 3D system, which is sin(latitude), and since that has zero integral, I'll integrate its square. The answer should be 1/3.
integrand=y[,2]^2
area=g[,4]; 
cellsum=cellnum=rep(0,nrow(g))  # initialising
for(i in 1:n){
 j=pointer[i]
 cellsum[j]=cellsum[j]+integrand[i]
 cellnum[j]=cellnum[j]+1
}
area[] is just the fourth column of data from hexmin; it is the area of each cell on the sphere. cellsum[] will be the sum of integrand values in the cell, and cellnum[] the count (for averaging). This is where the pointers are used. The final stage is the weighted summation:
o=cellnum>0 # cells with data
ave[K] = sum(cellsum[o]*area[o]/cellnum[o])/sum(area[o]) # weighted average
} # end of K loop
This is, I hope, fairly obvious R stuff. o[] marks cells with data which are the only ones included in the sum. area[o] are the weights, and to get the averages I divide by the sum of weights. This is just conventional grid averaging.

Integration results

Here are the results of grid integration of sin^2(lat) at various resolution levels:

Level   Number of cells   Average
1       32                0.3292
2       122               0.3311
3       272               0.3275
4       482               0.3256
5       1082              0.3206
6       1922              0.3167
7       3632              0.3108
8       7682              0.3096
The exact answer is 1/3. This was reached at fairly coarse resolution, which is adequate for this very smooth function. At finer resolution, empty cells are an increasing problem. Simple averaging ignoring empty cells effectively assigns to those cells the average of the rest. Because the integrand has peak value 1 at the poles, where many cells are empty, those cells are assigned a value of about 1/3, when they really should be 1. That is why the integral diminishes with increasing resolution. It is also why the shrinkage tapers off, because once most cells in the region are empty, further refinement can't make much difference.

Empty cells and HADCRUT

This is the problem that Cowtan and Way 2013 studied with HADCRUT. HADCRUT averages hemispheres separately, so they effectively infill empty cells with hemisphere averages. But Arctic areas especially are warming faster than average, and HADCRUT tends to miss this. C&W tried various methods of interpolating, particularly polar values, and got what many thought was an improvement, more in line with other indices. I showed at the time that just averaging by latitude bands went a fair way in the same direction.

With the new grid methods here, that can be done more systematically. The Voronoi-based matching can be used to embed grids in grids of lower resolution, but with fewer empty cells. Integration can be done starting with a coarse grid, and then going to higher resolution. Infilling of an empty cell can be done with the best value from the hierarchy.

I use an alternative diffusion-based interpolation as one of the four methods for TempLS. It works very well, and gives results similar to the other two of the three best methods (node-based triangular mesh and spherical harmonics). I have tried variants of the hierarchical method, with similar effect.

Next

In the next post, I will check out the hierarchical method applied to this simple example, and also to the HADCRUT4 gridded version. I'm hoping for a better match with Cowtan and Way.









Tuesday, September 26, 2017

The best grid for Earth temperature calculation.

Earlier this month, I wrote about the general ideas of gridding, and how the conventional latitude/longitude grids were much inferior to grids that could be derived from various platonic solids. The uses of gridding in calculating temperature (or other field variables) on a sphere are
  1. To gather like information into cells of known area
  2. To form an area weighted sum or average, representative of the whole sphere
  3. A necessary requirement is that it is feasible to work out in which cell an arbitrary location belongs
So a good grid must have cells small enough that variation within them has little effect on the result ("like"), but large enough that they do significant gathering. It is not much use having a grid where most cells are empty of data. This leads to two criteria for cells that balance these needs:
  • The cells should be of approximately equal area.
  • The cells should be compact, so that cells of a given area can maximise "likeness".
Lat/lon fails because:
  • cells near poles are much smaller
  • the cells become long and thin, with poor compactness
I showed platonic solid meshes with triangles and squares that are much less distorted, and with a more even area distribution. Clive Best, too, has been looking at icosahedra. I have also been looking at ways of improving area uniformity. But I haven't been thinking much about compactness. The ideal there is a circle. Triangles deviate most; rectangles are better, if nearly square. But better still are regular hexagons, and that is my topic here.

With possibly complex grids, practical usability is important. You don't want to keep having to deal with complicated geometry. With the cubed sphere, I posted here a set of data which enables routine use with just lookup. It includes a set of meshes with resolution increasing by factors of 2. The nodes have been remapped to optimise area uniformity. There is a lookup process so that arbitrary points can be assigned to cells. But there is also a list showing in which cell the stations of the inventory that I use are found. So although the stations that report vary each month, there is a simple geometry-free grid average process:
  • For each month, sort the stations by cell label
  • Work out cell averages, then look up cell areas for weighted sum.
I want to do that here for what I think is an optimal grid.

The optimal grid is derived from the triangle grid for icosahedra, although it can also be derived from the dual dodecahedron. If the division allows, the triangles can be gathered into hexagons, except near vertices of the icosahedron, where pentagons will emerge. This works provided the triangle faces are initially trisected, and then further divided. There will be 12 pentagons, and the rest hexagons. I'll describe the mapping for uniform sphere surface area in an appendix.

Lookup

I have realised that the cell finding process can be done simply and generally. Most regular or near-regular meshes are also Voronoi nets relative to their centres. That is, a cell includes the points closest to its centre, and not those closer to any other centre. So you can find the cell for a point by simply looking for the closest cell centre. For a sphere that is even easier; it is the centre for which the scalar product (cos angle) of 3D coordinates is greatest.

If you have a lot of points to locate, this can still be time-consuming, if mechanical. But it can be sped up. You can look first for the closest face centre (of the icosahedron). Then you can just check the cells within that face. That reduces the time by a factor of about 20.
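Here is a minimal sketch of that two-stage lookup. The names facecentres and facecells are hypothetical stand-ins for whatever face-level data is stored (the posted dataset includes a listing of the cells within each face); cellcentres is the matrix of cell centre unit vectors, and p is the point to locate.
# Sketch only; facecentres and facecells are assumed names, not those in the posted files
cell_of = function(p, cellcentres) which.max(cellcentres %*% p)  # brute force: largest scalar product
cell_of_fast = function(p, cellcentres, facecentres, facecells) {
 f = which.max(facecentres %*% p)          # nearest icosahedron face centre
 j = facecells[[f]]                        # candidate cells within that face
 j[which.max(cellcentres[j, , drop=FALSE] %*% p)]
}
# For a point very near a face boundary the restricted search could in principle
# differ from the brute-force answer; cell_of is the safe reference.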

The grids

Here is a WebGL depiction of the results. I'm using the WebGL facility, V2.1. The sphere is a trackball. You can choose the degree of resolution with the radio buttons on the right; hex122, for example, means a total of 122 cells. They progress with factors of approx 2. The checkboxes at the top let you hide various objects. There are separate objects for red, yellow and green, but if you hide them all, you see just the mesh. The colors are designed to help see the icosahedral pattern. Pentagons are red, surrounded by a ring of yellow.



The grid imperfections now are just a bit of distortion near the pentagons. This is partly because I have forced them to expand to have similar area to the hexagons. For grid use, the penalty is just a small loss of compactness.

Data

The data is in the form of a R save file, for which I use the suffix .sav. There are two. One here is a minimal set for use. It includes the cell centre locations, areas, and a listing of the cells within each face, for faster search. That is all you need for routine use. There is a data-frame with this information for each of about 8 levels of resolution, with maximum 7682 cells (hex7682). There is a doc string.

The longer data set is here. This has the same levels, but for each there are dataframes for cells, nodes, and the underlying triangular mesh. A dataframe is just R for a matrix that can have columns of various types, suitably labelled. It gives all the nodes of the triangular mesh, with various details. There are pointers from one set to another. There is also a doc string with details.

Appendix - equalising area

As I've occasionally mentioned, I've spent a lot of time on this interesting math problem. The basic mapping from platonic solid to sphere is radial projection. But that distorts areas that were uniform on the solid. Areas near the face centres are projected further (thinking of the solid as within the sphere) and grow. There is also, near the edges, an effect due to the face plane slanting differently to the sphere (like your shadow gets smaller when the sun is high). These distortions get worse when the solid is further from spherical.

I counter this with a mapping which moves the mesh on the face towards the face centre. I initially used various polynomials. But now I find it best to group the nodes by symmetry - subsets that have to move in step. Each has one (if on edge) or two degrees of freedom. Then the areas are also constrained by symmetry, and can be grouped. I use a Newton-Raphson method (actually secant) to move the nodes so that the triangles have area closest to the ideal, which is the appropriate fraction of the sphere. There are fewer degrees of freedom than areas, so it is a kind of regression calculation. It is best least squares, not exact. You can check the variation in areas; it gets down to a few percent.



















Tuesday, September 19, 2017

GISS August up 0.01°C from July.

GISS showed a very small rise, going from 0.84°C in July to 0.85°C in August (GISS report here). TempLS mesh showed a very slight fall, which I posted at 0.013°C, although with further data it is now almost no change at all. I see that GISS is now using ERSST V5, as TempLS does.

The overall pattern was similar to that in TempLS. Warm almost everywhere, with a big band across mid-latitude Eurasia and N Africa. Cool in Eastern US and the high Arctic, which may be responsible for the slowdown in ice melting.

As usual here, I will compare the GISS and previous TempLS plots below the jump.

Monday, September 18, 2017

Grids, Platonic solids, and surface temperature (and GCMs)

This follows a series, predecessor here, in which I am looking at ways of dealing with surface variation of Earth's temperature, particularly global averaging. I have written a few posts on the cubed sphere, eg here. I have also shown some examples of using an icosahedron, as here. Clive Best has been looking at similar matters, including use of icosahedron. For the moment, I'd like to write rather generally about grids and ways of mapping the sphere.

Why grids?

For surface temperature, grids are commonly used to form local averages from data, which can then be combined with area weighting to average globally. I have described the general considerations here. All that is really required is any subdivision of reasonable (compact) shapes. They should be small enough that the effect of variation within is minimal, but large enough that there is enough data for a reasonable estimate. So they should be of reasonably equal area.

The other requirement, important later for some, is that any point on the sphere can be associated with the cell that contains it. For a regular grid like lat/lon, this is easy, and just involves conversion to integers. So if each data point is located, and each cell area is known, that is all that is needed. As a practical matter, once the cell locations are known for an inventory of stations, the task of integrating the subset for a given month is just a look-up, whatever the geometry.

I described way back a fairly simple subdivision scheme that works for this. Equal latitude bands are selected. Then each band is divided as nearly as possible into square elements (on the sphere). The formula for this division can then be used to locate arbitrary points within the cells. I think this is all that is required for surface averaging.
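As an illustration of that kind of scheme (my own guess at the details, not necessarily the exact division used): equal-width latitude bands, each split into roughly square cells, with the cell of a point found by simple rounding.
# Sketch of an equal-latitude-band grid with near-square cells (assumed details)
band_cell = function(lat, lon, dlat=5) {
 nband = round(180/dlat)
 band = pmin(floor((lat + 90)/dlat) + 1, nband)           # latitude band, clamped at the pole
 midlat = (band - 0.5)*dlat - 90                          # band centre latitude
 nlon = max(1, round(360*cos(midlat*pi/180)/dlat))        # near-square cells in this band
 cell = pmin(floor((lon %% 360)/360*nlon) + 1, nlon)      # cell within the band
 c(band=band, cell=cell, ncells=nlon)
}
# band_cell(0, 123) puts the point in one of 72 equatorial cells; band_cell(-89, 123) in one of 3 polar cells.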

However, for anything involving partial differentiation, such as finite element or GCM modelling, more is required. Fluxes between cells need to be measured, so they have to line up. Nodes have to be related to each cell they abut. This suggests regular grids. In my case, I sometimes want to use a kind of diffusive process to estimate what is happening in empty cells. Again, regular is better.

Platonic solids

Something that looks a bit like a sphere and is easy to fit with a regular grid is a Platonic solid. There are five of them - I'll show the Wiki diagram:



Regular means that each side has the same length, and each face is a congruent regular polygon. The reason why there are only five is seen if you analyse what has to happen at vertices (Wiki):

Friday, September 8, 2017

August global surface temperature down 0.013°C

TempLS mesh anomaly (1961-90 base) was down from 0.69°C in July to 0.677°C in August. This very small drop compares with the small rise of 0.038°C in the NCEP/NCAR index, and a bigger rise (0.12) in the UAH LT satellite index. The August value is less than August 2015 or 2016, but higher than 2014.

There was a moderate fall in Antarctica, which as usual affects TempLS mesh and GISS more than others. I'd expect NOAA and HADCRUT to show increases for August. Regionally, the Old World was mostly warm; the US was cold central and East, but N Canada was warm. S America mostly warm (still awaiting a few countries):


Thursday, September 7, 2017

August NCEP/NCAR global anomaly up 0.038°C

In the Moyhu NCEP/NCAR index, the monthly reanalysis average rose from 0.299°C in July to 0.337°C in August, 2017. The results were late this month; for a few days NCEP/NCAR was not posting new results. It was a very up and down month; a dip at the start, then quite a long warm period, and then a steep dip at the end. Now that a few days in September are also available, there is some recovery from that late dip. August 2017 was a bit cooler than Aug 2016, but warmer than 2015.

It was cool in Eastern US, but warm in the west and further north. Cool in Atlantic Europe, but warm further east. Mostly cool in Antarctica.







Wednesday, August 30, 2017

Gulf SST - warm before Harvey, cool after.

I maintain a page showing high resolution (1/4 degree) AVHRR SST data from NOAA - in detail: "NOAA High Resolution SST data provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their Web site at http://www.esrl.noaa.gov/psd/". It renders it in WebGL and goes back with daily maps for a decade or so, then less frequently. It shows anomalies relative to 1971-2000. I have been tracking the effect of Hurricane Harvey. It was said to have grown rapidly because of warm Gulf waters; they were warm, but not exceptionally, as this extract from 15 August shows:



It remained much the same to 24th August, when Harvey grew rapidly, and gained Hurricane status late in the day. But by the 25th, there is some sign of cooling. The 26th (not shown) was about the same. But by the 27th, there was marked cooling, and by the 28th more so. The cooling seems to show up rather belatedly along the path of the hurricane.



Here is the latest day at higher resolution:



A few years ago, I developed a set of movies based on recent hurricanes of the time, showing their locations and SST at the time. Some showed a big effect, some not so much. Harvey was interesting in that it covered a fairly confined area of ocean, and moved slowly.



Thursday, August 24, 2017

Surface temperature sparsity error modes

This post follows last week's on temperature integration methods. I described a general method of regression fitting of classes of integrable functions, of which the most used to date is spherical harmonics (SH). I noted that the regression involved inverting a matrix HH consisting of all the scalar product integrals of the functions in the class. With perfect integration this matrix would be a unit matrix, but as the SH functions become more oscillatory, the integration method loses resolution, and the representation degrades with the condition number of the matrix HH. The condition number is the ratio of largest eigenvalue to smallest, so what is happening is that some eigenvectors become small, and the matrix is near singular. That means that the corresponding eigenvector might have a large multiplier in the representation.
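In symbols (my notation, making the implied step explicit): the month's anomalies are fitted as a combination of spherical harmonics, and the least-squares normal equations involve exactly those scalar product integrals.

$$T(x) \approx \sum_j b_j Y_j(x), \qquad \min_b \sum_i w_i\Big(T_i - \sum_j b_j Y_j(x_i)\Big)^2 \;\Rightarrow\; HH\,b = r,$$
$$HH_{jk} = \sum_i w_i\,Y_j(x_i)\,Y_k(x_i) \;\approx\; \int Y_j\,Y_k\,dA, \qquad r_j = \sum_i w_i\,Y_j(x_i)\,T_i,$$

where the w_i are the integration weights of whichever method is used. With orthonormal harmonics and exact integration HH would be the unit matrix; the condition number of the actual HH measures how far the discrete integration falls short.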

I also use fitted SH for plotting each month's temperature. I described some of the practicalities here (using different functions). Increasing the number of functions improves resolution, but when HH becomes too ill-conditioned, artefacts intrude, which are multiples of these near null eigenvectors.

In the previous post, I discussed how the condition of HH depends on the scalar product integral. Since the SH are ideally orthogonal, better integration improves HH. I have been taking advantage of that in recent TempLS to increase the order of SH to 16, which implies 289 functions, using mesh integration. That might be overdoing it - I'm checking.

In this post, I will display those troublesome eigen modes. They are of interest because they are associated with regions of sparse coverage, and give a quantification of how much they matter. Another thing quantified is how much the integration method affects the condition number for a given order of SH. I'll develop that further in another post.

I took N=12 (169 functions), and looked at TempLS stations (GHCN+ERSST) which reported in May 2017. Considerations on choice of N are that if too low, the condition number is good, and the minimum modes don't show features emphasising sparsity. If the number is too high, each region like Antarctica can have several split modes, which confuses the issue.

The integration methods I chose were mostly described here
  • OLS - just the ordinary scalar product of the values
  • grid - integration by summing on a 5x5° latitude/longitude grid. This was the earliest TempLS method, and is used by HADCRUT.
  • infill - empty cells are infilled with an average of nearby values. Now the grid is a cubed sphere with 1536 cells
  • mesh - my generally preferred method using an irregular triangular grid (convex hull of stations) with linear interpolation.
OLS sounds bad, but works quite well at moderate resolution, and was used in TempLS until very recently.

I'll show the plots of the modes as an active lat/lon plot below, and then the OLS versions in WebGL, which gives a much better idea of the shapes. But first I'll show a table of the tapering eigenvalues, numbering from smallest up. They are scaled so that the maximum is 1, so the reciprocal of the lowest is the condition number.
          OLS      grid     infilled   mesh
Eigen1    0.0211   0.0147   0.0695     0.135
Eigen2    0.0369   0.0275   0.138      0.229
Eigen3    0.0423   0.0469   0.212      0.283
Eigen4    0.0572   0.0499   0.244      0.329
Eigen5    0.084    0.089    0.248      0.461
Eigen6    0.104    0.107    0.373      0.535
Eigen7    0.108    0.146    0.406      0.571
Eigen8    0.124    0.164    0.429      0.619
And here is a graph of the whole sequence, now largest first:


The hierarchy of condition numbers is interesting. I had expected that it would go in the order of the columns, and so it does until near the end. Then mesh drops below infilled grid, and OLS below grid, for the smallest eigenvalues. I think what determines this is the weighting of the nodes in the sparse areas. For grid, this is not high, because each just gets the area of its cell. For both infilled and mesh, the weight rises with the area, and apparently with infilled, more so.

Thursday, August 17, 2017

Temperature averaging and integration - the basics

I write a lot about spatial integration, which is at the heart of global temperature averaging. I'll write more here about the principles involved. But I'm also writing to place in context methods I use in TempLS, which I think are an advance on what is currently usual. I last wrote a comparison of methods in 2015 here, which I plan to update in a sequel. Some issues here arose in a discussion at Climate Audit.

It's a long post, so I'll include a table of contents. I want to start from first principles and make some math connections. I'll use paler colors for the bits that are more mathy or that are just to make the logic connect, but which could be skipped.

Wednesday, August 16, 2017

GISS July up 0.15°C from June.

GISS was up from 0.68°C in June to 0.83°C in July. It was the warmest July in the record, though the GISS report says it "statistically tied" with 2016 (0.82). The increase was similar to the 0.12°C rise in TempLS.

The overall pattern was similar to that in TempLS. Warm almost everywhere, with a big band across mid-latitude Eurasia and N Africa. Cool in parts of the Arctic, which may save some ice.

I'll show the plot of recent months on the same 1981-2010 base, mainly because they are currently unusually unanimous. The group HADCRUT/NOAA/TempLS_grid tend to be less sensitive to the Antarctic variations that have dominated recent months, and I'd expect them to be not much changed in July, which would leave them in much the same place.



Recently, August reanalysis has been unusually warm. As usual here, I will compare the GISS and previous TempLS plots below the jump.

Tuesday, August 8, 2017

July global surface temperature up 0.11°C

TempLS mesh anomaly (1961-90 base) was up from 0.568°C in June to 0.679°C in July. This follows the smaller rise of 0.06°C in the NCEP/NCAR index, and a similar rise (0.07) in the UAH LT satellite index. The July value is just a whisker short of July 2016, which was a record warm month. With results for Mexico and Peru still to come, that could change.

Again the dominant change was in Antarctica, from very cold in June to just above average in July. On this basis, I'd expect GISS to also rise; NOAA and HADCRUT not so much. Otherwise as with the reanalysis, Middle East and around Mongolia were warm, also Australia and Western USA. Nowhere very hot or cold. Here is the map:



Thursday, August 3, 2017

July NCEP/NCAR up 0.058°C

In the Moyhu NCEP/NCAR index, the monthly reanalysis average rose from 0.241°C in June to 0.299°C in July, 2017. This is lower than July 2016 but considerably higher than July 2015. The interesting point was a sudden rise on about July 24, which is responsible for all the increase since June. It may be tapering off now.

It was generally warm in temperate Asia and the Middle East, and even Australia. Antarctica was mixed, not as cold as June. The Arctic has been fairly cool.





Saturday, July 22, 2017

NOAA's new ERSST V5 Sea surface temperature and TempLS

The paper describing the new version V5 of ERSST has been published in the Journal of Climate. The data is posted, and there is a NOAA descriptive page here. From the abstract of the (paywalled) paper, by Huang et al:
This update incorporates a new release of ICOADS R3.0, a decade of near-surface data from Argo floats, and a new estimate of centennial sea-ice from HadISST2. A number of choices in aspects of quality control, bias adjustment and interpolation have been substantively revised. The resulting ERSST estimates have more realistic spatio-temporal variations, better representation of high latitude SSTs, and ship SST biases are now calculated relative to more accurate buoy measurements, while the global long-term trend remains about the same.
A lot of people have asked about including ARGO data, but it may be less significant than it seems. ARGO floats only come to the surface once every ten days, while the more numerous drifter buoys are returning data all the time. There was a clamor for the biases to be calculated relative to the more accurate buoys, but as I frequently argued, as a matter of simple arithmetic it makes absolutely no difference to the anomaly result. And sure enough, they report that it just reduces all readings by 0.077°C. That can't affect trends, spatial patterns etc.
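The arithmetic is worth making explicit: if every reading is shifted by the same constant c (ships adjusted to buoys rather than the reverse), the anomaly is untouched, because the base-period mean shifts by the same amount:

$$(T_i - c) - \overline{(T - c)}_{\text{base}} = T_i - \overline{T}_{\text{base}}.$$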

The new data was not used for the June NOAA global index, nor for any other indices that I know of. But I'm sure it will be soon. So I have downloaded it and tried it out in TempLS. I have incorporated it in place of the old V3b. So how much difference does it make? The abstract says
Furthermore, high latitude SSTs are decreased by 0.1°–0.2°C by using sea-ice concentration from HadISST2 over HadISST1. Changes arising from remaining innovations are mostly important at small space and time scales, primarily having an impact where and when input observations are sparse. Cross-validations and verifications with independent modern observations show that the updates incorporated in ERSSTv5 have improved the representation of spatial variability over the global oceans, the magnitude of El Niño and La Niña events, and the decadal nature of SST changes over 1930s–40s when observation instruments changed rapidly. Both long (1900–2015) and short (2000–2015) term SST trends in ERSSTv5 remain significant as in ERSSTv4.
The sea ice difference may matter most - this is a long standing problem area in incorporating SST in global measures. On the NOAA page, they show a comparison graph:



There are no obvious systematic trend differences. The most noticeable change is around WWII, which is a bit of a black spot for SST data. A marked and often suspected peak around 1944 has diminished, with a deeper dip around 1942.

TempLS would be expected to reflect this, since most of its data is SST. Here is the corresponding series for TempLS mesh plotted:



Global trends (in °C/century) are barely affected. Reduced slightly in recent decades, increased slightly since 1900:

Start year   End year   TempLS with V4   TempLS with V5
1900         2016       0.769            0.791
1940         2016       0.978            0.974
1960         2016       1.489            1.465
1980         2016       1.631            1.607

Almost identical behaviour is seen with TempLS grid.







Thursday, July 20, 2017

NOAA global surface temperature down just 0.01°C

Down from 0.83°C in May to 0.82°C in June (report here). I don't normally post separately about NOAA, but here I think the striking difference from GISS/TempLS mesh is significant. GISS went down 0.19°C, and TempLS mesh by 0.12°C. But TempLS grid actually rose, very slightly. I have often noted the close correspondence between NOAA and TempLS grid (and the looser one between TempLS mesh and GISS) and attributed the difference to GISS etc having better coverage of the poles.

This month, the cause of that difference is clear, as is the relative coolness of June in GISS. With TempLS reports, I post a breakdown of the regional contributions. These are actual contributions, not just average temperature. So in the following:



you see that the total dropped by about 0.12°C, while Antarctica dropped from contributing 0.07°C to -0.07°C, a difference that slightly exceeded the global total drop of 0.12°C.

That doesn't mean that, but for Antarctica, there would have been no cooling. May had been held up by the relative Antarctic warmth. But it is a further illustration of the difference between the interpolative procedures of GISS and TempLS and the cruder grid-based processes of NOAA and TempLS grid. I would probably have abandoned TempLS grid, or at least replaced it with a more interpolative version (post coming soon), if it were not for the correspondence with NOAA and HADCRUT.

Update: I see that the paper for ERSST V5 has just been published in J Climate. I'll post about that very soon, and also, maybe separately, give an analysis of its effect in TempLS. I see also that NOAA was still using V4 for June; I assume they will use V5 for July, as I expect I will. The NOAA ERSST V5 page is here.

Here is the NOAA map for the month. You can see how the poles are missing.





Saturday, July 15, 2017

GISS June down 0.19°C from May.

GISS was down from 0.88°C in May to 0.69°C in June. The GISS report is here; they say it was the fourth warmest June on record. The drop was somewhat more than the 0.12°C in TempLS. The most recent month that was cooler than that was November 2014.

The overall pattern was similar to that in TempLS. The big feature was cold in Antarctica, to which both GISS and TempLS mesh are sensitive, more so than HADCRUT or NOAA. Otherwise, as with TempLS, it was warm in Europe, extending through Africa and the Middle East, and also through the Americas. Apart from Antarctica, the main cold spot was NW Russia.

So far, July is also cold, although with some signs of warming a little from June. As usual, I will compare the GISS and previous TempLS plots below the jump.

Saturday, July 8, 2017

June global surface temperature down 0.12°C

TempLS mesh was down from 0.704°C in May to 0.586°C in June. This follows the slightly larger fall of 0.16°C in the NCEP/NCAR index, and falls in the satellite indices, which had risen in May. The June anomaly (1961-90 base) is now a little below mid-2015 values, and is the coolest month since Nov 2014. In fact, it is similar to the 2014 annual average, which was still a record in its day.

The big turnaround was in the Antarctic, which went from quite warm to very cool. This shows up in the contrast with the TempLS grid values, which are less sensitive to the poles; TempLS grid actually warmed. This pattern tends to be reflected in the main indices, with GISS generally picking up the polar changes; NOAA and HADCRUT less so. Otherwise, as with the reanalysis, Europe was warm, NW Russia cold, the Arctic neutral, with warm spots in the Americas. Here is the map, and below it the breakdown, which emphasises the Antarctic turnaround:



Breakdown plot:





Tuesday, July 4, 2017

New RSS TLT V4 - comparisons

As mentioned in my previous post, RSS has a new V4 TLT out - announcement here. I'm now using it in place of V3.3. The J Climate paper describing it is here:

A satellite-derived lower tropospheric atmospheric temperature dataset using an optimized adjustment for diurnal effects

Carl A. Mears and Frank J. Wentz
Remote Sensing Systems, 444 Tenth Street, Santa Rosa, CA, 95401

I quoted from the abstract in my previous post.

The changes are described in those links, and are not surprising, given the previous datasets (eg TMT, TTT) that have come out in V4. I thought here I would just show a comparison of recent changes in both UAH and RSS - they are rather complementary. In the graph below, I have converted RSS from 1979-1999 to the UAH base of 1981-2010. I use reddish for UAH, bluish for RSS (12 month running mean):



The effect of the change is clearer if a common measure is subtracted - I use the average of the four sets here for that:



Now you can see what has happened. RSS TLT V4 is close to UAH V5.6, and UAH V6 is close to the old RSS V3.3 (which RSS described as having a known cooling bias). As they noted, the new RSS V4 shows more uniformity over time. The overall picture is that TLT measures are not stable; much less so than surface measures, as I noted here.

Contrary to some (mainly sceptic) opinion, satellite measures are not naturally superior. Measuring the temperature at various levels of the troposphere is a worthwhile endeavour, but it is not a substitute for surface. In fact, I think TLT has had undeserved prominence, and I rather thought RSS should drop it altogether. It is an attempt to get as close to surface as possible, but it isn't very close, and sacrifices much reliability in trying to get there. I notice that John Christy now usually quotes UAH TMT.

The reason for loss of reliability is that the MSU is trying to make deductions from a microwave signal which is a mix of various layers in the troposphere, with a large background noise generated at the surface. It is hard to discriminate, and harder as you try to see closer to the surface. They try to get around this by taking two measures designed for higher levels (TMT and, for UAH, a tropopause level TP), and forming a linear combination which is designed to subtract out the higher troposphere and stratosphere levels. But as with any such differencing, errors increase.
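A rough illustration of why the differencing amplifies error, with made-up weights rather than the actual channel coefficients: if the lower-troposphere estimate is formed as

$$\hat T_{LT} = a\,T_{MT} - b\,T_{TP}, \qquad a - b = 1,$$

and the channel errors are independent with standard deviation σ, the error of the combination is

$$\sigma_{LT} = \sigma\sqrt{a^2 + b^2},$$

so with, say, a = 1.5 and b = 0.5 the noise is amplified by a factor of about 1.6 relative to a single channel.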

People have the idea that satellites just have to be better, because they can survey the whole Earth with one instrument. But that is far from true. The downsides are described in this UAH overview and the various RSS papers, and include:
  • There is only one instrument, or at most a few, while at the surface there are thousands, creating lots of redundancy. One consequence is that with satellites there is a big problem with the inevitable changeovers. Surface stations need some adjustment when the instruments or environments change, but that is minor compared with changing the whole instrument base every few years.
  • The instrument doesn't read a thermometer at every level. It has to resolve a mixed incoming microwave beam, confounded with surface noise. You can get some resolution with frequency bands, and a little more with differing angles of view. But it is really squinting, and in the end you have to solve an inverse problem, which takes adventurous mathematics.
  • The instrument gives a snapshot just twice a day. At the surface, even the old min/max thermometers, though read only once, continuously monitored the min and max for 24 hours, and of course now we have thousands of stations recording at high frequency. A problem with twice a day is that you have to make adjustments for what time of day it is, because of diurnal variation. And that diurnal pattern depends on the level (not clearly known), season etc. A hard enough problem, but the big one is
  • diurnal drift. It isn't the same time every day, due to orbit changes, and they seem to have trouble deciding exactly what time it is. Roy Spencer says of V6:
    For example, years ago we could use certain AMSU-carrying satellites which minimized the effect of diurnal drift, which we did not explicitly correct for. That is no longer possible, and an explicit correction for diurnal drift is now necessary. The correction for diurnal drift is difficult to do well, and we have been committed to it being empirically–based, partly to provide an alternative to the RSS satellite dataset which uses a climate model for the diurnal drift adjustment.
  • It is a long standing bugbear, and much of the RSS change also seems to be in the drift correction. From their paper abstract:
    Previous versions of this dataset used general circulation model output to remove the effects of drifting local measurement time on the measured temperatures. In this paper, we present a method to optimize these adjustments using information from the satellite measurements themselves. The new method finds a global-mean land diurnal cycle that peaks later in the afternoon, leading to improved agreement between measurements made by co-orbiting satellites.

Those are just some of the problems which lead to such large version changes.

Update: From a tweet from Carl Mears, here is a useful FAQ on the changes.


Further: David asked below about comparison with radiosondes. That FAQ has a diagram showing the comparison:



It is sat - sondes, so when you see in this century that the plot goes down, it means that radiosondes are showing more warming than satellites. With UAH V6.0 it is a lot more; with RSS TLT V4 it is closer, but sondes still show more warming. As the FAQ says:

"Note that all satellite data warm relative to radiosondes before about 2000, and then cool after about 2000. We don't know if this overall pattern is due to problems with the radiosonde data, with the satellite data or (most likely) both."


Monday, July 3, 2017

June NCEP/NCAR down 0.16°C

In the Moyhu NCEP/NCAR index, the monthly reanalysis average fell from 0.40°C in May to 0.241°C in June, 2017. This makes it the coolest month for nearly two years - since 0.164°C in July 2015. Even so, it was still the third warmest June in the record for that index, though I recommend caution in comparing values across decades, because of lack of homogeneity. It was only just behind 2013 (0.249) for second place. It's the first time for nearly two years that a month fell behind an earlier corresponding month other than 2016.

The main cool spot was Antarctica, and a further reason for the drop was that the Arctic fell back to average, with Siberia mixed. Europe was warm.

In other (tropospheric) news, RSS has brought out a V4 version of TLT, described in a J Climate paper by Wentz and Mears here. I'll start using it for this month's reporting. I was actually wondering whether they would, since the trend seems to be more toward quoting TMT and TTT. As has been the pattern with V4, the low trend that RSS V3.3 showed until recently, which gave rise to umpteen pause stories, has come closer to other records, mainly, they say, due to a revised diurnal correction. Here is their abstract:

A satellite-derived lower tropospheric atmospheric temperature dataset using an optimized adjustment for diurnal effects

Carl A. Mears and Frank J. Wentz
Remote Sensing Systems, 444 Tenth Street, Santa Rosa, CA, 95401

Temperature sounding microwave radiometers flown on polar-orbiting weather satellites provide a long-term, global-scale record of upper-atmosphere temperatures, beginning in late 1978 and continuing to the present. The focus of this paper is a lower-tropospheric temperature product constructed using measurements made by the Microwave Sounding Unit channel 2, and the Advanced Microwave Sounding Unit channel 5. The temperature weighting functions for these channels peak in the mid to upper troposphere. By using a weighted average of measurements made at different Earth incidence angles, the effective weighting function can be lowered so that it peaks in the lower troposphere. Previous versions of this dataset used general circulation model output to remove the effects of drifting local measurement time on the measured temperatures. In this paper, we present a method to optimize these adjustments using information from the satellite measurements themselves. The new method finds a global-mean land diurnal cycle that peaks later in the afternoon, leading to improved agreement between measurements made by co-orbiting satellites. The changes result in global-scale warming (global trend (70S-80N, 1979-2016) = 0.174 °C/decade), ~30% larger than our previous version of the dataset (global trend, (70S-80N, 1979-2016) = 0.134 °C/decade). This change is primarily due to the changes in the adjustment for drifting local measurement time. The new dataset shows more warming than most similar datasets constructed from satellites or radiosonde data. However, comparisons with total column water vapor over the oceans suggest that the new dataset may not show enough warming in the tropics.


I have updated the data link in the source table.





Tuesday, June 27, 2017

Temperature station distribution - equal area plot

I have been experimenting with maps that are a byproduct of my systematising a cubed sphere grid. I thought it would give a better perspective on the distribution of surface stations and their gaps, especially with the poles. So here are plots of the stations, land and sea, which have reported April 2017 data, as used in TempLS. The ERSST data has already undergone some culling.



It shows the areas in proportion. However, it shows multiple Antarcticas etc, which exaggerates the impression of bare spots, so you have to allow for that. One could try a different projection - here is one focussing on a strip including the Americas:



So now there are too many Africas. However, between them you get a picture of coverage good and bad. Of course, then the question is to quantify the effect of the gaps.





Friday, June 23, 2017

World map equal area projection - more

In my last post, I showed an equal area world map projection that was a by-product of the cubed sphere gridding of the Earth's surface. It was an outline plot, which makes it a bit harder to read. Producing a colored plot was tricky, because the coloring process in R requires an intact loop, which ends where it started, and the process of unfolding the cube onto which the map is initially projected makes cuts.

So I fiddled more with that, and eventually got it working. I'll show the result below. You'll notice more clearly the local distortion near California and Victoria. And it clarifies how stuff gets split up by the cuts marked by blue lines. I haven't shown the lat/lon lines this time; they are much as before.





Monday, June 19, 2017

World map projection using cubed sphere

This post follows on from the previous post, which described the cubed sphere mapping which preserves areas in taking a surface grid from cube to sphere. I should apologise here for messing up the links for the associated WebGL plot for that post. I had linked to a local file version of the master JS file, so while it worked for me, I now realise that it wouldn't work elsewhere. I've fixed that.

If you have an area preserving plot onto the flat surfaces of a (paper) cube, then you only have to unfold the cube to get an equal-area map of the world on a page. It necessarily has distortion, and of course the cuts you make in taking apart the cube. But the area preserving aspect is interesting. So I'll show here how it works.



I've repeated the top and bottom of the cube, so you see multiple poles. Red lines are latitudes, green longitudes. The blue lines indicate the cuts in unfolding the cube, and you should try to not let your eye wander across them, because there is confusing duplication. And there is necessarily distortion near the ends of the lines. But it is an equal area map.

Well, almost. I'm using the single parameter tan() mapping from the previous post. I have been spending far too much time developing almost perfectly 1:1 area mappings. But I doubt they would make a noticeable difference. I may write about that soon, but it is rather geekish stuff.





Saturday, June 17, 2017

Cubing the sphere

I wrote back in 2015 about an improvement on standard latitude/longitude gridding for fields on Earth. That is essentially representing the earth on a cylinder, with big problems at the poles. It is much better to look to a more sphere-like shape, like a platonic solid. I described there a mesh derived from a cube. Even more promising is the icosahedron, and I wrote about that more recently, here and here.

I should review why and when gridding is needed. The original use was in mapping, so you could refer to a square where some feature might be found. The uniform lat/lon grid has a big merit - it is easy to decide which cell a place belongs in (just rounding). That needs to be preserved in any other scheme. Another use is in graphics, where shading or contouring is done. This is a variant of interpolation. If you know some values in a grid cell, you can estimate other places in the cell.

A variant of interpolation is averaging, or integration. You calculate cell averages, then combine them, weighted by area, to get the global average. For this, the cell should be small enough that behaviour within it can be regarded as homogeneous, so that one sample point is reasonably representative of the whole. Of course, the problem is that "small enough" may mean that many cells have no data.
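To make the two steps concrete, here is a minimal sketch in R of cell assignment by rounding and an area-weighted average on a 5°x5° lat/lon grid. It is not the TempLS code; the station data frame and its column names are hypothetical.

```r
# Minimal sketch (not the TempLS code): assign stations to 5x5 degree cells
# by rounding down, average within each cell, then weight cells by area.
grid_average <- function(lat, lon, anom, cellsize = 5) {
  nlon <- 360 / cellsize
  ilat <- pmin(floor((lat + 90) / cellsize), 180 / cellsize - 1)  # cell row
  ilon <- floor((lon + 180) / cellsize) %% nlon                   # cell column
  cell <- ilat * nlon + ilon + 1                                  # single cell index
  cellmean <- tapply(anom, cell, mean)             # mean anomaly per occupied cell
  # area weight ~ cos(latitude of cell centre); cells with no data drop out of both sums
  rowof <- (as.numeric(names(cellmean)) - 1) %/% nlon
  midlat <- (rowof + 0.5) * cellsize - 90
  w <- cos(midlat * pi / 180)
  sum(w * cellmean) / sum(w)
}
# Hypothetical usage, with a data frame 'stations' of one month's anomalies:
# grid_average(stations$lat, stations$lon, stations$anom)
```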

A more demanding use still is in solution of partial differential equations, as in structural engineering or CFD, including climate GCMs. For that, you need to not only know about the cell, but its neighbors.

A cubed sphere is just a regular rectangular grid (think Rubik) on the cube projected, maybe after re-mapping on the cube, onto the sphere. I was interested to see that this is now catching on in the world of GCMs. Here is one paper written to support its use in the GFDL model. Here is an early and explanatory paper. The cube grid has all the required merits. It's easy enough to find the cell that a given place belongs in, provided you have the mapping. And the regularity means that, with some fiddly bits, you can pick out the neighbors. That supported the application that I wrote about in 2015, which resolved empty cells by using neighboring information. As described there, the resulting scheme is one of the best, giving results closely comparable with the triangular mesh and spherical harmonics methods. I called it enhanced infilling.
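As a rough illustration of how "easy enough" the lookup is, here is a sketch of locating the cell containing a given lat/lon point on a plain cubed sphere. It uses the gnomonic projection only, without the remapping on the cube, and an arbitrary face numbering - so it is not the scheme of the files described below.

```r
# Sketch of cubed-sphere cell lookup: project the point onto the enclosing
# cube (divide by the largest coordinate), pick the face, then bin into an
# n x n grid on that face. Face numbering here is arbitrary.
cube_cell <- function(lat, lon, n = 16) {
  d <- pi / 180
  p <- c(cos(lat*d)*cos(lon*d), cos(lat*d)*sin(lon*d), sin(lat*d))  # unit vector
  ax <- which.max(abs(p))                   # dominant axis picks the cube face
  face <- 2 * ax - (p[ax] > 0)              # faces 1..6
  q <- p[-ax] / abs(p[ax])                  # gnomonic projection onto that face, in [-1,1]
  ij <- pmin(floor((q + 1) / 2 * n), n - 1) # row and column of the cell on the face
  (face - 1) * n^2 + ij[1] * n + ij[2] + 1  # global cell number
}
# cube_cell(-37.8, 145.0)   # cell containing a point near Melbourne
```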

I say "easy enough", but I want to make it my routine basis (instead of lat/lon), so that needs support. Fortunately, the grids are generic; they don't depend on problem type. So I decided to make an R structure for standard meshes made by bisection. First the undivided cube, then 4 squares on each face, then 16, and so on. I stopped at 64, which gives 24576 cells. That is the same number of cells as in a 1.6° square mesh, but the lat/lon grid has some cells larger. You have to go to 1.4° to get equatorial cells of the same size.

I'll give more details in an appendix, with a link to where I have posted it. It has a unique cell numbering, with the area of each cell (for weighting), the coordinates of the corners on the sphere, a neighbor structure, and I also give the cell numbers of all the measurement points that TempLS uses. There are also functions for doing the various conversions, from 3D coordinates on sphere to cube, and to cell numbering.


There is also a WebGL depiction of the tessellated sphere, with outline world map, and the underlying cube with and without remapping.

Friday, June 16, 2017

GISS May unchanged from April - second warmest May on record.

As with TempLS, GISS showed May unchanged from April, at 0.88°C. Although that is down from the extreme warmth of Feb-Mar, it is still very warm historically. In fact, it isn't far behind the 0.93°C of May 2016. June looks like being cooler, which reduces the likelihood of 2017 exceeding 2016 overall.

The overall pattern was similar to that in TempLS. A big warm band from N of China to Morocco (hot), with warmth in Europe, and cold in NW Russia. Warm Alaska, coolish Arctic, and Antarctica mixed.

As usual, I will compare the GISS and previous TempLS plots below the jump.

Tuesday, June 13, 2017

Integrating temperature on sparse subgrids

I've been intermittently commenting on a thread on the long-quiet Climate Audit site. Nic Lewis was showing some interesting analysis on the effect of interpolation length in GISS, using the Python version of GISS code that he has running. So the talk turned to numerical integration, with the usual grumblers saying that it is all too complicated to be done by any but a trusted few (who actually don't seem to know how it is done). Never enough data etc.

So Olof chipped in with an interesting observation that with the published UAH 2.5x2.5° grid data (lower troposphere), an 18 point subset was sufficient to give quite good results. I must say that I was surprised at so few, but he gave this convincing plot:



He made it last year, so it runs to 2015. There was much scepticism there, and some aspersions, so I set out to emulate it, and of course, it was right. My plots and code are here, and the graph alone is here.

So I wondered how this would work with GISS. It isn't as smooth as UAH, and the 250 km interpolation is less smooth than the 1200 km. So while 18 nodes (6x3) isn't quite enough, 108 nodes (12x9) is pretty good. Here are the plots:





I should add that this is the very simplest grid integration, with no use of enhanced infilling, which would help considerably. The code is here.
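For anyone who wants the flavour of that simplest integration without reading the posted code, here is a hedged sketch: subsample a regular lat/lon anomaly grid and form a cos(latitude)-weighted mean. The matrix and argument names are hypothetical.

```r
# Sketch of the simplest subgrid integration: take every klat-th row and
# klon-th column of a nlat x nlon anomaly grid and area-weight by cos(lat).
subgrid_mean <- function(anom, lats, klat = 8, klon = 12) {
  ii <- seq(1, nrow(anom), by = klat)     # subsampled latitude rows
  jj <- seq(1, ncol(anom), by = klon)     # subsampled longitude columns
  sub <- anom[ii, jj]
  w <- matrix(cos(lats[ii] * pi / 180), nrow = length(ii), ncol = length(jj))
  sum(w * sub, na.rm = TRUE) / sum(w * !is.na(sub))
}
# On a 2.5 degree grid (72 x 144), the defaults give a 9 x 12 = 108 point
# subset; klat = 24, klon = 24 gives the 18 point (3 x 6) one.
```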

Of course, when you look at a statistic over a longer period, even this small noise fades. Here are the GISS trends over 50 years:

1967-2016 trend (°C/Cen)   Full mesh   108 points   18 points
250 km                         1.658        1.703        1.754
1200 km                        1.754        1.743        1.768


This is a somewhat different problem from my intermittent search for a 60-station subset. There has already been smoothing in gridding. But it shows that the spatial and temporal fluctuations that we focus on in individual maps are much diminished when aggregated over time or space.





Thursday, June 8, 2017

May global temperature unchanged from April

TempLS mesh was virtually unchanged, edging up from 0.722°C to 0.725°C. This follows the smallish rise of 0.06°C in the NCEP/NCAR index, and larger rises in the satellite indices. The May temperature is still warm, in fact, not much less than May 2016 (0.763°C). But it puts 2017 to date now a little below the annual average for 2016.

The main interest is at the poles, where Antarctica was warm, and the Arctic rather cold, which may help retain the ice. There was a band of warmth running from Mongolia to Morocco, and cold in NW Russia. Here is the map:







Saturday, June 3, 2017

May NCEP/NCAR up 0.06°C

So far in 2017, in the Moyhu NCEP/NCAR index, January to March were very warm, but April was a lot cooler. May recovered a little, rising from 0.34 to 0.4°C, on the 1994-2013 anomaly base. This is still warm by historic standards, ahead of all annual averages before 2016, but it diminishes the likelihood that 2017 will be warmer than 2016.

There were few notable patterns of hot and cold - cold in central Russia and the US, but warm in the western US, etc. The Arctic was fairly neutral, which may explain the fairly slow melting of the ice.

Update - UAH lower troposphere V6 rose considerably, from 0.27°C to 0.45°C in May.



Wednesday, May 31, 2017

Page on monthly anomalies in WebGL

Moyhu has had for about four years a maintained page with a WebGL display of temperature anomalies over each month since 1900. The anomalies come from TempLS, and use a 1961-90 base period. It is a color-shaded plot, in which the color is correct at each measurement point, and interpolated for the rest of the triangular mesh. The data used is unadjusted GHCN V3 and ERSST V4. The plot is the best source of detailed information about the current month in its early days.

I have been upgrading these pages (trends described here) to use the new versions of the WebGL facility. That has involved also upgrading the facility, and I'll show the new anomaly plot below the jump. I'll leave the old page in place for a few days.

The main upgrade was to enable on-demand loading of data via XMLHTTPRequest, since it would take far too long to download data for all months. That involved creating selection menus (green block on right). To incorporate this in the facility, I have introduced user functions in the user file, needed to link the menus to URLs for the data. I have taken that further to allow user functions for the color scale and formatting of responses to click queries (you can display data for nodes in the mesh). It's all optional - defaults work as before.

So the plot is below the jump. You can select a year at a time, and the months will show as radio button choices (fast response). I'll describe the new facilities after the plot below.

Friday, May 19, 2017

New local station trends - comments.

Yesterday I posted a new WebGL map of station trends. I'd like to follow up with comments on two topics, both of which follow from a fix to a problem which added noise, and some bias, to the old version. With the clearer picture, I'd like to point out how the trends really do show a quite smooth consistent picture, mostly, even before adjustment. Then I'd like to talk about the exceptions (USA and China) and the effect of homogenisation.

Then (below the jump) I'll talk more about the effect of removing seasonality. It is substantial, and, I think, instructive.

First I'll show Europe - unadjusted on left, adjusted on right. All images here are of the thirty year period from 1987 to 2016. It shows a pattern typical of most of the world, with a large degree of uniform warm trend, with a few exceptions. The cool blob on the left, in the N Atlantic, is a shadow of a more prominent cooling in that area in more recent years. The effect of adjustment is not so radical, but it does reduce some of the excursions, some almost fully. It's possible the excursions were real, but given the general uniformity, it seems more likely that they were inhomogeneities.



Next is the USA, with some of Canada in contrast. The density of stations is obvious, as is the inconsistent but strong cooling trend. The issue is TOBS (time of observation bias). A lot of stations changed with the conversion to MMTS, and the changes were generally in a direction that created artificial cooling. With adjustment, which includes TOBS correction, the picture is much clearer. Still some cooling in the mid-west, but otherwise warming, as in the rest of N America.



Finally, China. The stations are sparser, but again fairly irregular, although the denser regions are more consistent. And this time homogenisation does not make a consistent warming or cooling change. It does moderate some of the extreme cooling, so that might have a warming effect overall.



Finally, I would urge readers to check the page in detail, to see the overall effect of adjustment (the swap button helps here). The main thing to see is that adjustment does not have a general effect of increasing trends. It's true that it is hard to distinguish shades of red, but at least warm trends are not being created out of nothing.

Below the jump I'll deal with the seasonal issue.

Thursday, May 18, 2017

WebGL map of local station trends - various periods.

I have updated the page where I show trends over various periods at GHCN land stations and ERSST measures at sea. The old page is here. The map shows trends as a shaded color over the triangular mesh. The shade is exact for the nodes, which you can also query by clicking. Posts on the previous page are here and later here.

The page is not automatically updated, since the trends cover at least two decades. However, the previous page was made in 2012, so a data update was needed. And it makes sense to use the new MoyGLV2.1 WebGL facility. I had been slow to update the old data partly because I had used a rather neat, but hard to debug, mesh compression scheme, described here. Each period needs a separate mesh, so compression helped keep the download small. However, downloads are now generally quicker than in 2012, so the full 3 Mb of data does not seem so forbidding. So I have sadly let that go. However, for this post I have put the WebGL below the jump, as it still may take quite a few seconds for some.

I also updated the computing method to correct a source of noise in the previous page. I think the issue is instructive, and in 2012, I hadn't done the thinking explained in some of my many pages on averaging, eg here. I have frequently explained why anomalies are used in spatial averaging, to overcome inhomogeneities. But I had not thought they were needed for a trend at a single station. But they are - seasonal variation is a big source of inhomogeneity, and should be subtracted out. It shows itself in two ways:
  • If missing values cluster in a cold or hot time, especially biased toward one end of the period, then it introduces a spurious trend, and
  • you can even get a spurious trend with all data present. Sin(x) between 0 and 360° has a nonzero trend - the fitted line changes by almost the full peak-to-peak amplitude over the cycle. Taking 30 cycles reduces this by a factor of 30, but with a seasonal range of say 20°C, that can still be serious. Fortunately a calendar year is more like cos, which doesn't have a trend over a whole number of cycles, but not all data runs to a full calendar year at the end.


The remedy is, for each station, to calculate the mean observed seasonal cycle, and subtract that out. I did that, to good effect. So, below the jump, or on the revised page, you can check out trends from the last two decades to century plus. The radio buttons let you look at unadjusted or adjusted GHCN (prefixes un_ and ad_). One thing I found useful is to compare (swap button) two trends for the same period, one adjusted, one not. It is clear that homogenisation clears up all kinds of aberration, without greatly affecting the main trend pattern, which except for aberrations is quite smooth in space.
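To make both the problem and the remedy concrete, here is a minimal sketch with synthetic monthly data (not station data, and not the code behind the page): a pure seasonal cycle with a ragged end acquires a spurious fitted trend, which vanishes once the mean seasonal cycle is subtracted.

```r
# Spurious trend from seasonality, and the fix of subtracting the mean
# observed seasonal cycle. Synthetic data: 20 C seasonal range, no real trend.
set.seed(1)
months <- 1:364                          # just over 30 years, not a whole number of cycles
t_yr   <- months / 12
temp   <- 10 * cos(2 * pi * t_yr) + rnorm(364, 0, 1)

coef(lm(temp ~ t_yr))["t_yr"]            # raw fit: a nonzero, spurious trend

clim <- ave(temp, (months - 1) %% 12)    # mean for each calendar month
anom <- temp - clim                      # anomaly from the seasonal cycle
coef(lm(anom ~ t_yr))["t_yr"]            # now essentially zero
```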

So below the jump is the revised map. There are some operating instructions on the page, or more detail on the WebGL page or post.

Tuesday, May 16, 2017

GISS April down 0.23°C - second warmest April on record.

I have been noting records showing a large drop from the very warm levels of March. NCEP/NCAR was down 0.23°C, TempLS down by 0.165°C (now 0.16). GISS was also down 0.23°C, from 1.11°C in March to 0.88°C in April. But that is still warmer than any previous April except 2016. And it is warmer than the annual average for 2015 (0.82°C), itself a notable record in its time. Sou has more. The April temperature is back to that of January, after the peaks of Feb and March.

The NCEP/NCAR daily record showed what happened. There was a sharp descent through the month, seeming to bottom out at the end. May has recovered somewhat, but is likely to also be much cooler than March, and so far is running behind the April average.

I showed last month the year-to-date plot, compared with other warm years, noting that the year so far was ahead of the 2016 average, as shown by the red curve and horizontal line. Now YTD 2017 is right on the 2016 average. May will probably bring it below. Record prospects for 2017 now depend a lot on renewed El Nino activity. Here is the current YTD plot:



As usual, I will compare the GISS and previous TempLS plots below the jump. As with TempLS, there were fewer big features - lingering warmth in Siberia/Arctic, some cold in Antarctic.

Wednesday, May 10, 2017

Global surface anomaly down 0.165°C in April.

I've been waiting for three days for China to report - most others are very punctual lately. So it could change a little. But enough is enough - and last month, when I waited for China, they sent in February data, so it would have been better not to wait. Anyway, TempLS mesh showed a drop from 0.894°C in March to 0.729°C in April. That compares to a larger 0.226°C drop in the reanalysis index. Meanwhile, the troposphere indices went up - 0.08°C for UAH V6. As I often seem to have to say, it is a different place.

Despite the drop, April was still very warm. It was the 16th warmest month of any kind in the TempLS record. It was warmer than any annual average before 2016, including the then record year of 2015.

There was still quite a lot of warmth in the Siberia/Arctic region, and also in the east US. Antarctica was cold. Here is the breakdown plot:



Probably the main point of future interest is that SST is quite a lot higher. Elsewhere mostly moderate, which is a reduction for Siberia and Arctic.



Monday, May 8, 2017

The WebGL facility - versions.

Clive Best has been making good use of the WebGL facility. So I thought I should be more formal about versioning. I have been calling the current V2 a beta; I'll now drop the beta, and stop tinkering with V2, apart from bug fixes. The next version will be 2.1. I'll include that in the URL, and keep old versions posted, so for existing apps you won't be affected by changes, unless you call the update URL.

The main change I made (today) to V2 was to the dragging. There hadn't been any external control on update frequency, and so dragging a globe with a lot of triangles or lines could lead to superposition of successive images, with messy results. I have put in a 20 millisecond delay, so it can only update 50 times per second. That delay doesn't seem to be perceptible, and mostly fixes that problem. You can vary this; the default is
U.delay=20.

The other main change is that there is now an option in the user file to define an additional function called MoyLate(p,U). This has the same syntax and functionality as MoyDat, but it is implemented after the extra objects like line (_L) edges. You can assign them properties at this stage; it wasn't possible in MoyDat(). You can't define new objects here, and it isn't the place to vary objects defined in MoyDat(). You can set colors, or maybe more usefully, vary the show property, eg
p.Mesh_L.show=0
That means that initially the line edges won't show, and the checkbox will be there but blank.

Another change is that in the calling HTML, you still need to provide a DIV tag before the script calls, but it doesn't need an ID. If you don't provide a DIV, it will go looking for somewhere to hang the app. In principle, this means that you can have several apps running on the same page (without iframes), but I think that needs more work.



Friday, May 5, 2017

Nature paper on the "hiatus".

There is a new Nature paper getting discussed in various places. It is called Reconciling controversies about the 'global warming hiatus'. There is a detailed discussion in the LA Times. The Guardian chimes in. I got involved through a WUWT post on a GWPF paper. They seem to find support in it, but other skeptics seem to think the reconciliation was effective, and are looking for the catch.

I thought it was a surprisingly political article for Nature, in that it traces how the hiatus gained prominence through pressure from contrarians and right wing politics, and scientists gradually came to take it seriously. I think they are right, but the process should be resisted. There really isn't much there, and the fact that contrarians create a hullabaloo doesn't mean that it is worth serious study. I'll show why I think that.

I'm going to show plots of various data since 2001, which is the period quoted (eg by GWPF) because it excludes the 1998 El Nino. They weren't so scrupulous about that in the past, but now they want to exclude the recent warm years. Typically "hiatus" periods end about 2013. I recommend using the temperature trend viewer to see this in perspective. The most hiatus-prone of the surface datasets, by far, is HADCRUT (Cowtan and Way explain why). Here is the Viewer picture of HADCRUT 4 trends in the period:



Each dot represents a trend period, between the start year on the y-axis and the end on the x-axis. It's a lot easier to figure out in the viewer, which has an active time series graph that shows, when you click, what is represented. If you cherry-pick well, you can find a 13-year period with zero slope, shown by the brown contour. And you'll see that the hiatus periods form two descending columns, headed by a blue blob. These are the periods which end in a fixed year (approx) on the x-axis - ie a dip. There are just two of them, and they are the La Nina years of 2008/9 and 2011/2. The location of those events determines the hiatus. If you look at other sets on the trend viewer, you'll see this much more weakly. At WUWT I listed the 2001-13 trends thus (error range converted to ±1σ):

Dataset      Trend (°C/cen)
HADCRUT      0.063 ± 0.301
GISS         0.506 ± 0.367
NOAA         0.509 ± 0.326
BEST L/O     0.468 ± 0.432
C&Way        0.489 ± 0.391


All except HADCRUT are quite positive. People sometimes speak of a slowdown. Incidentally, in the triangle plot, there is a reddish horizontal bar, bottom left, that is almost as prominent as the "pause". They are the strong positive trends that you can draw starting in 1999 - ie the 2001-6 warmth seen from the other end. I don't remember anyone getting excited about this feature.
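For anyone who wants to reproduce that kind of triangle from an annual series, here is a hedged sketch (generic, not the trend viewer code; the input vector is hypothetical):

```r
# Sketch of a start-year vs end-year triangle of trends. 'anngl' is assumed
# to be a vector of annual global anomalies, with names giving the years.
trend_triangle <- function(anngl, minlen = 10) {
  yrs <- as.numeric(names(anngl))
  tri <- matrix(NA, length(yrs), length(yrs), dimnames = list(yrs, yrs))
  for (i in seq_along(yrs)) for (j in seq_along(yrs))
    if (yrs[j] - yrs[i] >= minlen) {
      sel <- i:j
      tri[i, j] <- 100 * coef(lm(anngl[sel] ~ yrs[sel]))[2]  # degC per century
    }
  tri     # rows: start year, columns: end year; the zero contour marks "pauses"
}
```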

I'd like to talk about the arithmetic of trends. Trend is a first central moment. It has a lot in common with moments of force, or torque. I think of it as a see-saw - a classic torque device. A heavyweight on the end has a lot of effect; in the middle not much. And of course, it depends which end. Trend is an odd see-saw, because it has both weights (cold periods) and uplifts (warm). It also has a progression. Items come on one end, and then progress across, exerting less and then opposite torque, until they drop off the other end (if you keep the period fixed). So there isn't actually a lot of the period that is determining the trend. It is predominantly the end forces.
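The see-saw is literal arithmetic: the least-squares trend is a weighted sum of the data, with each point weighted by its distance in time from the middle of the period, so the ends dominate. A quick check in R (generic, not tied to any particular dataset):

```r
# The OLS trend is a weighted sum of the data: slope = sum(w * y), with
# w = (t - mean(t)) / sum((t - mean(t))^2) - weights that grow linearly
# from the middle of the period to its ends.
tm <- seq(2001, by = 1/12, length.out = 156)   # monthly times, 2001-2013
y  <- rnorm(length(tm))                        # any series will do for the check
w  <- (tm - mean(tm)) / sum((tm - mean(tm))^2)
c(weighted = sum(w * y), lm = unname(coef(lm(y ~ tm))[2]))   # identical slopes
```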

I'll illustrate that with this set of graphs (click the buttons below to see various datasets). It shows the mean (green) for 2001-2013 and colors the data (12-month running mean) by deviation from that value. The idea is that there has to be as much or more pulling the trend down as pulling it up, if it is to be negative - either blue at the right or red at the left.



Now you can see that there aren't a lot of events that determine that. There is a red block from about 2001-6, which pulls the trend down. Then there are the two blue regions, the La Ninas of 2008/9 and 2011/12, which also pull it down. The La Nina of 2008 has small torque on this period, but would have been effective earlier. 2012 has the leverage, and so overcomes the sole uplift period of 2010.

That is just four periods, and it isn't hard to see how their effects can be chancy. It's really the 2001/6 warmth that is the anchor.

And then you see the big red period at the end, which overwhelms all this earlier stuff. GWPF and Co are keen to say that this is just a special case that should be excluded - something like, it wasn't caused by CO2. But the 2001-6 period is also just a natural excursion, and wasn't caused by CO2 either.

Basically the pause from 2001 won't come back until that big red is countered by a big blue. That would ensure that the trend returns close to that green line (extended). Of course, the red will be a powerful pauser for trends starting in 2015, and we'll hear about that soon enough.

Here is the same data colored by deviation from the trend from 2001 to present. We're still well on the red side of that too. The point here is that as long as new data lands above that line, it will be more red, and the trend will go up. It won't even reverse direction until you start seeing blue at that end. And if it did, there is a long way to go.



Now that the line has shifted, you can see how the blue periods would have destroyed such a trend earlier. But now, with their reduced leverage and the size of the red, that is where the trend ends up. For HADCRUT it's now 1.4°C/century (other surface indices are higher).

So my conclusion is that, just as contrarians protest (with some justice) that not too much should be made of the current strong warming trends, because they are influenced by a single event, so too should the much weaker hiatus be viewed with modest interest, because it is the result of the concurrence of two weaker events, La Ninas, which get less notice because they are less prominent, but are equally rather chance occurrences.