Friday, April 28, 2017

Land masks with distance measure

I wrote earlier about my use of land masks to sharpen up the boundaries of the ERSST data set that I use (and stop SST grid centres turning up on land). I have a more ambitious use in improving the weighting of the TempLS triangular mesh for land/sea difference. At present, many elements have mixed land/sea, and it is largely left to chance to get the balance right. I think that usually works out, but it would be better to have control.

A land mask is a big matrix of 1's and 0's corresponding to a grid, usually lat/lon. It has 1 if the cell is on land; some masks give a percentage where there is doubt, others force a binary choice. There are a lot of land masks around, down to kilometer resolution if you want, but common resolutions are 1°, 1/2° and 1/4°. Those are what I will use (as used in the ISLSCP 2 project).

My general scheme is to refine the mesh to reduce the area of those spanning triangles. New nodes don't have new data attached, but their weight will be attributed to a land or sea station according to their placement.

I found that I would really like a more advanced mask, that actually gave a measure of the distance to the coast (for land and sea). It doesn't really increase the size of the mask. And it means that when I want to create a new node, I can place it toward the coast, instead of waiting for successive node generation to locate it. My scheme without this worked well for a while, but would create situations where new nodes would force a shift in some triangle that had all nodes on land. This happens because each mesh update is by convex hull formation, and with new nodes such a triangle might lose its tangent status.

So I set about making such a mask. I use a diffusion scheme. I mark the cells where land and sea adjoin, scored zero. Then next step I mark every neighbor cell on the land side +1, and on sea, -1. Then I mark their neighbors +2, -2, and so on.
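Here is a minimal sketch of that diffusion step in R, assuming the mask is a 0/1 matrix (1 = land) with lakes already removed; the function and helper names are mine, not the TempLS code, and longitude wrap-around is ignored for simplicity.

```r
# Sketch only: signed distance-to-coast by ring diffusion on a 0/1 land mask.
# shift() moves a matrix one cell, padding with its own edge (no lon wrap).
shift <- function(m, dr, dc) {
  nr <- nrow(m); nc <- ncol(m)
  m[pmin(pmax(seq_len(nr) + dr, 1), nr), pmin(pmax(seq_len(nc) + dc, 1), nc)]
}

coast_distance <- function(mask) {
  # coastal cells: any 4-neighbour differs; these score zero
  coast <- (shift(mask, 1, 0) != mask) | (shift(mask, -1, 0) != mask) |
           (shift(mask, 0, 1) != mask) | (shift(mask, 0, -1) != mask)
  d <- ifelse(coast, 0L, NA_integer_)
  k <- 0L
  while (anyNA(d)) {
    done  <- !is.na(d)
    ring  <- done & abs(d) == k          # the ring marked in the last pass
    touch <- shift(ring, 1, 0) | shift(ring, -1, 0) |
             shift(ring, 0, 1) | shift(ring, 0, -1)
    fresh <- touch & !done               # neighbours not yet scored
    k <- k + 1L
    d[fresh] <- ifelse(mask[fresh] == 1, k, -k)  # +k on the land side, -k at sea
  }
  d
}
```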

But there is the problem of lakes. Masks generally show a lot of them, and I don't really want to know the distance to the nearest lake. So I first remove them. I do this by diffusion too. At this stage, I have the original 0,1 mask. I first advance the land by marking each cell adjoining a 1 cell with a 1, and then doing that again. That fills in most lakes, but also a lot of sea, especially bays etc. So then I diffuse back, advancing the 0's. This won't help the inland lakes, but will restore the sea cells to 0. Then I use the original mask to restore all land to 1 status.
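A rough sketch of that lake-filling pass, again with my own names (the shift() helper from the previous sketch is repeated so this stands alone); the step counts follow the description, though a real mask might need more.

```r
# Sketch only: fill lakes by advancing land twice, then sea twice,
# then restoring the original land.
shift <- function(m, dr, dc) {
  nr <- nrow(m); nc <- ncol(m)
  m[pmin(pmax(seq_len(nr) + dr, 1), nr), pmin(pmax(seq_len(nc) + dc, 1), nc)]
}

fill_lakes <- function(mask) {
  grow <- function(m, val) {     # advance cells of value 'val' by one ring
    hit <- shift(m == val, 1, 0) | shift(m == val, -1, 0) |
           shift(m == val, 0, 1) | shift(m == val, 0, -1)
    m[hit] <- val
    m
  }
  m <- grow(grow(mask, 1), 1)    # advance land twice: fills most lakes
  m <- grow(grow(m, 0), 0)       # advance sea back: restores bays etc.
  m[mask == 1] <- 1              # original land always stays land
  m
}
```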

I'll show below how this all works. It has enabled the overall aim, a coast-hugging triangular mesh, which I'll show in my next post. I have put the results as an R data file here. It is a list "mask"; the component names combine a letter (q for the original, a for the lake-less version, and n for the version with distance to sea) with 1, 2 or 4 for cells per degree.

Wednesday, April 26, 2017

GWPF International Temperature Data Review - second anniversary

I've been intermittently tracking the progress of this review, which seems to have zombie status. The web site is still there, with no sign of news or termination. The project itself was announced here, with banner headlines in the Telegraph ( "Top Scientists Start To Examine Fiddled Global Warming Figures" )  and echoes. I described the state of play in September 2015.

I posted on the previous anniversary. I thought it necessary to maintain a watch, because they had said that despite not proceeding to a report, papers would be written, including one on the submissions. Publication of those would be held back until then. But Sept 2015 was the last news posting, and I have not heard of any progress with papers.

This is probably my last post on the topic - I think we have to deem it totally dead, despite the GWPF website still promising progress.



Sunday, April 23, 2017

Land Masks and ERSST

I use ERSST V4 as the ocean temperature data for TempLS. The actual form of the data is sometimes inconvenient; it probably wasn't intended for my kind of use. I described how it fits in here. My main complaint there was that it sets SST under sea ice to -1.8°C, which is obviously not useful as an air proxy. They obviously can't produce a good proxy, but it would be better to have the area explicitly masked, because when the temperature is below about 1° you can't tell whether it is really so, or whether part of the month was frozen over, pulling down the average.

I described last month a new process I use to get a more evenly distributed subset of the ERSST for processing. The native density of 2x2° is unbalanced relative to land, and biases the temperature toward marine. The new scheme works well, but it draws attention to another issue. ERSST seems to quote a temperature for any cell for which they have a reading, even if the cell is mostly land. And in the new scheme, cell centers can more easily be on land. In particular, one turned up in the English Midlands, just near where I was once told is the point at maximum distance from the sea.

I've been thinking more about land masking lately. I have from a long while ago a set of masks that were used in the ISLSCP 2 project. They come in 1, 1/2 and 1/4° resolution, and in one version have percentages marked. I used the percent version to get land % for the 2° grid, and compared with what ERSST reported. Here is a WebGL version of that:



The ERSST filled cells are marked in pink; the land mask in lilac. The cells in green are both in ERSST and the land mask; white cells are in neither. You can switch the checkboxes top right to look at just ERSST, just mask, or just the green if you want. I called the green OVER, because it seems to mainly show sea intruding on land.

There is a tendency for the green to appear on west coasts, which suggests that the ERSST might be misaligned. One annoying thing about ERSST is that they aren't explicit about whether the coordinates given for a cell represent the center or a corner. I've assumed center. If you moved ERSST one degree west, the green would then appear, a little more profusely, on the east coasts. I used 60% sea as the cut-off for the land mask. This was a result of trial; with 50%, the land mask tended to fall short of the coast more than overshoot; 60% seemed to be the balance point. Either is pretty good.

So my remedy has been to remove the green cells from the ERSST data. That seems to fix the problem. It raises anomalies very slightly, because it upweights land, but only slightly: March rose from 0.89 to 0.894, with similar rises in earlier months. The area involved is small.

I am now looking at ways to landmask the triangular mesh.



Friday, April 21, 2017

Spherical Harmonics - the movie

This is in a way a follow-up to the Easter Egg post. There I was showing the icosahedral-based mesh with various flashing colors, with a background of transitions between spherical harmonics (SH) to make an evolution. Taking away the visual effects and improving the resolution makes it, IMO, a good way of showing the whole family of spherical harmonics. I described those and how to calculate them here, with a visualisation as radial surfaces here.

Just reviewing - the SH are the analogue of trig functions in 1D Fourier analysis. They are orthogonal with respect to integration on the surface, and as with 1D Fourier, you can project any function onto a subspace spanned by a finite set of them - that is, a least squares fit. The fit has various uses. I use one regularly in my presentation of TempLS results, and each month I show how it compares with the later GISS plot (well, as it turns out). I also use it as an integration method; all but the first SH exactly integrate to zero, so with a projection onto SH space, the first coefficient gives the integral. I think it is nearly as good as the triangle mesh integration.
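In matrix terms the integration trick looks like this. It is a minimal sketch with hypothetical inputs, not the TempLS code: B is assumed to be a matrix whose columns are the chosen spherical harmonics evaluated at the station locations, with the first column the constant function 1; y is the anomalies and w the area weights used in the fit.

```r
# Sketch only: project y onto the SH basis B by weighted least squares.
# All harmonics except the constant integrate to zero, so the coefficient
# of the constant column is the estimated global mean.
sh_mean <- function(y, B, w) {
  beta <- solve(t(B) %*% (w * B), t(B) %*% (w * y))  # weighted normal equations
  beta[1]
}
```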

As with trig functions, the orthogonality occurs because they have oscillations that can't be brought into phase, but cancel. That is the main point of the pattern that I will show. There are two integer parameters, L and M, with 0≤M≤L. Broadly, L represents the total number of oscillations, some in latitude and some around the longitude, and M represents how they are divided. With M=0, the SH is a function of latitude only, and with M=L, of longitude only (in fact, a trig function sin(M*φ)). Otherwise there is an array of peaks and dips.

Sunday, April 16, 2017

A Magical Easter Egg

This is a Very Serious Post. Really. It's a follow-up to my previous post about icosahedral tessellation of the sphere (Earth). The idea is to divide the Earth as nearly as possible into equal equilateral triangles. It's an extension of the cubed sphere that I use for gridding in TempLS. The next step is to subdivide the 20 equilateral triangles from the icosahedron into smaller triangles and project them onto the sphere. This creates some distortion near the vertices, but less than for the cube.

So I did it. But not having an immediate scientific use for it, and having some time at Easter, I started playing with some WebGL tricks. So here is the mesh (each triangle divided into 49) with some color features, including some spherical harmonics counterpoint.

Naturally, you can move it around, and there are some controls. Step is the amount of color change per step, speed is frame speed, and drift is the speed of evolution of the pattern. It's using a hacked version of the WebGL facility. Here it is. Happy Easter.

Saturday, April 15, 2017

GISS March up by 0.02°C, now 1.12°C!

As Olof noted, GISS has posted on March temperature. It was 1.12°C, up by 0.02°C from February. That rise is close to the 0.03°C shown by TempLS mesh. It makes March also a very warm month indeed. It's the second warmest March in the record - Mar 2016 was near the peak of the El Nino. And it exceeds any month before 2016.

Here is the cumulative average plot for recent warm years. Although 2016 was much warmer at the start, the average for 2017 so far is 0.06°C higher than for all 2016.





I'll show the globe plot below the jump. It shows the huge warmth in Siberia, and most of N America except NW. And also Australia - yes, it has been a very warm autumn here so far (mostly). GISS escaped the China glitch.


Thursday, April 13, 2017

TempLS update - now March was warmer than Feb by 0.03°C

Commenter Olof R noticed that the TempLS mesh estimate for March had suddenly risen, reversing the previously reported drop of about 0.06°C to a rise of 0.03°C. He attributed the rise to a change in China data, which, as noted in the previous post, had been very cold, and was now neutral.

I suspected that the original data supplied by China might have been for February, a relatively common occurrence. Unfortunately when I download GHCN data it overwrites the previous, so I can't check directly. But the GHCN MAX and MIN data are updated at source less frequently than TAVG, and they are currently as of 8 April. So I checked the China data there, and yes, March was very similar to February, though not identical. GHCN does a check for exact repetition.

Then I checked the CLIMAT forms at OGIMET. I checked the first location, HAILAR (way up in Manchuria). The current CLIMAT has a TMAX of -3°C for March and -13.5°C for Feb, and yes, the 8 Apr GHCN has -13.5. So it seems that is what happened, and has been corrected.

So March is warmer than February, and so warmer than any month before Oct 2015. It is also warmer than the record annual average of 2016, and so then is the average for Q1 of 2017. The result is fairly consistent with the NCEP/NCAR average, which showed a very slight fall. I was preparing a progress plot for the next GISS report, so I'll show that for TempLS. It shows the cumulative average for each year, and the annual average as a straight line. 2017 has not started with the El Nino rush of 2016, but is ahead of the average and seems more likely to increase than decrease.





Icosahedral Earth

This post is basically an exercise in using the WebGL facility, with colorful results. It's also the start of some new methods, hopefully. I wrote a while ago about improved gridding methods for integrating surface temperatures. The improvement was basically a scheme for estimating missing cells based on neighbors, and an important enabling feature was a grid that had more uniform cells than the conventional lat/lon grid. I used a cubed sphere - a projection of a gridded cube surface onto the sphere. The corners of the cube are a slight irregularity, which can be mitigated by non-linear scaling of the grid spacing. The cubed sphere has become popular lately - GFDL use it for their GCMs. It worked well for me.

In that earlier post, Victor Venema suggested using an icosahedron instead. This has less irregularity at the vertices, since the solid angle is greater, and the distortion of mapping to a sphere less. The geometry is a bit less familiar than the cube, but quite manageable.

A few days ago, I described methods now built into the facility for mapping triangles that occur in convex hull meshing actually onto the spherical surface. This is basically what is needed to make a finer icosahedral mesh. In this post, I'll use that as provided, but won't do the subdivision - that is for another post.

I also wanted to try another capability. The basic requirement of the facility is that you supply a set of nodes, nodal values (for shading), and links, which are a set of pointers to the nodes and declare triangles, line segments etc. From that comes continuous shading, which is usually what is wanted. But WebGL does triangles individually, and you can color them independently. You just regard the nodes of each triangle as being coincident with others, but having independent values. For the WebGL facility, that means that for each triangle you give a separate copy of the nodal coordinates and a separate corresponding value, and the links point to the appropriate version of the node.

So I thought I should try that in practice, and yes, it works. The colors look better if you switch off the map - checkbox top right. So here is the icosahedral globe, with rather random colors for shading:

Friday, April 7, 2017

March global surface temperature down 0.066°C.

Update There was a major revision to GHCN China data, and now March was 0.03°C warmer than February. See update post

TempLS mesh declined in March, from 0.861°C to 0.795°C. This follows the very small drop of 0.01°C in the NCEP/NCAR index, and larger falls in the satellite indices. The March temperature was still warm, however. It was higher than January (just) and higher than any month before October 2015. And the mean for the first quarter at 0.813°C is just above the record high annual mean of 0.809°C, though it could easily drop below (or rise further) with late data. So far all the major countries seem to have reported. With that high Q1 mean, a record high in 2017 is certainly possible.

TempLS grid fell a little more, by 0.11°C. The big feature this month was the huge warmth over Siberia. It was cold in Canada/Alaska (but warm in ConUS) and cold in China. Here is the map:



The breakdown plot is remarkable enough that I'll show that too here (it's always on the regular report). On land almost all the positive contribution came from Siberia and Arctic - without that, it would have been quite a steep fall. SST has been slowly rising since December, which is another suggestion of a record year possibility.





Incidentally I'm now using the finer and more regular SST mesh I described here. The effect on results is generally small, of order 0.01-0.02°C either way, which is similar to the amount of drift seen in late data coming in. You may notice small differences in comparing old and new. You'll notice quite a big change in the number of stations reporting, which is due to the greater number of SST cells. I've set a new minimum for display at 5300 stations.



Wednesday, April 5, 2017

Global 60 Stations and coverage uncertainty

In the early days of this blog, I took up a challenge of the time, and calculated a global average temperature using just 60 land stations. The stations could be selected for long records, rural status etc. It has been a post that people frequently come back to. I am a little embarrassed now, because I used the plain grid version of the TempLS of the day, and so it really didn't do area weighting properly at all. Still, it gave a pretty good result.

Technology (and TempLS) has advanced, and I next tried using a triangular mesh with proper Voronoi cells (I wouldn't bother now). I couldn't display it very well, but the results were arguably better.

Then, about 3 years ago, I was finally able to display the results with WebGL. That was mainly a graphic post. Now I'd like to show some more WebGL graphics, but I think the more interesting part may be tracking the coverage uncertainty, which of course grows. I have described here and earlier some ways of estimating coverage uncertainty, different from the usual ways involving reanalysis. This is another way which I think is quite informative.

I start with a standard meshed result for a particular month (Jan 2014), which had 4758 nodes, about half SST. I get the area weights as used in TempLS mesh. This assigns a weight to each node according to the area of the triangles it is part of. Then I start culling, removing the lowest weights first. My culling aims to remove 10% of nodes with each step, getting down to 60 nodes after about 40 steps. But I introduce a random element by setting a weight cut at about 12.5%, and then selecting 4/5 of those at random. After culling, I re-mesh, so the weights of many nodes change. The rather small randomness in node selection has a big effect on randomising the mesh process.

And so I proceed, calculating the new average temperature at each step from the existing anomalies. I don't do a re-fitting of temperature; this is just an integration of an existing field. I do this 100 times, so I can get an idea of the variability of temperature as culling proceeds.

Then, as a variant, I select for culling with a combination of area and a penalty for SST. The idea is to gradually remove all ocean values, and end up with just 60 land stations to represent the Earth.
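A minimal sketch of one culling step, under my own naming: wt is the vector of current mesh area weights, and the returned indices are the nodes to drop before re-meshing. The 12.5% cut and the 4/5 random selection follow the description above; the re-meshing and re-weighting themselves are not shown.

```r
# Sketch only: pick ~10% of nodes to cull, biased to the lowest weights.
cull_step <- function(wt, cut_frac = 0.125, pick_frac = 0.8) {
  cut  <- quantile(wt, cut_frac)           # weight cut at about 12.5%
  cand <- which(wt <= cut)                 # the lowest-weight nodes
  cand[runif(length(cand)) < pick_frac]    # drop ~4/5 of them at random
}
```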

Monday, April 3, 2017

NCEP/NCAR global surface temperature down 0.01°C in March

The NCEP/NCAR anomaly for March was 0.566°C, almost the same as Feb 0.576°C. And that is very warm. It makes the average for the first quarter 0.543°C, compared with the 2016 annual average of 0.531°C. In most indices, 2016 was the warmest ever, so with a prospect of El Nino activity later in the year, 2017 could well be the fourth record year in a row.

You can bring up the map for the month here. It was warm in Europe, mixed in N America, warm in Siberia but cool further South, and varied at the poles. So GISS may come down a bit, since it has been buoyed by the Arctic warmth.





Friday, March 31, 2017

Moyhu WebGL interactive graphics facility, documented.

I wrote a post earlier this month updating a general facility for using WebGL for making interactive Earth plots, Google-Earth style. I have now created a web page here, which I hope to maintain, which documents it. The page is listed near the bottom of the list at top right. I expect to be using the facility a lot in future posts. It has new features since the last post, but since I don't think anyone else has used that yet, I'll still call the new version V2. It should be compatible with the earlier version.

Tuesday, March 28, 2017

More ructions in Trump's EPA squad.

As a follow-up to my previous post on the storming out of David Schnare, there is a new article in Politico suggesting that more red guards are unhappy with their appointed one. It seems the "endangerment finding" is less endangered than we thought.
But Pruitt, with the backing of several White House aides, argued in closed-door meetings that the legal hurdles to overturning the finding were massive, and the administration would be setting itself up for a lengthy court battle.

A cadre of conservative climate skeptics are fuming about the decision — expressing their concern to Trump administration officials and arguing Pruitt is setting himself up to run for governor or the Senate. They hope the White House, perhaps senior adviser Stephen Bannon, will intervene and encourage the president to overturn the endangerment finding.

Monday, March 27, 2017

Interesting EPA snippet.

From Politico:
Revitalizing the beleaguered coal industry and loosening restrictions on emissions was a cornerstone of Trump’s pitch to blue collar voters. Yet, two months into his presidency, Trump loyalists are accusing EPA Administrator Scott Pruitt of moving too slowly to push the president’s priorities.

Earlier this month, David Schnare, a Trump appointee who worked on the transition team, abruptly quit. According to two people familiar with the matter, among Schnare’s complaints was that Pruitt had yet to overturn the EPA’s endangerment finding, which empowers the agency to regulate greenhouse gas emissions as a public health threat.

Schnare’s departure was described as stormy, and those who’ve spoken with him say his anger at Pruitt runs deep.

"The backstory to my resignation is extremely complex,” he told E&E News, an energy industry trade publication. “I will be writing about it myself. It is a story not about me, but about a much more interesting set of events involving misuse of federal funds, failure to honor oaths of office, and a lack of loyalty to the president."

Other Trump loyalists at EPA complain they’ve been shut out of meetings with higher-ups and are convinced that Pruitt is pursuing his own agenda instead of the president’s. Some suspect that he is trying to position himself for an eventual Senate campaign. (EPA spokespersons did not respond to requests for comment.)
David Schnare, a former EPA lawyer, has been most notable for his unsuccessful lawsuits (often with Christopher Horner) seeking emails of Michael Mann and others. Here he is celebrating at WUWT his appointment to the Trump transition team.

Update Here is the story at Schnare's home base at E&E.

Update - as William points out below, I had my E&Es mixed up. Here is Schnare at his E&E announcing his appointment. But they have not announced his departure.


Wednesday, March 22, 2017

Global average, integration and webgl.

Another post empowered by the new WebGL system. I've made some additions to it which I'll describe below.

I have written a lot about averaging global temperatures. Sometimes I write as a sampling problem, and sometimes from the point of view of integration.

A brief recap - averaging global temperature at a point in time requires estimating temperatures everywhere based on a sample (what has been measured). You have to estimate everywhere, even if data is sparse. If you try to omit that region, you'll either end up with a worse estimate, or you'll have to specify the subset of the world to which your average applies.

The actual averaging is done by numerical integration, which generally divides the world into sub-regions and estimates those based on local information. The global result always amounts to a weighted average of the station readings for that period (month). It isn't always expressed so, but I find it useful to formulate it so, both conceptually and practically. The weights should represent area.

In TempLS I have used four different methods. In this post I'll display with WebGL, for one month, the weights that each uses. The idea is to see how well each does represent area, and how well they agree with each other. I have added some capabilities to the WebGL system, which I will describe.

I should emphasise that the averaging process is statistical. Errors tend to cancel out, both within the spatial average and when combining averages over time, when calculating trends or just drawing meaningful graphs. So there is no need to focus on local errors as such; the important thing is whether a bias might accumulate. Accurate integration is the best defence against bias.

The methods I have used are:
  • Grid cell averaging (eg 5x5 deg). This is where everyone starts. Each cell is estimated as an average of the datapoints within it, and weighted by cell area. The problem is cells that have no data. My TempLS grid method follows HADCRUT in simply leaving these out, which means the remaining areas are effectively infilled with the average of the points measured, often inappropriately. I continue to use it because it has often very closely tracked NOAA and HADCRUT. But the problem with empty cells is serious, and is what Cowtan and Way sought to repair.
  • My preferred method now is based on irregular triangulation, and standard finite element integration. Each triangle is estimated by the average of its nodes. There are no empty areas.
  • I have also sought to repair the grid method by estimating the empty cells based on neighboring cells. This can get a bit complicated, but works well.
  • An effective and elegant method is based on spherical harmonics. The nodes are fitted with a set of harmonics, based on least squares regression. Then in integrating this approximation, all except the first go to zero. The integral is just the coefficient of the constant.


The methods are compared numerically in this post. Here I will just display the weights for comparison in WebGL.
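To make the first method concrete, here is a minimal sketch of grid-cell weighting in R, with hypothetical inputs (lat, lon in degrees for each station, a 5x5° grid). It is not the TempLS code, but the weights have the same structure: each cell's area is shared among the stations reporting in it, and empty cells are simply omitted.

```r
# Sketch only: each station's weight is (area of its 5x5 deg cell) divided by
# the number of stations reporting in that cell; empty cells get no weight.
grid_weights <- function(lat, lon, cellsize = 5) {
  ilat <- floor((lat + 90) / cellsize)
  ilon <- floor((lon + 180) / cellsize) %% (360 / cellsize)
  cell <- paste(ilat, ilon)                     # cell id for each station
  n    <- table(cell)                           # stations per occupied cell
  lat0 <- ilat * cellsize - 90                  # southern edge of each cell
  area <- (cellsize * pi / 180) *               # cell area on the unit sphere
          (sin((lat0 + cellsize) * pi / 180) - sin(lat0 * pi / 180))
  area / as.numeric(n[cell])
}
# the month's average is then sum(w * anomaly) / sum(w)
```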

Friday, March 17, 2017

Temperature residuals and coverage uncertainty.

A few days ago I posted an extensive ANOVA-type analysis of the successive reduction of variance as the spatial behaviour of global temperatures was more finely modelled. This is basically a follow-up to show how the temperature field can be partitioned into a smooth part with known reliable interpolation, and a hopefully small residue. Then the size of the residue puts a limit on the coverage uncertainty.

I wrote about coverage uncertainty in January. It's the uncertainty about what would happen if one could measure in different places, and is the main source of uncertainty in the monthly global indices. A different and useful way of seeing it is as the uncertainty that comes with interpolation. Sometimes you see sceptic articles decrying interpolation as "making up data". But it is the complement of sampling, which is how we measure. You can only measure anything at a finite number of places. You infer what happens elsewhere by interpolation; that can't be avoided. Just about everything we know about the physical world, or economic for that matter, is deduced from a finite number of samples.

The standard way of estimating coverage uncertainty was used by Brohan et al 2006. They took a global reanalysis and sampled at sets of places corresponding to possible station distributions. The variability of the resulting averages was the uncertainty estimate. The weakness is that the reanalysis may have different variability to the real world.

I think analysis of residuals gives another way. If you have a temperature anomaly field T, you can try to separate it into a smoothed part s and a residual e:
T = s + e
If s is constructed in such a way that you expect much less uncertainty of interpolation than T, then the uncertainty has been transferred to e. That residual is more intractable to integrate, but you have an upper bound based on its amplitude, and that is an upper bound to coverage uncertainty.

So below the jump, I'll show how I used a LOESS-type smoothing for s. This replaces points by a low-order polynomial weighted regression, and the weighting is by a function decaying with distance, in my case exponentially, with characteristic distance r (ie exp(-|x|/r)). With r very high, one can be very sure of interpolation (of s), but the approximation will not be very good, so e will be large, and contains a lot of "signal" - ie what you want to include in the average, which will then be inaccurate. If the distance is very small, the residual will be small too, but there will be a lot of noise still in s. I seek a compromise where s is smooth enough, and e is small enough. I'll show the result of various r values for recent months, focussing on Jan 2017. I'll also show WebGL plots of the smooths and residuals.

I should add that the purpose here is not to get a more accurate integral by this partition. Some of the desired integrand is bound to end up in e. The purpose is to get a handle on the error.
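As an illustration of the partition, here is a much-simplified sketch in R: a degree-zero version (a distance-weighted mean rather than a low-order polynomial fit), with exponential weights in chord distance on the unit sphere. The names are mine, and the real calculation differs in detail.

```r
# Sketch only: split anomalies y at stations (lat, lon in degrees) into a
# smooth part s and a residual e = y - s, using weights exp(-distance / r).
smooth_split <- function(lat, lon, y, r) {
  p <- cbind(cos(lat * pi/180) * cos(lon * pi/180),
             cos(lat * pi/180) * sin(lon * pi/180),
             sin(lat * pi/180))                  # unit vectors on the sphere
  d <- as.matrix(dist(p))                        # chord distances, station x station
  W <- exp(-d / r)                               # exponential decay weights
  s <- as.vector(W %*% y) / rowSums(W)           # weighted mean at each station
  list(s = s, e = y - s)                         # small r: small e, but noisy s
}
```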

Thursday, March 16, 2017

GISS up by 0.18°C, now 1.1°C!

GISS has posted a report on February temperature, though it isn't in their posted file data yet. It was 1.10°C, up by 0.18°C. That rise is a bit more than the 0.13°C shown by TempLS mesh. It also makes February a very warm month indeed, as the GISS article says. It's the second warmest February in the record - Feb 2016 was at the peak of the El Nino. And it is equal to December 2015, which was also an El Nino month, and warmer than any prior month, of any kind.

I'll show the plot below the jump. It shows a lot of warmth in N America and Siberia, and cool in the Middle East.

As I noted in the previous post, TempLS had acquired a bug in the treatment of GHCN data that was entered and later removed (usually flagged). This sometimes caused late drift in the reported numbers. It has been fixed. Last month is up by 0.03°C on initial report.

Wednesday, March 15, 2017

Making an even SST mesh on the globe.

I have been meaning to tidy up the way TempLS deals with the regular lat/lon SST grid on the globe. I use ERSST, which has a 2x2° grid. This is finer than I need; it gives the sea much more coverage than the land gets, and besides being overkill, it distorts near coasts, making them more marine. So I had reduced it to a regular 4x4° grid, and left it at that.

But that has problems near the poles, as you can see in this image:



The grid packs in lots of nodes along the upper latitudes. This is ugly, inefficient, and may have distorting effects in making the polar region more marine than it should be, although I'm not sure about that.

So I looked for a better way of culling nodes to get a much more even mesh. The ideal is to have triangles close to equilateral. I have been able to get it down to something like this:



I don't think there is much effect on the resulting average, mainly because SST is still better resolved than land. But it is safer, and looks more elegant.

And as an extra benefit, in comparing results I found a bug in TempLS that had been puzzling me. Some, but not all, months had been showing a lot of drift after the initial publication of results. I found this was due to my system for saving time by storing meshed weights for past months. The idea is that if the station mix changes, the weights will be recalculated. But for nodes which drop out (mostly through acquiring a quality flag) this wasn't happening. I have fixed that.

Below the jump, I'll describe the algorithm and show a WebGL mesh in the new system.
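For illustration only, here is one simple way of thinning a lat/lon grid so that east-west spacing stays comparable with north-south spacing. It is a hypothetical sketch, not the algorithm described below the jump.

```r
# Hypothetical sketch: keep roughly every 1/cos(latitude)-th longitude,
# so nodes near the poles are culled more heavily.
thin_grid <- function(lats, lons) {
  rows <- lapply(lats, function(lat) {
    step <- max(1, round(1 / cos(lat * pi / 180)))
    data.frame(lat = lat, lon = lons[seq(1, length(lons), by = step)])
  })
  do.call(rbind, rows)
}
# e.g. thin_grid(seq(-88, 88, by = 4), seq(-178, 178, by = 4))
```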

Sunday, March 12, 2017

Residuals of monthly global temperatures.

I have frequently written about the task of getting a global average surface temperature as one of spatial integration, as here or here. But there is more to be said about the statistical aspect. It's a continuation of what I wrote here about spatial sampling error. In this post, I'll follow a path rather like ANOVA, with a hierarchy of improving approximations leading to smaller and more random residuals. I'll also follow through on my foreshadowed more substantial application of the new WebGL system, to show how the residuals do change over the surface.

So the aims of this post are:
  1. To see how various levels of approximation reduce the variance
  2. To see graphically how predictability is removed from the residuals. The idea here is that if we can get to iid residuals in known locations, that distribution should be extendable to unknown locations, giving a clear basis for estimation of coverage uncertainty.
  3. To consider the implications for accurate estimation of global average. If each approximation is itself integrable, then the residuals make a smaller error. However, unfortunately, they also become themselves harder to integrate, since smoothness is deliberately lost.
A table of contents will be useful:

Friday, March 10, 2017

January HADCRUT and David Rose.

Yet another episode in the lamentable veracity of David Rose and the Daily Mail. Sou covered a kerfuffle last month when Rose proclaimed in the Sunday Mail:

"The ‘pause’ is clearly visible in the Met Office’s ‘HadCRUT 4’ climate dataset, calculated independently of NOAA.
Since record highs caused last year by an ‘el Nino’ sea-warming event in the Pacific, HadCRUT 4 has fallen by more than half a degree Celsius, and its value for the world average temperature in January 2017 was about the same as January 1998."


This caused John Kennedy, of the Met Office, to note drily:



Rose was writing 19 Feb, and Hadcrut does indeed take much longer to come out. But it is there now, and was 0.741°C for the month. That was up quite a lot from December, in line with GISS (and Moyhu TempLS). It was a lot warmer than January 1998, at 0.495°C. And down just 0.33°C from the peak in Feb 2016.

And of course it was only last December that David Rose was telling us importantly that "New official data issued by the Met Office confirms that world average temperatures have plummeted since the middle of the year at a faster and steeper rate than at any time in the recent past".

In fact, January was warmer than any month since April 2016, except for August at 0.77°C.

Update. David Rose was echoed by GWPF, who helpfully provided this graph, sourced to Met Office, no less:

I've added a circle with red line to show where January 2017 actually came in. I don't know where their final red dot could have come from. Even November, the coldest month of 2016, was 0.524°C, still warmer than Jan 1998.

Wednesday, March 8, 2017

Moyhu WebGL interactive graphics facility, V2.

As mentioned in the previous post, I've been working on a new version of a WebGL graphics facility that I first posted three years ago. Then it was described as a simplified access to WebGL plotting of data on a sphere, using the active and trackball facilities. It could work from a fairly simple user-supplied data file. I followed up with an even simpler grid-based version, which included a text box where you could just paste in the lat/lon grid data values and it would show them on an active sphere.

So now there is an upgrade, which I'll call V2. Again, it consists of just three files: an HTML stub MoyGLV2.html, a functional JavaScript file called MoyGLV2.js, and a user file, with a user name. The names and locations of the JS files are declared in the HTML. Aside from that, users just amend the user file, which consists of a set of data statements in Javascript. JS syntax is very like C, but the syntax needed here is pretty universal. The user file must be included before MoyGLV2.js (or equivalent) in the HTML.

The main new features are:
  • The merging of the old grid input via a new GRID type, which only requires entry of the actual data.
  • An extension of the user input system that came with the grid facility. A variety of items can now be put in via text box (which has a 16000 char limit).
  • A multi-data capability. Each property entered can now be an array. Radio buttons appear so that the different instances can be selected. This is very useful for making comparisons.
  • A flat picture capability. The motivation was to show spheres in 3D, but the infrastructure is useful for a lat/lon projection as well.
  • A compact notation for palettes, with color ramps.

I'll set out the data requirements below the jump, and some information on the controls (which haven't changed much). Finally I'll give a grid example, with result, and also below that the code for the palette demo from the last post. The zip-file which contains code and example data is here. There is only about 500 lines of JS, but I've included sample data.


February global surface temperature up 0.106°C.

TempLS mesh posted another substantial rise in February, from 0.737°C to 0.843°C. This follows the earlier very similar rise of 0.09°C in the NCEP/NCAR index, and smaller rises in the satellite indices. Exceeding January, February was record warm by any pre-Nino16 standards. It was warmer (in anomaly) than any month before October 2015.

TempLS grid also rose by 0.11°C. The breakdown plot showed the main contributions from Siberia and N America, with Arctic also warm. The map shows those features, and also cold in the Middle East.







Sunday, March 5, 2017

Playing with palettes in WebGL earth plotting.

Three years ago, I described a simplified access to WebGL plotting of data on a sphere, using the active and trackball facilities. It could work from a fairly simple user-supplied data file. I don't know if anyone actually used it, but I certainly did. It is the basis for most of my WebGL work. I followed up with an even simpler grid-based version, which included a text box where you could just insert the lat/lon grid data values and it would show them on an active sphere.

I've been updating this mechanism, and I'll post a new version description in a few days, and also a more substantive application. But this post just displays a visual aspect that users may want to play with.

I almost always use rainbow palettes, and they are the default in the grid program. But they are deprecated in some quarters. I think they are the most efficient, but it is good to explore alternatives. One feature of the new system is that you can show and switch between multiple plots; another is that the textbox system for users to add data has been extended.

The plot below shows January 2016 anomalies, as I regularly plot here. On the top right, you'll see a set of radio buttons. Each will show the same plot in a different color scheme. The abbreviations expand in a title on the left when you click. They are just a few that I experimented with. The good news is, you can insert your own palettes. I'll explain below the plot.



As usual, the Earth is a trackball, and dragging right button vertically will zoom. Clicking brings up data for the nearest station. "Orient" rotates current view to map orientation.

In the new scheme, you can alter data by choosing the correct category in the dropdown menu top right, and then pasting the data into the text box, and then clicking "Apply". There is a shortened format for palettes. Colors are represented by an RGB triple between 0 and 1 (this is the GL custom). 0,0,0 is black, 1,0,0 is red. So you can enter a comma-separated set of numbers in groups of four. The first three are the RGB, and the fourth is the number of colors that ramp to the next one. The total length should be 256. The last set of four needs a final integer for format, but it can be anything. The series should be in square brackets, indicating a Javascript array. Here is a table of the data I used:

Array of data | Description
[1,0,0,64, 1,1,0,64, 0,1,0,64, 0,1,1,64, 0,0,1,64] | Rainbow spectrum
[1,0,0,96, 1,1,0,80, 0,1,0,48, 0,1,1,32, 0,0,1,64] | Rainbow tinged red
[1,0,0,32, 1,1,0,48, 0,1,0,80, 0,1,1,96, 0,0,1,64] | Rainbow tinged blue
[1,0,0,64, 1,1,0,64, 1,1,1,64, 0,1,1,64, 0,0,1,64] | Red, yellow, white and blue
[1,0,0,128, 1,1,1,128, 0,0,1,1] | Red, white and blue
[0.62,0.32,0.17,128, 0.7,0.7,0.3,128, 0.4,0.6,0.2,1] | Earth: brown to olive
[0.62,0.32,0.17,128, 0.7,0.7,0.3,104, 0.4,0.6,0.2,24, 0,0,1,1] | Earthy with blue
[1,1,1,256, 0,0,0,1] | White to black
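To make the ramp format concrete, here is one plausible reading of it, sketched in R (the facility itself does this in Javascript, and its exact endpoint handling may differ): each group of four contributes "count" colors ramping toward the next group's color.

```r
# Sketch only: expand a ramp specification (groups of r, g, b, count)
# into a 256 x 3 matrix of RGB values in 0..1.
expand_palette <- function(spec) {
  m <- matrix(spec, ncol = 4, byrow = TRUE)
  ramps <- lapply(seq_len(nrow(m) - 1), function(i)
    sapply(1:3, function(j)
      seq(m[i, j], m[i + 1, j], length.out = m[i, 4] + 1)[-(m[i, 4] + 1)]))
  do.call(rbind, ramps)
}
# expand_palette(c(1,0,0,64, 1,1,0,64, 0,1,0,64, 0,1,1,64, 0,0,1,64))  # rainbow
```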


You can enter a similar sequence in the text box and see what it looks like. It will replace the currently selected palette. You can even change the button label by selecting "short", or the label top left by selecting "long", in each case entering your phrase with quotes in the text box.





Friday, March 3, 2017

NCEP/NCAR rises again by 0.09°C in February

It's getting very warm again. January was warmer than any month before October 2015 in the Moyhu NCEP/NCAR reanalysis index. February was warmer again, and is warmer than Oct/Nov 2015, and behind only Dec-April in the 2015/6 El Nino. And it shows no downturn at end of month.

Karsten also had a rise of 0.1°C from GFS/CFSR. UAH V6 is up by 0.05°C. And as I noted in the previous post, Antarctic sea ice reached a record low level a few days ago.



Wednesday, March 1, 2017

Record low sea ice minimum in Antarctica

I've been tracking the Antarctic Sea Ice. It has been very low since about October, and a new record looked likely. Today I saw in our local paper that the minimum has been announced. And indeed, it was lowest by a considerable margin. The Moyhu radial plot showed it thus:



The NSIDC numbers do show that there is still some melting, but it won't last much longer. The Arctic seems to be at record low levels again also, which may be significant for this year's minimum.



Thursday, February 16, 2017

GISS rose 0.13°C in January; now 0.92°C.

Gistemp rose from 0.79°C in December to 0.92°C in January. That is quite similar to TempLS mesh, where the rise has come back to 0.094°C. There were also similar rises in NCEP/NCAR and the satellite indices.

As with the other indices, this is very warm. It is almost the highest anomaly for any month before Oct 2015 (Jan 07 at 0.96°C was higher). And according to NCEP/NCAR, February so far is even warmer.


I'll show the regular GISS plot and TempLS comparison below the fold

Wednesday, February 15, 2017

Changes to Moyhu latest monthly temperature table.

A brief note - I have changed the format of the latest monthly data table. The immediate thing to notice is that it starts with latest month at the top.

Before there were two tables - last six months of some commonly quoted datasets, and below a larger table of data back to start 2013, with more datasets included. This was becoming unwieldy.

Now there is just one table going back to 2013, but starting at the latest month, so you have to scroll down for earlier times. It has those most commonly quoted sets, but there are buttons at the top that you can click to get a similar table of other subsets. "Main" is the start table; "TempLS" has a collection (coming) of other styles of integration, and also results with adjusted GHCN.

I'm gradually moving RSS V4 TTT to replace the deprecated V3 TLT. I'm still rearranging the order of columns somewhat. There is reorganisation happening behind the scenes.





Monday, February 13, 2017

Spatial distribution of flutter in GHCN adjustment.

I posted recently on flutter in GHCN adjustment. This is the tendency of the Pairwise Homogenisation Algorithm (PHA) to produce short-term fluctuations in monthly adjustments. It arose on a recent discussion of the kerfuffle of John Bates and the Karl 2015 paper, and has been investigated by Peter O'Neill, who is currently posting on the topic. In my earlier post, I looked at the distribution of individual month adjustments, and noted that with generally zero mean, they would be heavily damped on averaging.

But I was curious about the mechanics, so here I compare the same two adjusted files (June 2015 and Feb 9 2017) collected by station. I'll show a histogram, but more interesting is the spatial distribution shown on a trackball sphere map. The histogram shows a distribution of station RMS values tapering rather rapidly toward 1°C. The map shows the flutter is strongly associated with remoteness, especially islands.
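A minimal sketch of that comparison, assuming adj2015 and adj2017 are matrices of adjusted monthly values (stations x months), already aligned on the same stations and months; the object names are mine.

```r
# Sketch only: per-station RMS of the change in adjustment between the files.
d   <- adj2017 - adj2015
rms <- sqrt(rowMeans(d^2, na.rm = TRUE))      # one RMS value (deg C) per station
hist(rms, breaks = 50, xlab = "RMS adjustment change (deg C)", main = "")
```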

Update: I have now enabled clicking to show not only the name of the nearest station, but the RMS adjustment change there in °C. I have also adopted William's suggestion about the color scheme (white for zero, red for large).

Friday, February 10, 2017

January global surface temperature up 0.155°C.

TempLS mesh rose significantly in January, from 0.66°C to 0.815°C. This follows the earlier very similar rise of 0.13°C in the NCEP/NCAR index, and rises in the satellite indices, including a 0.18°C rise in the RSS index. January was the warmest month since April, and as with NCEP/NCAR, it was warmer (in anomaly) than any month before October 2015.

TempLS grid also rose by 0.12°C. I think this month temperatures were not greatly affected by the poles. The breakdown plot was interesting, with contributions to warmth from N America, Asia, Siberia and Africa, with Arctic also warm as usual lately.


Thursday, February 9, 2017

Flutter in GHCN V3 adjusted temperatures.

In the recent discussion of the kerfuffle of John Bates and the Karl 2015 paper, the claim of Bates that the GHCN adjustment algorithm was subject to instability arose. Bates' claim seemed to be of an actual fault in the code. I explained why I think that is unlikely; rather, it is a feature of the Pairwise Homogenisation Algorithm (PHA).

GHCN V3 adjusted is issued approximately daily, although it is not clear how often the underlying algorithm is run. It is posted here - see the readme file and look for the qca label.

Paul Matthews linked to his analysis of variations in Alice Springs adjusted over time. It did look remarkable; fluctuations of a degree or more over quite short intervals, with maximum excursions of about 3°C. This was in about 2012. However Peter O'Neill had done a much more extensive study with many stations and more recent years (and using many more adjustment files). He found somewhat smaller variations, and of frequent but variable occurrence.

I don't have a succession of GHCN adjusted files available, but I do have the latest (downloaded 9 Feb) and I have one with a file date here of 21 June 2015. So I thought I would look at differences between these to try to get an overall picture of what is going on.

Friday, February 3, 2017

NCEP/NCAR January warmest month since April 2016.

The Moyhu NCEP/NCAR index at 0.486°C was warmer than any month since April 2016. But it was a wild ride. It started very warm, dropped to temperatures lower than seen for over a year, rose again, and the last two weeks were very warm, and it still is. The big dip coincided with the cold snap in E North America, and in Central and East Europe, extending through Russia. Then N America warmed a little, although some of Europe stayed cold, and there was the famous snow in the Sahara. Overall, Arctic and Canada (despite cold snaps) were warm, as was most of Asia. Europe and the Sahara were indeed cold.

I'll note just how warm it still is, historically. I don't make too much of long term records in the reanalysis data, since it can't really be made homogeneous. But January was not only warmest since April, but warmer (by a lot) than any month prior to October 2015.

UAH satellite, which dropped severely in December, rose a little, from 0.24°C to 0.30°C. Arctic sea ice, which had been very low, recovered a bit to be occasionally not the lowest recorded for the time of year. Antarctic ice is still very low, and may well reach a notable minimum.

With the Arctic still warm, I would expect a substantial rise for GISS and TempLS mesh, with maybe less for HADCRUT and NOAA.

Wednesday, February 1, 2017

Homogenisation and Cape Town.

An old perennial in climate wars is the adjustment of land temperature data. Stations are subject to various changes, like moving, which leads to sustained jumps that are not due to climate. For almost any climate analysis that matters, these station records are taken to be representative of some region, so it is important to adjust for the effect of these events. So GHCN publishes an additional list of adjusted temperatures. They are called homogenised with the idea that as far as can be achieved, temperatures from different times are as if measured under like conditions. I have written about this frequently, eg here, here and here.

The contrarian tactic is to find some station that has been changed and beat the drum about rewriting history, or some such. It is usually one where the trend has changed from negative to positive. Since adjustment does change values, this can easily happen. I made a Google Maps gadget here which lets you see how the various GHCN stations are affected, and posted histograms here. This blog started its life following a classic 2009 WUWT sally here, based on Darwin. That was probably the most publicised case.

There have been others, and their names are bandied around in skeptic circles as if they were Agincourt and Bannockburn. Jennifer Marohasy has for some reason an irrepressible bee in her bonnet about Rutherglen, and I think we'll be hearing more of it soon. I have a post on that in the pipeline. One possible response is to analyse individual cases to show why the adjustments happened. An early case was David Wratt, of NIWA on Wellington, showing that the key adjustment happened with a move with a big altitude shift. I tried here to clear up Amberley. It's a frustrating task, because there is no acknowledgement - they just go on to something else. And sometimes there is no clear outcome, as with Rutherglen. Reykjavik, often cited, does seem to be a case where the algorithm mis-identified a genuine change.

The search for metadata reasons is against the spirit of homogenisation as applied. The idea of the pairwise algorithm (PHA) used by NOAA is that it should be independent of metadata and rely solely on numerical analysis. There are good reasons for this. Metadata means human intervention, with possible bias. It also inhibits reproducibility. Homogenisation is needed because of the possibility that the inhomogeneities may have a bias. Global averaging is very good at suppressing noise (see here and here), but vulnerable to bias. So identifying and removing possibly biased events is good. It comes with errors, which contribute noise. This is a good trade-off. It may also create a different bias, but because PHA is automatic, it can be tested for that on synthetic data.

So, with that preliminary, we come to Cape Town. There have been rumblings about this from Philip Lloyd at WUWT, most recently here. Sou dealt with it here, and Tamino touched on it here, and an earlier occurrence here. It turns out that it can be completely resolved with metadata, as I explain at WUWT here. It's quite interesting, and I have found out more, which I'll describe below the jump.

Tuesday, January 31, 2017

A guide to the global temperature program TempLS

TempLS is an R program that I have been running for some years at Moyhu. It computes a land/ocean temperature average in the style of Gistemp, HADCRUT or NOAA (see Wiki overview). In this post, I want to collect links to what I have written about it over the years, describe the methods, code and reporting, and something about the graphics. I'll then promote it to a maintained page. I'll start with a Table of Contents with links:
  • Introduction - reporting cycle
  • Summary, methods and code
  • History
  • Graphics and other output
  • Tests and comparisons

Friday, January 27, 2017

Global anomaly spatial sampling error - and why use anomalies?

In this post I want to bring together two things that I seem to be talking a lot about, especially in the wake of our run of record high temperatures. They are
  • What is the main component of the error that is quoted on global anomaly average for some period (month, year)? and
  • Why use anomalies? (an old perennial, see also GISS, NOAA)
I'll use the USHCN V2.5 dataset as a worked example, since I'm planning to write a bit more about some recent misuse of that. In particular I'll use the adjusted USHCN for 2011.

Using anomalies

I have been finding it necessary to go over some essentials of using anomalies. The basic arithmetic is
  • Compute some "normal" (usually a 30-year period time average for each month) for each station in the network,
  • Form local anomalies by subtracting the relevant normal from each reading
  • Average the anomalies (usually area-weighted)
People tend to think that you get the anomaly average just by averaging, then subtracting an offset. That is quite wrong; anomalies must be formed before averaging. Afterwards you can shift to a different anomaly base by offsetting the mean.
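A tiny made-up example of why the order matters: with every station reporting, the two calculations agree, but drop one station and the "average then offset" route is badly biased, while the anomaly average barely moves.

```r
temps   <- c(25.1, 3.2, 15.7)          # three stations, one month (made up)
normals <- c(24.0, 4.0, 15.0)          # each station's own 30-year normal
mean(temps - normals)                  # 0.33: anomalies first, then average
mean(temps) - mean(normals)            # 0.33: same, while all stations report
mean(temps[-1] - normals[-1])          # -0.05: station 1 missing, small change
mean(temps[-1]) - mean(normals)        # -4.88: average-then-offset goes haywire
```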

Coverage error - spatial sampling error for the mean.

Indices like GISS and HADCRUT usually quote a monthly or annual mean with an uncertainty of up to 0.1°C. In recent years contrarians have seized on this to say that maybe it isn't a record at all - a "statistical tie" is a pet phrase, for those whose head hurts thinking about statistics. But what very few people understand is what that uncertainty means. I'll quote here from something I wrote at WUWT:

The way to think about stated uncertainties is that they represent the range of results that could have been obtained if things had been done differently. And so the question is, which "things". This concept is made explicit in the HADCRUT ensemble approach, where they do 100 repeated runs, looking at each stage in which an estimated number is used, and choosing other estimates from a distribution. Then the actual spread of results gives the uncertainty. Brohan et al 2006 lists some of the things that are varied.

The underlying concept is sampling error. Suppose you conduct a poll, asking 1000 people if they will vote for A or B. You find 52% for A. The uncertainty comes from, what if you had asked different people? For temperature, I'll list three sources of error important in various ways:

1. Measurement error. This is what many people think uncertainties refer to, but it usually isn't. Measurement errors become insignificant because of the huge number of data points that are averaged. Measurement error estimates what could happen if you had used different observers or instruments to make the same observation, same time, same place.

2. Location uncertainty. This is dominant for global annual and monthly averages. You measured in sampled locations - what if the sample changed? What if you had measured in different places around the earth? Same time, different places.

3. Trend uncertainty, what we are talking about above. You get trend from a statistical model, in which the residuals are assumed to come from a random distribution, representing unpredictable aspects (weather). The trend uncertainty is calculated on the basis of, what if you sampled differently from that distribution? Had different weather? This is important for deciding if your trend is something that might happen again in the future. If it is a rare event, maybe. But it is not a test of whether it really happened. We know how the weather turned out.


So here I'm talking about location uncertainty. What if you had sampled in different places. And in this exercise I'll do just that. I'll choose subsets of 500 of the USHCN and see what answers we see. That is why USHCN is chosen - there is surplus information from the dense coverage.

Why use anomaly?

We'll see. What I want to show is that it dramatically reduces location sampling error. The reason is that the anomaly set is much more homogeneous, since the expected value everywhere is more or less zero. So there is less variation in switching stations in and out. So I'll measure the error with and without anomaly formation.
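A minimal sketch of the unweighted version of this Monte Carlo, assuming a data frame ushcn with one row per station and columns temp (the annual mean) and anom (its anomaly from the 1981-2010 normal); the names are mine, and the state area-weighted variant is not shown.

```r
set.seed(1)
draws <- replicate(1000, {
  i <- sample(nrow(ushcn), 500)                  # a random 500-station subset
  c(temp = mean(ushcn$temp[i]), anom = mean(ushcn$anom[i]))
})
rowMeans(draws)        # mean of means for raw temperatures and anomalies
apply(draws, 1, sd)    # the spatial (coverage) sampling s.d. for each
```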

USHCN example

So I'll look at the data for the 1218 stations in 2010, with an anomaly relative to the 1981-2010 average. In a Monte Carlo style, I make 1000 choices of 500 random stations, and find the average for 2011, first by just averaging station temperatures, and then the anomalies. The results (in °C) are:

Base 1981-2010, unweighted | Mean of means | s.d. of means
Temperatures | 11.863 | 0.201
Anomalies | 0.191 | 0.025


So the spatial error is reduced by a factor of 8, to an acceptable value. The error of temperature alone, at 0.201, was quite unacceptable. But anomalies perform even better with area-weighting, which should always be used. Here I calculate state averages and then area-weight the states (as USHCN used to do):

Update: I had implemented the area-weighting incorrectly when I posted about an hour ago. Now I think it is right, and the sd's are further reduced, although now the absolute improves by slightly more than the anomalies.

Base 1981-2010, area-weighted | Mean of means | s.d. of means
Temperatures | 12.102 | 0.137
Anomalies | 0.101 | 0.016


For both absolute T and anomalies, the mean has gone up, but the SD has reduced. In fact T improves by a slightly greater factor, but is still rather too high. The anomaly sd is now very good.

Does the anomaly base matter? A little, which is why WMO recommends the latest 3-decade period. I'll repeat the last table with the 1951-80 base:

Base 1951-80, area-weighted | Mean of means | s.d. of means
Temperatures | 12.103 | 0.138
Anomalies | 0.620 | 0.021

The T average is little changed, as expected. The small change reflects the fact that sampling 1000 makes the results almost independent of that random choice. But the anomaly mean is higher, reflecting warming. And the sd is a little higher, showing that subtracting a slightly worse estimate of the 2011 value (the older base) makes a less homogeneous set.

So what to make of spatial sampling error?

It is significant (with 500-station subsets) even for anomalies, and it is the reason why large datasets are sought. In terms of record hot years, I think there is a case for omitting it. It is the error that would arise if the set of stations had been changed between 2015 and 2016, and in practice that happened only to a very small extent. I don't think the theoretical possibility of juggling the station set between years is an appropriate consideration for such a record.

Conclusion

Spatial sampling error, or coverage error, is significant for ConUS even with anomalies. Reducing this error is why a lot of stations are used. It would be an order of magnitude greater without the use of anomalies, because of the much greater inhomogeneity - which is why one should never spatially average raw temperatures.

Wednesday, January 25, 2017

Prospects for 2017.

Early last year I wrote a post called prospects for 2016. It was mainly tracking the progress of the El Nino, and I introduced the series of bar plots comparing corresponding months of 1997/8. I've kept updating those plots to end 2016.

Anyway, commenter Uli and others made good use of the thread to make and monitor predictions for 2016. Uli's prediction of 0.99°C at the time turned out to be exact, although the strength of the El Nino caused him to sometimes think a little higher. So I hope this will continue. Uli's main review comment is here.

For my own part, I don't claim any special insight, but I think substantial cooling from here is not very likely (but of course possible). For those who have been following NCEP/NCAR, it has been a wild ride in January:



Pretty warm at the moment. Sea ice, both N and S, has been fascinating too. After a long excursion way below historic levels, the Arctic has dropped back to the field for the moment. The Antarctic excursion was even more extreme, and is still well below other years. Here it is more interesting, because it is not far from the minimum. Again the earlier very high melting rate has not been maintained, but because of the coastline geometry that is inevitable: a lot of the shore doesn't have much ice left to melt (here is the movie).

So I'll leave the thread there open for comments through the year. Thanks, Uli and all.

Thursday, January 19, 2017

GISS fell 0.12°C in December; NOAA rose 0.04°C.

The contrasting directions are a result of the fading influence of polar warming. Basically, NOAA was down in November because it included the Siberian cold, but not the complementary Arctic warmth, while GISS was high then. The split in the TempLS variants was similar, and BEST also dropped by about 0.06°C, similar to TempLS mesh. The drop in GISS was a little larger than I expected. HADCRUT 4 rose by about 0.07°C, under the same influence as NOAA and TempLS grid.

As noted in my earlier post today, GISS, NOAA and HADCRUT 4 did set records in 2016, GISS by a larger margin.


I'll show the regular GISS plot and the TempLS comparison below the fold.

GISS and NOAA both find that 2016 was the hottest year recorded.

GISS and NOAA have jointly released their 2016 results. GISS's press release is here. NOAA's is here. I'll analyse the December results in a separate post, but for the moment, I'll just post the cumulative record plots:
GISS:


NOAA:



A fuller set of records for various indices is here, along with a discussion of UAH. The discussion of TempLS is here. Every index so far has reported a record year, and I expect the rest to do so. BEST also reported a record; BEST's report is here, and here is their cumulative record plot:





Update: Hot Whopper has more details.

Update. I see that HADCRUT 4 is out too, and they also had a record year, by a small margin. Here is their plot:



The full set of these plots shown here is now fairly well filled in with complete data.


Friday, January 6, 2017

First surface temperature 2016 record - TempLS.

TempLS mesh was down a little in December, from 0.699°C in November to 0.653°C. That actually makes it the third warmest December in the record, behind 2014 (just) and 2015. TempLS grid actually went up, by about 0.04°C. As usual, this reflects differences at the poles, which were not so warm in December. These affect TempLS mesh more, as they will GISS relative to NOAA/HADCRUT.

The main map features are cold in N central Russia and NW America (but not Alaska), and big warmth around Central Asia.



But the main news is that it completes the year average, which was a record high at 0.807°C. All TempLS anomalies are relative to a 1961-90 base. 2016 beat the 2015 record of 0.728°C, so there shouldn't be much chatter about a "statistical tie". I have posted the series of cumulative record plots here. The plot for TempLS mesh is below. It shows a new color and level for each year in which a record was set.
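(Incidentally, picking out the record years for such a plot is simple; a minimal R sketch, assuming ann is a hypothetical numeric vector of annual averages named by year:)

records <- ann == cummax(ann)              # TRUE where the running maximum is reset
names(ann)[records]                        # the record years, each getting a new colour and level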



TempLS grid was also a record at 0.776°C, vs 0.745°C for 2015. That's closer, reflecting again the fact that warmth at the poles was a feature of 2016, and is picked up more strongly by TempLS mesh. I expect that this will be reflected in the major indices, with GISS setting a record by a good margin, but NOAA and HADCRUT closer. In fact, HADCRUT is no sure thing, although I think the rise in TempLS grid this month increases the likelihood.

Data for RSS and UAH5.6 are also in. RSS V3.3 TLT dropped, as did UAH V6, but still narrowly set a record for warmth in 2016. UAH5.6 had 2016 warmer by a more substantial margin. You can see the predicted record plots updated as data comes in here.



Wednesday, January 4, 2017

UAH - first index with record warm 2016

...if you don't count NCEP/NCAR. But UAH only just scraped in, due to a drop of 0.21°C in December. I see Roy Spencer has now adopted the line "not statistically warmer". I guess we'll hear a lot of this. There will be indices that will never be "statistically warmer", even while the temperatures go up and up.

Anyway, in previous years I have shown progressive record plots to show how the record has crept (or, sometimes recently, leapt) upward over the years. This year I wanted to take an advance peek. So I did a range of indices, infilling missing months with the minimum month of 2016. That is pretty conservative, although it would have overestimated UAH. I show the plots below; there is more explanation of the style here. This time I have headed the plots with "Incomplete" and superimposed a pink cross where there was infilling. I'll maintain the plots, so when the results are all in, those markers will go away. Use the arrow buttons to flick through the datasets. I'll try to keep the most recently finalised dataset showing first. With conservative infilling, all indices currently show a record, though some are close. Often, the margin is high if that for 2015 was low, and vice versa.
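A rough R sketch of that conservative infill, assuming a hypothetical vector mon of an index's twelve 2016 monthly anomalies, with NA for months not yet reported:

filled  <- ifelse(is.na(mon), min(mon, na.rm = TRUE), mon)  # infill with the coolest reported 2016 month
ann2016 <- mean(filled)                                     # conservative estimate of the annual mean
had_gap <- any(is.na(mon))                                  # flags the "Incomplete" heading and pink cross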



For the meaning of the headings, see the glossary here.


Tuesday, January 3, 2017

NCEP/NCAR December Down 0.09°C - coolest since June.

The Moyhu NCEP/NCAR index fell from 0.48°C in November to 0.391°C (June was 0.369°C). It was an up and down month, cool in the middle, but warm at the end.

I don't normally talk about long-term records with reanalysis, because I don't think it is homogeneous enough. But I will note that the successive annual averages for the last three years were 0.19, 0.33, and 0.531°C (anomaly base 1994-2013). So 2016 was up by 0.2°C. This is an indicator of large record margins in other indices. It also means that every month in 2016, including December, was warmer than the 2015 average. El Nino has gone, but it stayed warm.

A feature was again a band of cold stretching from Cairo to Vladivostok through Russia, though most of Asia was warm. Also cold in Canada and the northern US, though not in the south. Warm in the Arctic - a pattern for the December quarter.

Sea ice in the Arctic became more normal, and the very fast melting in the Antarctic slowed, although there is still a lot less ice than in other years.