Wednesday, August 16, 2017

GISS July up 0.15°C from June.

GISS was up from 0.68°C in June to 0.83°C in July. It was the warmest July in the record, though the GISS report says it "statistically tied" with 2016 (0.82). The increase was similar to the 0.12°C rise in TempLS.

The overall pattern was similar to that in TempLS. Warm almost everywhere, with a big band across mid-latitude Eurasia and N Africa. Cool in parts of the Arctic, which may save some ice.

I'll show the plot of recent months on the same 1981-2010 base, mainly because they are currently unusually unanimous. The group HADCRUT/NOAA/TempLS_grid tend to be less sensitive to the Antarctic variations that have dominated recent months, and I'd expect them to be not much changed in July also, which would leave them also in much the same place.



Recently, August reanalysis has been unusually warm. As usual here, I will compare the GISS and previous TempLS plots below the jump.

Tuesday, August 8, 2017

July global surface temperature up 0.11°C

TempLS mesh anomaly (1961-90 base) was up from 0.568°C in June to 0.679°C in July. This follows the smaller rise of 0.06°C in the NCEP/NCAR index, and a similar rise (0.07) in the UAH LT satellite index. The July value is just a whisker short of July 2016, which was a record warm month. With results for Mexico and Peru still to come, that could change.

Again the dominant change was in Antarctica, from very cold in June to just above average in July. On this basis, I'd expect GISS to also rise; NOAA and HADCRUT not so much. Otherwise as with the reanalysis, Middle East and around Mongolia were warm, also Australia and Western USA. Nowhere very hot or cold. Here is the map:



Thursday, August 3, 2017

July NCEP/NCAR up 0.058°C

In the Moyhu NCEP/NCAR index, the monthly reanalysis average rose from 0.241°C in June to 0.299°C in July, 2017. This is lower than July 2016 but considerably higher than July 2015. The interesting point was a sudden rise on about July 24, which is responsible for all the increase since June. It may be tapering off now.

It was generally warm in temperate Asia and the Middle East, and even Australia. Antarctica was mixed, not as cold as June. The Arctic has been fairly cool.





Saturday, July 22, 2017

NOAA's new ERSST V5 Sea surface temperature and TempLS

The paper describing the new version V5 of ERSST has been published in the Journal of Climate. The data is posted, and there is a NOAA descriptive page here. From the abstract of the (paywalled) paper, by Huang et al:
This update incorporates a new release of ICOADS R3.0, a decade of near-surface data from Argo floats, and a new estimate of centennial sea-ice from HadISST2. A number of choices in aspects of quality control, bias adjustment and interpolation have been substantively revised. The resulting ERSST estimates have more realistic spatio-temporal variations, better representation of high latitude SSTs, and ship SST biases are now calculated relative to more accurate buoy measurements, while the global long-term trend remains about the same.
A lot of people have asked about including ARGO data, but it may be less significant than it seems. ARGO floats only come to the surface once every ten days, while the more numerous drifter buoys are returning data all the time. There was a clamor for the biases to be calculated relative to the more accurate buoys, but as I frequently argued, as a matter of simple arithmetic it makes absolutely no difference to the anomaly result. And sure enough, they report that it just reduces all readings by 0.077°C. That can't affect trends, spatial patterns etc.
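To see the arithmetic, here is a minimal R sketch, with made-up numbers, showing that a constant offset applied to every reading cancels exactly when anomalies are formed:

# A constant offset applied to every reading leaves anomalies (and hence
# trends and spatial patterns) unchanged.
set.seed(1)
sst <- 15 + 0.01 * (1:240) + rnorm(240, sd = 0.3)   # a fake monthly SST series
shifted <- sst - 0.077                               # apply the constant buoy offset
anom  <- sst - mean(sst[1:120])                      # anomaly on some base period
anom2 <- shifted - mean(shifted[1:120])
all.equal(anom, anom2)                               # TRUE - the offset cancels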

The new data was not used for the June NOAA global index, nor for any other indices that I know of. But I'm sure it will be soon. So I have downloaded it and tried it out in TempLS. I have incorporated it in place of the old V3b. So how much difference does it make? The abstract says
Furthermore, high latitude SSTs are decreased by 0.1°–0.2°C by using sea-ice concentration from HadISST2 over HadISST1. Changes arising from remaining innovations are mostly important at small space and time scales, primarily having an impact where and when input observations are sparse. Cross-validations and verifications with independent modern observations show that the updates incorporated in ERSSTv5 have improved the representation of spatial variability over the global oceans, the magnitude of El Niño and La Niña events, and the decadal nature of SST changes over 1930s–40s when observation instruments changed rapidly. Both long (1900–2015) and short (2000–2015) term SST trends in ERSSTv5 remain significant as in ERSSTv4.
The sea ice difference may matter most - this is a long standing problem area in incorporating SST in global measures. On the NOAA page, they show a comparison graph:



There are no obvious systematic trend differences. The most noticeable change is around WWII, which is a bit of a black spot for SST data. A marked and often suspected peak around 1944 has diminished, with a deeper dip around 1942.

TempLS would be expected to reflect this, since most of its data is SST. Here is the corresponding series for TempLS mesh plotted:



Global trends (in °C/century) are barely affected. Reduced slightly in recent decades, increased slightly since 1900:

start year   end year   TempLS with V4   TempLS with V5
1900         2016       0.769            0.791
1940         2016       0.978            0.974
1960         2016       1.489            1.465
1980         2016       1.631            1.607

Almost identical behaviour is seen with TempLS grid.







Thursday, July 20, 2017

NOAA global surface temperature down just 0.01°C

Down from 0.83°C in May to 0.82°C in June (report here). I don't normally post separately about NOAA, but here I think the striking difference from GISS/TempLS mesh is significant. GISS went down 0.19°C, and TempLS mesh by 0.12°C. But TempLS grid actually rose, very slightly. I have often noted the close correspondence between NOAA and TempLS grid (and the looser one between TempLS mesh and GISS), and attributed the difference to the better polar coverage of GISS and TempLS mesh.

This month, the cause of that difference is clear, as is the relative coolness of June in GISS. With TempLS reports, I post a breakdown of the regional contributions. These are actual contributions, not just average temperature. So in the following:



you see that the total dropped by about 0.12°C, while Antarctica's contribution dropped from 0.07°C to -0.07°C, a difference that slightly exceeded the global total drop of 0.12°C.

That doesn't mean that, but for Antarctica, there would have been no cooling. May had been held up by the relative Antarctic warmth. But it is a further illustration of the difference between the interpolative procedures of GISS and TempLS and the cruder grid-based processes of NOAA and TempLS grid. I would probably have abandoned TempLS grid, or at least replaced it with a more interpolative version (post coming soon), if it were not for the correspondence with NOAA and HADCRUT.

Update: I see that the paper for ERSST V5 has just been published in J Climate. I'll post about that very soon, and also, maybe separately, give an analysis of its effect in TempLS. I see also that NOAA was still using V4 for June; I assume they will use V5 for July, as I expect I will. The NOAA ERSST V5 page is here.

Here is the NOAA map for the month. You can see how the poles are missing.





Saturday, July 15, 2017

GISS June down 0.19°C from May.

GISS was down from 0.88°C in May to 0.69°C in June. The GISS report is here; they say it was the fourth warmest June on record. The drop was somewhat more than the 0.12°C in TempLS. The most recent month that was cooler than that was November 2014.

The overall pattern was similar to that in TempLS. The big feature was cold in Antarctica, to which both GISS and TempLS mesh are sensitive, more so than HADCRUT or NOAA. Otherwise, as with TempLS, it was warm in Europe, extending through Africa and the Middle East, and also through the Americas. Apart from Antarctica, the main cold spot was NW Russia.

So far, July is also cold, although with some signs of warming a little from June. As usual, I will compare the GISS and previous TempLS plots below the jump.

Saturday, July 8, 2017

June global surface temperature down 0.12°C

TempLS mesh was down from 0.704°C in May to 0.586°C in June. This follows the slightly larger fall of 0.16°C in the NCEP/NCAR index, and falls in the satellite indices, which had risen in May. The June anomaly (1961-90 base) is now a little below mid-2015 values, and is the coolest month since Nov 2014. In fact, it is similar to the 2014 annual average, which was still a record in its day.

The big turnaround was in the Antarctic, which went from quite warm to very cool. This shows up in comparison with the TempLS grid values, which are less sensitive to the poles; TempLS grid actually warmed. This pattern tends to be reflected in the main indices, with GISS generally picking up the polar changes; NOAA and HADCRUT less so. Otherwise as with the reanalysis, Europe was warm, NW Russia cold, Arctic neutral, warm spots in the Americas. Here is the map, and I'll show below the breakdown, which emphasises the Antarctic turnaround:



Breakdown plot:





Tuesday, July 4, 2017

New RSS TLT V4 - comparisons

As mentioned in my previous post, RSS has a new V4 TLT out - announcement here. I'm now using it in place of V3.3. The J Climate paper describing it is here:

A satellite-derived lower tropospheric atmospheric temperature dataset using an optimized adjustment for diurnal effects

Carl A. Mears and Frank J. Wentz
Remote Sensing Systems, 444 Tenth Street, Santa Rosa, CA, 95401

I quoted from the abstract in my previous post.

The changes are described in those links, and are not surprising, given the previous datasets (eg TMT, TTT) that have come out in V4. I thought here I would just show a comparison of recent changes in both UAH and RSS - they are rather complementary. In the graph below, I have converted RSS from 1979-1999 to the UAH base of 1981-2010. I use reddish for UAH, bluish for RSS (12 month running mean):



The effect of the change is clearer if a common measure is subtracted - I use the average of the four sets here for that:
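For anyone wanting to reproduce this kind of comparison, here is a hedged R sketch of the two steps involved: shifting a series onto the 1981-2010 anomaly base, and subtracting the mean of the sets being compared. The series here are random placeholders, not the actual data (and only two sets are used rather than four).

set.seed(2)
yr  <- rep(1979:2016, each = 12)                    # monthly time axis
rss <- rnorm(length(yr)); uah <- rnorm(length(yr))  # placeholder anomaly series

rebase <- function(x, yr, base = c(1981, 2010))     # shift to a new anomaly base
  x - mean(x[yr >= base[1] & yr <= base[2]])
rss <- rebase(rss, yr)                              # RSS moved to the 1981-2010 base

m      <- cbind(rss, uah)
common <- rowMeans(m)                               # the common measure
resid  <- m - common                                # differences, as in the second graph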



Now you can see what has happened. RSS TLT V4 is close to UAH V5.6, and UAH V6 is close to the old RSS V3.3 (which RSS described as having a known cooling bias). As they noted, the new RSS V4 shows more uniformity over time. The overall picture is that TLT measures are not stable; much less so than surface measures, as I noted here.

Contrary to some (mainly sceptic) opinion, satellite measures are not naturally superior. Measuring the temperature at various levels of the troposphere is a worthwhile endeavour, but it is not a substitute for surface. In fact, I think TLT has had undeserved prominence, and I rather thought RSS should drop it altogether. It is an attempt to get as close to surface as possible, but it isn't very close, and sacrifices much reliability in trying to get there. I notice that John Christy now usually quotes UAH TMT.

The reason for loss of reliability is that the MSU is trying to make deductions from a microwave signal which is a mix of various layers in the troposphere, with a large background noise generated at the surface. It is hard to discriminate, and harder as you try to see closer to the surface. They try to get around this by taking two measures designed for higher levels (TMT and, for UAH, a tropopause level TP), and forming a linear combination which is designed to subtract out the higher troposphere and stratosphere levels. But as with any such differencing, errors increase.
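The error growth is just variance arithmetic. A hedged R sketch with illustrative numbers (these are not the actual channel weights or error estimates):

a <- 1.5; b <- 0.5                     # illustrative weights with a - b = 1
sd_tmt <- 0.1; sd_tp <- 0.1            # assumed independent per-channel errors
sd_tlt <- sqrt((a * sd_tmt)^2 + (b * sd_tp)^2)
sd_tlt                                 # about 0.158, larger than either input error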

People have the idea that satellites just have to be better, because they can survey the whole Earth with one instrument. But that is far from true. The downsides are described in this UAH overview and the various RSS papers, and include:
  • There is only one instrument, or at most a few, while at the surface there are thousands, creating lots of redundancy. One consequence is that with satellites there is a big problem with the inevitable changeovers. Surface stations need some adjustment when the instruments or environments change, but that is minor compared with changing the whole instrument base every few years.
  • The instrument doesn't read a thermometer at every level. It has to resolve a mixed incoming microwave beam, confounded with surface noise. You can get some resolution with frequency bands, and a little more with differing angles of view. But it is really squinting, and in the end you have to solve an inverse problem, which takes adventurous mathematics.
  • The instrument gives a snapshot just twice a day. At surface, even the old min/max thermometers, though read only once, continuously monitored the min and max for 24 hours, and of course now we have thousands of stations recording at high frequency. A problem with twice a day is that you have to make adjustments for what time of day it is, because of diurnal variation. And that diurnal pattern depends on the level (not clearly known), season etc. A hard enough problem, but the big one is
  • diurnal drift. It isn't the same time every day, due to orbit changes, and they seem to have trouble deciding exactly what time it is. Roy Spencer says of V6:
    For example, years ago we could use certain AMSU-carrying satellites which minimized the effect of diurnal drift, which we did not explicitly correct for. That is no longer possible, and an explicit correction for diurnal drift is now necessary. The correction for diurnal drift is difficult to do well, and we have been committed to it being empirically–based, partly to provide an alternative to the RSS satellite dataset which uses a climate model for the diurnal drift adjustment.
  • It is a long standing bugbear, and much of the RSS change also seems to be in the drift correction. From their paper abstract:
    Previous versions of this dataset used general circulation model output to remove the effects of drifting local measurement time on the measured temperatures. In this paper, we present a method to optimize these adjustments using information from the satellite measurements themselves. The new method finds a global-mean land diurnal cycle that peaks later in the afternoon, leading to improved agreement between measurements made by co-orbiting satellites.

Those are just some of the problems which lead to such large version changes.

Update: From a tweet from Carl Mears, here is a useful FAQ on the changes.


Further: David asked below about comparison with radiosondes. That FAQ has a diagram showing the comparison:



It is sat - sondes, so when you see in this century that the plot goes down, it means that radiosondes are showing more warming than satellites. With UAH V6.0 it is a lot more; with RSS TLT V4 it is closer, but sondes still show more warming. As the FAQ says:

"Note that all satellite data warm relative to radiosondes before about 2000, and then cool after about 2000. We don't know if this overall pattern is due to problems with the radiosonde data, with the satellite data or (most likely) both."


Monday, July 3, 2017

June NCEP/NCAR down 0.16°C

In the Moyhu NCEP/NCAR index, the monthly reanalysis average fell from 0.40°C in May to 0.241°C in June, 2017. This makes it the coolest month for nearly two years - since 0.164°C in July 2015. Even so, it was still the third warmest in the record for that index, though I recommend caution in comparing values across decades, because of lack of homogeneity. It was only just behind 2013 (0.249) for second place. It's the first time for nearly two years that a month fell behind an earlier corresponding month other than 2016.

The main cool spot was Antarctica, and the main reason for the drop was that, as well, the Arctic dropped back to average, with Siberia mixed. Europe was warm.

In other (tropospheric) news, RSS has brought out a V4 version of TLT, described in a J Climate paper by Mears and Wentz here. I'll start using it for this month's reporting. I was actually wondering whether they would, since the trend seems to be more toward quoting TMT and TTT. As has been the pattern with V4, the low trend that RSS V3.3 showed until recently, which gave rise to umpteen pause stories, has come closer to other records, mainly, they say, due to a revised diurnal correction. Here is their abstract:

A satellite-derived lower tropospheric atmospheric temperature dataset using an optimized adjustment for diurnal effects

Carl A. Mears and Frank J. Wentz
Remote Sensing Systems, 444 Tenth Street, Santa Rosa, CA, 95401

Temperature sounding microwave radiometers flown on polar-orbiting weather satellites provide a long-term, global-scale record of upper-atmosphere temperatures, beginning in late 1978 and continuing to the present. The focus of this paper is a lower-tropospheric temperature product constructed using measurements made by the Microwave Sounding Unit channel 2, and the Advanced Microwave Sounding Unit channel 5. The temperature weighting functions for these channels peak in the mid to upper troposphere. By using a weighted average of measurements made at different Earth incidence angles, the effective weighting function can be lowered so that it peaks in the lower troposphere. Previous versions of this dataset used general circulation model output to remove the effects of drifting local measurement time on the measured temperatures. In this paper, we present a method to optimize these adjustments using information from the satellite measurements themselves. The new method finds a global-mean land diurnal cycle that peaks later in the afternoon, leading to improved agreement between measurements made by co-orbiting satellites. The changes result in global-scale warming (global trend (70S-80N, 1979-2016) = 0.174°C/decade), ~30% larger than our previous version of the dataset (global trend, (70S-80N, 1979-2016) = 0.134°C/decade). This change is primarily due to the changes in the adjustment for drifting local measurement time. The new dataset shows more warming than most similar datasets constructed from satellites or radiosonde data. However, comparisons with total column water vapor over the oceans suggest that the new dataset may not show enough warming in the tropics.


I have updated the data link in the source table.





Tuesday, June 27, 2017

Temperature station distribution - equal area plot

I have been experimenting with maps that are a byproduct of my systematising a cubed sphere grid. I thought it would give a better perspective on the distribution of surface stations and their gaps, especially with the poles. So here are plots of the stations, land and sea, which have reported April 2017 data, as used in TempLS. The ERSST data has already undergone some culling.



It shows the areas in proportion. However, it shows multiple Antarcticas etc, which exaggerates the impression of bare spots, so you have to allow for that. One could try a different projection - here is one focussing on a strip including the Americas:



So now there are too many Africas. However, between them you get a picture of coverage good and bad. Of course, then the question is to quantify the effect of the gaps.





Friday, June 23, 2017

World map equal area projection - more

In my last post, I showed an equal area world map projection that was a by-product of the cubed sphere gridding of the Earth's surface. It was an outline plot, which makes it a bit harder to read. Producing a colored plot was tricky, because the coloring process in R requires an intact loop, which ends where it started, and the process of unfolding the cube onto which the map is initially projected makes cuts.

So I fiddled more with that, and eventually got it working. I'll show the result below. You'll notice more clearly the local distortion near California and Victoria. And it clarifies how stuff gets split up by the cuts marked by blue lines. I haven't shown the lat/lon lines this time; they are much as before.





Monday, June 19, 2017

World map projection using cubed sphere

This post follows on from the previous post, which described the cubed sphere mapping which preserves areas in taking a surface grid from cube to sphere. I should apologise here for messing up the links for the associated WebGL plot for that post. I had linked to a local file version of the master JS file, so while it worked for me, I now realise that it wouldn't work elsewhere. I've fixed that.

If you have an area preserving plot onto the flat surfaces of a (paper) cube, then you only have to unfold the cube to get an equal-area map of the world on a page. It necessarily has distortion, and of course the cuts you make in taking apart the cube. But the area preserving aspect is interesting. So I'll show here how it works.



I've repeated the top and bottom of the cube, so you see multiple poles. Red lines are latitudes, green longitudes. The blue lines indicate the cuts in unfolding the cube, and you should try to not let your eye wander across them, because there is confusing duplication. And there is necessarily distortion near the ends of the lines. But it is an equal area map.

Well, almost. I'm using the single parameter tan() mapping from the previous post. I have been spending far too much time developing almost perfectly 1:1 area mappings. But I doubt they would make a noticeable difference. I may write about that soon, but it is rather geekish stuff.





Saturday, June 17, 2017

Cubing the sphere

I wrote back in 2015 about an improvement on standard latitude/longitude gridding for fields on Earth. That is essentially representing the earth on a cylinder, with big problems at the poles. It is much better to look to a more sphere-like shape, like a platonic solid. I described there a mesh derived from a cube. Even more promising is the icosahedron, and I wrote about that more recently, here and here.

I should review why and when gridding is needed. The original use was in mapping, so you could refer to a square where some feature might be found. The uniform lat/lon grid has a big merit - it is easy to decide which cell a place belongs in (just rounding). That needs to be preserved in any other scheme. Another use is in graphics, where shading or contouring is done. This is a variant of interpolation. If you know some values in a grid cell, you can estimate other places in the cell.

A variant of interpolation is averaging, or integration. You calculate cell averages, then add up to get the global. For this, the cell should be small enough that behaviour within it can be regarded as homogeneous. One sample point is reasonably representative of the whole. Then they are added according to area. Of course, the problem is that "small enough" may mean that many cells have no data.

A more demanding use still is in solution of partial differential equations, as in structural engineering or CFD, including climate GCMs. For that, you need to not only know about the cell, but its neighbors.

A cubed sphere is just a regular rectangular grid (think Rubik) on the cube projected, maybe after re-mapping on the cube, onto the sphere. I was interested to see that this is now catching on in the world of GCMs. Here is one paper written to support its use in the GFDL model. Here is an early and explanatory paper. The cube grid has all the required merits. It's easy enough to find the cell that a given place belongs in, provided you have the mapping. And the regularity means that, with some fiddly bits, you can pick out the neighbors. That supported the application that I wrote about in 2015, which resolved empty cells by using neighboring information. As described there, the resulting scheme is one of the best, giving results closely comparable with the triangular mesh and spherical harmonics methods. I called it enhanced infilling.

I say "easy enough", but I want to make it my routine basis (instead of lat/lon), so that needs support. Fortunately, the grids are generic; they don't depend on problem type. So I decided to make an R structure for standard meshes made by bisection. First the undivided cube, then 4 squares on each face, then 16, and so on. I stopped at 64, which gives 24576 cells. That is the same number of cells as in a 1.6° square mesh, but the lat/lon grid has some cells larger. You have to go to 1.4° to get equatorial cells of the same size.

I'll give more details in an appendix, with a link to where I have posted it. It has a unique cell numbering, with the area of each cell (for weighting), coordinates of the corners on the sphere, a neighbor structure, and I also give the cell numbers of all the measurement points that TempLS uses. There are also functions for doing the various conversions, from 3d coordinates on sphere to cube, and to cell numbering.
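Here, as a hedged illustration only (not the code in the posted file, and with my own naming conventions), is the kind of conversion involved: from longitude/latitude in radians to a face and cell of an n x n cubed-sphere grid, via gnomonic projection and the equi-angular tan() remapping.

lonlat_to_cell <- function(lon, lat, n = 64) {
  p <- c(cos(lat) * cos(lon), cos(lat) * sin(lon), sin(lat))  # unit vector on sphere
  face <- which.max(abs(p))                 # axis of the nearest cube face
  s <- p / p[face]                          # gnomonic projection onto that face
  uv <- s[-face]                            # the two in-face coordinates, in [-1,1]
  uv <- atan(uv) * 4 / pi                   # equi-angular remap, roughly equalising areas
  ij <- pmin(n, floor((uv + 1) / 2 * n) + 1)   # cell row and column, 1..n
  sgn <- (sign(p[face]) + 1) / 2            # 0 or 1: which of the two faces on this axis
  c(face = face, sign = sgn, i = ij[1], j = ij[2])
}
# e.g. lonlat_to_cell(lon = 145 * pi / 180, lat = -38 * pi / 180)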


There is also a WebGL depiction of the tessellated sphere, with outline world map, and the underlying cube with and without remapping.

Friday, June 16, 2017

GISS May unchanged from April - second warmest May on record.

As with TempLS, GISS showed May unchanged from April, at 0.88°C. Although that is down from the extreme warmth of Feb-Mar, it is still very warm historically. In fact, it isn't far behind the 0.93°C of May 2016. June looks like being cooler, which reduces the likelihood of 2017 exceeding 2016 overall.

The overall pattern was similar to that in TempLS. A big warm band from N of China to Morocco (hot), with warmth in Europe, and cold in NW Russia. Warm Alaska, coolish Arctic, and Antarctica mixed.

As usual, I will compare the GISS and previous TempLS plots below the jump.

Tuesday, June 13, 2017

Integrating temperature on sparse subgrids

I've been intermittently commenting on a thread on the long-quiet Climate Audit site. Nic Lewis was showing some interesting analysis on the effect of interpolation length in GISS, using the Python version of GISS code that he has running. So the talk turned to numerical integration, with the usual grumblers saying that it is all too complicated to be done by any but a trusted few (who actually don't seem to know how it is done). Never enough data etc.

So Olof chipped in with an interesting observation that with the published UAH 2.5x2.5° grid data (lower troposphere), an 18 point subset was sufficient to give quite good results. I must say that I was surprised at so few, but he gave this convincing plot:



He made it last year, so it runs to 2015. There was much scepticism there, and some aspersions, so I set out to emulate it, and of course, it was right. My plots and code are here, and the graph alone is here.

So I wondered how this would work with GISS. It isn't as smooth as UAH, and the 250 km less smooth than 1200km interpolation. So while 18 nodes (6x3) isn't quite enough, 108 nodes (12x9) is pretty good. Here are the plots:





I should add that this is the very simplest grid integration, with no use of enlightened infilling, which would help considerably. The code is here.
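For the record, here is a minimal R sketch of the kind of calculation meant by "the very simplest grid integration" (not the linked code): subsample the grid and form a cos(latitude)-weighted mean. The variable names are illustrative.

subgrid_mean <- function(field, lons, lats, nlon = 12, nlat = 9) {
  ilon <- round(seq(1, length(lons), length.out = nlon))   # pick the subgrid
  ilat <- round(seq(1, length(lats), length.out = nlat))
  sub  <- field[ilon, ilat]                                # sampled anomalies
  w    <- cos(lats[ilat] * pi / 180)                       # area weight by latitude
  W    <- matrix(w, nrow = nlon, ncol = nlat, byrow = TRUE)
  sum(sub * W, na.rm = TRUE) / sum(W[!is.na(sub)])         # weighted mean of sampled cells
}
# 'field' is assumed to be a lon x lat matrix of gridded anomalies for one month.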

Of course, when you look at a statistic over a longer period, even this small noise fades. Here are the GISS trends over 50 years:

1967-2016 trend (°C/cen)   Full mesh   108 points   18 points
250 km                     1.658       1.703        1.754
1200 km                    1.754       1.743        1.768


This is a somewhat different problem from my intermittent search for a 60-station subset. There has already been smoothing in gridding. But it shows that the spatial and temporal fluctuations that we focus on in individual maps are much diminished when aggregated over time or space.





Thursday, June 8, 2017

May global temperature unchanged from April

TempLS mesh was virtually unchanged, from 0.722°C to 0.725°C. This follows the smallish rise of 0.06°C in the NCEP/NCAR index, and larger rises in the satellite indices. The May temperature is still warm, in fact, not much less than May 2016 (0.763°C). But it puts 2017 to date now a little below the annual average for 2016.

The main interest is at the poles, where Antarctica was warm, and the Arctic rather cold, which may help retain the ice. There was a band of warmth running from Mongolia to Morocco, and cold in NW Russia. Here is the map:







Saturday, June 3, 2017

May NCEP/NCAR up 0.06°C

So far in 2017, in the Moyhu NCEP/NCAR index, January to March were very warm, but April was a lot cooler. May recovered a little, rising from 0.34 to 0.4°C, on the 1994-2013 anomaly base. This is still warm by historic standards, ahead of all annual averages before 2016, but it diminishes the likelihood that 2017 will be warmer than 2016.

There were few notable patterns of hot and cold - cold in central Russia and US, but warm in western US, etc. The Arctic was fairly neutral, which may explain the fairly slow melting of the ice.

Update - UAH lower troposphere V6 rose considerably, from 0.27°C to 0.45°C in May.



Wednesday, May 31, 2017

Page on monthly anomalies in WebGL

Moyhu has had for about four years a maintained page with a WebGL display of temperature anomalies over each month since 1900. The anomalies come from TempLS, and use a 1961-90 base period. It is a color-shaded plot, in which the color is correct at each measurement point, and interpolated for the rest of the triangular mesh. The data used is unadjusted GHCN V3 and ERSST V4. The plot is the best source of detailed information about the current month in its early days.

I have been upgrading these pages (trends described here) to use the new versions of the WebGL facility. That has involved also upgrading the facility, and I'll show the new anomaly plot below the jump. I'll leave the old page in place for a few days.

The main upgrade was to enable use of on-demand loading of data via XMLHTTPRequest, since it would take far too long to download data for all months. That involved creating selection menus (green block on right). To incorporate this in the facility, I have introduced user functions in the user file, needed to link the menus to URLs for the data. I have taken that further to allow user functions for the color scale and formatting of responses to click queries (you can display data for nodes in the mesh). It's all optional - defaults work as before.

So the plot is below the jump. You can select a year at a time, and the months will show as radio button choices (fast response). I'll describe the new facilities after the plot below.

Friday, May 19, 2017

New local station trends - comments.

Yesterday I posted a new WebGL map of station trends. I'd like to follow up with comments on two topics, both of which follow from a fix to a problem which added noise, and some bias, to the old version. With the clearer picture, I'd like to point out how the trends really do show a quite smooth consistent picture, mostly, even before adjustment. Then I'd like to talk about the exceptions (USA and China) and the effect of homogenisation.

Then (below the jump) I'll talk more about the effect of removing seasonality. It is substantial, and, I think, instructive.

First I'll show Europe - unadjusted on left, adjusted on right. All images here are of the thirty year period from 1987 to 2016. It shows a pattern typical of most of the world, with a large degree of uniform warm trend, with a few exceptions. The cool blob on the left, in the N Atlantic, is a shadow of a more prominent cooling in that area in more recent years. The effect of adjustment is not so radical, but it does reduce some of the excursions, some almost fully. It's possible the excursions were real, but given the general uniformity, it seems more likely that they were inhomogeneities.



Next is the USA, with some of Canada in contrast. The density of stations is obvious, as is the inconsistent but strong cooling trend. The issue is TOBS. A lot of stations changed with the conversion to MMTS, and the changes were generally in a direction that created artificial cooling. With adjustment, which includes TOBS correction, the picture is much clearer. Still some cooling in the mid-west, but otherwise warming, as in the rest of N America.



Finally, China. The stations are sparser, but again fairly irregular, although the denser regions are more consistent. And this time homogenisation does not make a consistent warming or cooling change. It does moderate some of the extreme cooling, so that might have a warming effect overall.



Finally, I would urge readers to check the page in detail, to see the overall effect of adjustment (the swap button helps here). The main thing to see is that adjustment does not have a general effect of increasing trends. It's true that it is hard to distinguish shades of red, but at least warm trends are not being created out of nothing.

Below the jump I'll deal with the seasonal issue.

Thursday, May 18, 2017

WebGL map of local station trends - various periods.

I have updated the page where I show trends over various periods at GHCN land stations and ERSST measures at sea. The old page is here. The map shows trends as a shaded color over the triangular mesh. The shade is exact for the nodes, which you can also query by clicking. Posts on the previous page are here and later here.

The page is not automatically updated, since the trends are at least two decades. However, the previous page was made in 2012, so a data update was needed. And it makes sense to use the new MoyGLV2.1 WebGL facility. I had been slow to update the old data partly because I had used a rather neat, but hard to debug, mesh compression scheme, described here. Each period needs a separate mesh, so that helps. However, downloads are now generally quicker than in 2012, so the full 3 Mb of data does not seem so forbidding. So I have sadly let that go. However, for this post I have put the WebGL below the jump, as it still may take quite a few seconds for some.

I also updated the computing method to correct a source of noise in the previous page. I think the issue is instructive, and in 2012, I hadn't done the thinking explained in some of my many pages on averaging, eg here. I have frequently explained why anomalies are used in spatial averaging, to overcome inhomogeneities. But I had not thought they were needed for a trend at a single station. But they are - seasonal variation is a big source of inhomogeneity, and should be subtracted out. It shows itself in two ways:
  • If missing values cluster in a cold or hot time, especially biased toward one end of the period, then it introduces a spurious trend, and
  • you can even get a spurious trend with all data present. Sin(x) between 0 and 360° has a non-zero trend; the fitted line changes by almost the full amplitude over the cycle. Taking 30 cycles reduces this by a factor of 30, but with a seasonal range of say 20°C, that can still be serious. Fortunately a calendar year is more like cos, which doesn't have a trend over that period, but not all data runs a full calendar year at the end.


The remedy is to, for each station, calculate the mean observed seasonal cycle, and subtract that out. I did that, to good effect. So, below the jump, or on the revised page, you can check out trends from the last two decades to century plus. The radio buttons let you look at unadjusted or adjusted GHCN (prefixes un_ and ad_). One thing I found useful is to compare (swap button) two trends for the same period, one adjusted, one not. It is clear that homogenisation clears up all kinds of aberration, without greatly affecting the main trend pattern, which except for aberrations is quite smooth in space.
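Here is a hedged R sketch of that remedy, plus a demonstration of the spurious trend described in the bullet points above. The data are fabricated; only the method is the point.

trend_deseasonalised <- function(temp, month, time) {
  clim <- tapply(temp, month, mean, na.rm = TRUE)   # mean observed seasonal cycle
  anom <- temp - clim[month]                        # subtract it out
  coef(lm(anom ~ time))[2]                          # trend, degrees per year
}
# A pure seasonal cycle with no real change, but with winter gaps early on:
time  <- seq(1987, 2017 - 1/12, by = 1/12)
month <- rep(1:12, length.out = length(time))
temp  <- 10 * sin(2 * pi * (time - 1987) + 1)       # fake cycle with a 20°C range
temp[month %in% c(1, 2) & time < 1995] <- NA        # missing winters early in the record
coef(lm(temp ~ time))[2]                            # raw fit: a spurious warming trend
trend_deseasonalised(temp, month, time)             # after deseasonalising: near zero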

So below the jump is the revised map. There are some operating instructions on the page, or more detail on the WebGL page or post.

Tuesday, May 16, 2017

GISS April down 0.23°C - second warmest April on record.

I have been noting records showing a large drop from the very warm levels of March. NCEP/NCAR was down 0.23°C, TempLS down by 0.165°C (now 0.16). GISS was also down 0.23°C, from 1.11°C in March to 0.88°C in April. But that is still warmer than any previous April except 2016. And it is warmer than the annual average for 2015 (0.82°C), itself a notable record in its time. Sou has more. The April temperature is back to that of January, after the peaks of Feb and March.

The NCEP/NCAR daily record showed what happened. There was a sharp descent through the month, seeming to bottom out at the end. May has recovered somewhat, but is likely to also be much cooler than March, and is so far behind the April average.

I showed last month the year-to-date plot, compared with other warm years, noting that the year so far was ahead of the 2016 average, as shown by the red curve and horizontal line. Now YTD 2017 is right on the 2016 average. May will probably bring it below. Record prospects for 2017 now depend a lot on renewed El Nino activity. Here is the current YTD plot:



As usual, I will compare the GISS and previous TempLS plots below the jump. As with TempLS, there were fewer big features - lingering warmth in Siberia/Arctic, some cold in Antarctic.

Wednesday, May 10, 2017

Global surface anomaly down 0.165°C in April.

I've been waiting for three days for China to report - most others are very punctual lately. So it could change a little. But enough is enough - and last month, when I waited for China, they sent in February data, so it would have been better not to wait. Anyway, TempLS mesh showed a drop from 0.894°C in March to 0.729°C in April. That compares to a larger 0.226°C drop in the reanalysis index. Meanwhile, the troposphere indices went up - 0.08°C for UAH V6. As I often seem to have to say, it is a different place.

Despite the drop, April was still very warm. It was the 16th warmest month of any kind in the TempLS record. It was warmer than any annual average before 2016, including the then record year of 2015.

There was still quite a lot of warmth in the Siberia/Arctic region, and also in the east US. Antarctica was cold. Here is the breakdown plot:



Probably the main point of future interest is that SST is quite a lot higher. Elsewhere mostly moderate, which is a reduction for Siberia and Arctic.



Monday, May 8, 2017

The WebGL facility - versions.

Clive Best has been making good use of the WebGL facility. So I thought I should be more formal about versioning. I have been calling the current V2 a beta; I'll now drop the beta, and stop tinkering with V2, apart from bug fixes. The next version will be 2.1. I'll include that in the URL, and keep old versions posted, so for existing apps you won't be affected by changes, unless you call the update URL.

The main change I made (today) to V2 was to the dragging. There hadn't been any external control on update frequency, and so dragging a globe with a lot of triangles or lines could lead to superposition of successive images, with messy results. I have put in a 20 millisecs delay, so it can only update 50 times per sec. That delay doesn't seem to be perceptible, and mostly fixes that problem. You can vary this; the default is
U.delay=20.

The other main change is that there is now an option in the user file to define an additional function called MoyLate(p,U). This has the same syntax and functionality as MoyDat, but it is implemented after the extra objects like line (_L) edges. You can assign them properties at this stage; it wasn't possible in MoyDat(). You can't define new objects here, and it isn't the place to vary objects defined in MoyDat(). You can set colors, or maybe more usefully, vary the show property, eg
p.Mesh_L.show=0
That means that initially the line edges won't show, and the checkbox will be there but blank.

Another change is that in the calling HTML, you still need to provide a DIV tag before the script calls, but it doesn't need an ID. If you don't provide a DIV, it will go looking for somewhere to hang the app. In principle, this means that you can have several apps running on the same page (without iframes), but I think that needs more work.



Friday, May 5, 2017

Nature paper on the "hiatus".

There is a new Nature paper getting discussed in various places. It is called Reconciling controversies about the 'global warming hiatus'. There is a detailed discussion in the LA Times. The Guardian chimes in. I got involved through a WUWT post on a GWPF paper. They seem to find support in it, but other skeptics seem to think the reconciliation was effective, and are looking for the catch.

I thought it was a surprisingly political article for Nature, in that it traces how the hiatus gained prominence through pressure from contrarians and right wing politics, and scientists gradually came to take it seriously. I think they are right, but the process should be resisted. There really isn't much there, and the fact that contrarians create a hullabaloo doesn't mean that it is worth serious study. I'll show why I think that.

I'm going to show plots of various data since 2001, which is the period quoted (eg by GWPF) which excludes the 1998 El Nino. They weren't so scrupulous about that in the past, but now they want to exclude the recent warm years. Typically "hiatus" periods end about 2013. I recommend using the temperature trend viewer to see this in perspective. The most hiatus-prone of the surface datasets, by far is HADCRUT (Cowtan and Way explain why). Here is the Viewer picture of HADCRUT 4 trends in the period:



Each dot represents a trend period, between the start year on the y-axis and the end on the x-axis. It's a lot easier to figure out in the viewer, which has an active time series graph that shows, when you click, what is represented. If you cherry-pick well, you can find a 13-year period with zero slope, shown by the brown contour. And you'll see that the hiatus periods form two descending columns, headed by a blue blob. These are the periods which end in a fixed year (approx) on the x-axis - ie a dip. There are just two of them, and they are the La Niña years of 2008/9 and 2011/2. The location of those events determines the hiatus. If you look at other sets on the trend viewer, you'll see this much more weakly. At WUWT I listed the 2001-13 trends thus (error range converted to ±1σ):

Dataset     Trend (°C/cen)
HADCRUT     0.063 ± 0.301
GISS        0.506 ± 0.367
NOAA        0.509 ± 0.326
BEST L/O    0.468 ± 0.432
C&Way       0.489 ± 0.391


All except HADCRUT are quite positive. People sometimes speak of a slowdown. Incidentally, in the triangle plot, there is a reddish horizontal bar, bottom left, that is almost as prominent as the "pause". They are the strong positive trends that you can draw starting in 1999 - ie the 2001-6 warmth seen from the other end. I don't remember anyone getting excited about this feature.

I'd like to talk about the arithmetic of trends. Trend is a first central moment. It has a lot in common with moments of force, or torque. I think of it as a see-saw - a classic torque device. A heavyweight on the end has a lot of effect; in the middle not much. And of course, it depends which end. Trend is an odd see-saw, because it has both weights (cold periods) and uplifts (warm). It also has a progression. Items come on one end, and then progress across, exerting less and then opposite torque, until they drop off the other end (if you keep period fixed). So there isn't actually a lot of the period that is determining the trend. It is predominantly the end forces.
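In R, the see-saw is easy to see, because the least squares trend is just a weighted sum of the data, with weights proportional to time measured from the middle of the period:

t <- seq(2001, 2013.999, by = 1/12)         # monthly time axis, 2001-2013
w <- (t - mean(t)) / sum((t - mean(t))^2)   # OLS slope = sum(w * y)
plot(t, w, type = "h", ylab = "weight of each month in the trend")
# The weights are largest (and of opposite sign) at the two ends, near zero in the middle.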

I'll illustrate that with this set of graphs (click the buttons below to see various datasets). It shows the mean (green) for 2001-2013 and colors the data (12-month running mean) as deviation from that value. The idea is that there has to be as much or more pulling the trend down rather than up, if it is to be negative. Either blue at the right or red at the left.



Now you can see that there aren't a lot of events that determine that. There is a red block from about 2001-6, which pulls the trend down. Then there are the two blue regions, the La Niñas of 2008/9 and 2011/12, which also pull it down. The La Niña of 2008 has small torque on this period, but would have been effective earlier. 2012 has the leverage, and so overcomes the sole uplift period of 2010.

That is just four periods, and it isn't hard to see how their effects can be chancy. It's really the 2001/6 warmth that is the anchor.

And then you see the big red period at the end, which overwhelms all this earlier stuff. GWPF and Co are keen to say that this is just a special case that should be excluded - the argument being that it wasn't caused by CO2. But the 2001-6 period is also just a natural excursion, and wasn't caused by CO2 either.

Basically the pause from 2001 won't come back until that big red is countered by a big blue. That would ensure that the trend returns close to that green line (extended). Of course, the red will be a powerful pauser for trends starting in 2015, and we'll hear about that soon enough.

Here is the same data colored by deviation from the trend from 2001 to present. We're still well on the red side of that too. The point here is that as long as new data lands above that line, it will be more red, and the trend will go up. It won't even reverse direction until you start seeing blue at that end. And if it did, there is a long way to go.



Now that the line has shifted, you can see how the blue periods would have destroyed such a trend earlier. But now, with their reduced leverage and the size of the red, that is where the trend ends up. For Hadcrut it's now 1.4°/Cen (other surface indices are higher).

So my conclusion is that, just as contrarians protest (with some justice) that not too much should be made of the current strong warming trends, because they are influenced by a single event, so too should the much weaker hiatus be observed with modest interest, because it is the result of the concurrence of two weaker events, La Niñas, which get less noticed because they are less prominent, but are equally rather chance occurrences.







Thursday, May 4, 2017

Land masks, mesh and global temperature

I have been writing articles about land masks, leading up to using them to check and maybe improve my triangular mesh based TempLS. As I have tried to emphasise, the core of estimating global average temperature anomaly (or average anything) is numerical spatial integration. The temperature is known at a finite number of points. It has to be inferred for all the rest (interpolation), and the resulting complete data integrated. To do this successfully, the data has to be fairly homogeneous, so anomalies are formed to take out variance in long term mean values. Then in the triangle method, linear interpolation is done within triangles.

But another kind of inhomogeneity is between land and sea, and indices often use a land mask to try to pin that down. In the mesh context, and in general, the idea is to ensure that values on land are only interpolated from land data; sea likewise.

The method corresponding to what is done with grids would be to count the mask elements within each triangle, and to divide coast-crossing triangles into a land and a sea part. Since all that matters in the end is the weighting of each node, it's only necessary to get the area right. Assigning maybe a million grid elements to triangles is a rather heavy computation. So I tried something more flexible.

Here is a snapshot from the WebGL graphic below. It shows a problem section in East Africa. Light blue triangles are those that have two sea nodes, one land, and orange are those with two land, one sea. The Horn of Africa is counted as sea, and there is a good deal of encroachment of sea on land. That is about as bad as it gets, and of course there is some cancelling where land encroaches on sea.


So I refine the mesh. On the longest 20% of lines in such triangles, with land at one end and sea at the other, I make an extra node, and test whether it is sea/land with the mask. Then I give it the value of its matching end type. With the new nodes, I then re-mesh. This process I repeat several times. After respectively four and seven steps I get:

As you see, the situation improves greatly. New nodes cluster around the coast. There are, however, still two rather large triangles at sea with a land node. These can show up when everything else seems converged; it is because of the convex hull re-meshing which may make different decisions about some of the large triangles bordering the coast. It slows convergence.

As to placement of that new node on the line, that is where the mask with a metric comes in. I know the approximate distance of each node from the coast, and can place the new node where I estimate the coast to cross. It doesn't have to be exact; the aim is just to minimise the number of interior nodes created.
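A hedged sketch of that placement step, assuming each end of the edge carries a distance-to-coast score from the mask (names and details are illustrative, not the TempLS code):

split_edge <- function(p_land, p_sea, d_land, d_sea) {
  # p_land, p_sea: 3D coordinates of the land and sea ends on the unit sphere
  # d_land, d_sea: approximate distances (in mask cells) of each end from the coast
  f <- d_land / (d_land + d_sea)           # estimated fraction of the way to the coast
  q <- (1 - f) * p_land + f * p_sea        # new point on the chord
  q / sqrt(sum(q^2))                       # push it back onto the sphere
}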


What I really want to know is what this does to the integral. So I tried first integrating the mask itself. That is a severe test; the result should show land/sea proportions as in a count of the mask. Then I tried integrating anomalies for February 2017. I'll show those below, but first, here is the WebGL showing of the seven stages of refinement (radio buttons).

Integration results

The table below shows the results of the progression. The left column is the area of the mixed triangles (part land, part sea), as a proportion of total surface. The next shows the result of integrating the mask itself, which should converge to 0.314. The third are the successive integrals of the anomalies for February 2017.

          Mixed area (fraction of sphere)   Integral of mask   Integral of anomaly (°C, Feb 2017)
Step 0    0.1766                            0.3118             0.8728
Step 1    0.1268                            0.3228             0.8583
Step 2    0.1097                            0.3192             0.8645
Step 3    0.0845                            0.3205             0.8655
Step 4    0.0682                            0.3212             0.8646
Step 5    0.0578                            0.3203             0.8663
Step 6    0.0489                            0.3208             0.8624
Step 7    0.0429                            0.3199             0.8611

Conclusion

I think it was a coincidence that the mask integration turned out near its target value of 0.314 at step 0 (no mesh change). As I said above, this is the most demanding case, maximising inhomogeneity. It doesn't improve because of the occasional flipping of triangles which leads to the occasional exceptions that show in the WebGL, but also because it started so close. For anomalies, the difference it makes to February 2017 is small at around 0.01°C.

So, while I am glad to have checked on the coast issue, I don't think it is worth incorporating this method in TempLS. It means extra convex hull calculation for each month, which is slow.







Wednesday, May 3, 2017

ERSST and Sea Ice

I use the NOAA ERSST V4 SST (Sea surface temperature) dataset as part of TempLS. It has the virtue of coming out promptly at the start of the month, and of course is the product of a lot of scientific work. But it has two nuisance aspects. One that I described last month, is that its 2x2° cells don't align very well with the coastal boundaries, and some repair action is needed. The other is the treatment of sea ice. ERSST returns values (if it can) for all non-land regions, and where there is sea ice, returns -1.8°C, which is the melting point of ice in sea, and so is indeed presumably the temperature of the water. But it isn't much use as a climate proxy there. Polar air over ice is often very much colder.

My aim is to mark these regions as no result, so that they will be interpolated, mostly from land. But that is complicated because, while -1.8 is clear enough, there are often temperatures close to that, which presumably mean mostly ice, or maybe ice for part of the month. So I have used a cut-off of -1°C.
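In practice that is a one-liner; a minimal R sketch, assuming the month's ERSST field has been read into a matrix sst (name illustrative):

# Mark everything colder than -1°C as missing, so those cells are interpolated
# (mostly from land) rather than treated as -1.8°C water under ice.
sst[!is.na(sst) & sst < -1] <- NA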

I have been working recently with land masks to improve the accuracy of TempLS near coasts. My preferred version uses a triangular mesh with nodes at measurement points, so triangles will often be part land, part sea. It would be desirable to ensure that the implied interpolation uses land values for land locations. I'll post soon on how this can be done. But it sharpens the problem of sea ice, because the land mask doesn't recognise it. So I need to use some data, and ERSST is to hand, to mark this as land rather than sea.

So I have been reviewing the criterion for making that determination. I actually still think that -1°C is reasonable. To see that, I mapped the ERSST grid for Jan-Mar 2017 to show where the in-between regions are. I used WebGL.

It might seem that WebGL is overkill, since the polar regions can be easily projected onto 2D. But the WebGL facility makes it the easiest way. I just set all positive temperatures to zero, use the GRID type so I don't have to work out triangles, and then the color mapping automatically devotes the color range to the region of interest (and makes a color key).

So here is the plot (drag to see poles); in those months (radio buttons) it is Arctic that is of most interest. You can see that most of the region expected to be sea ice is in fact at -1.8C, and the fringe regions are intermediate. But there are also regions around the Canadian islands, for example, which show up as higher than -1.8, but would be expected to be frozen. A level of -1 seems to capture all that, without unduly modifying the front to clear ocean.

April NCEP/NCAR down 0.226°C

Temperatures rose from January to March but dropped right back in April, from March's 0.566°C to 0.34°C. That makes it the coldest month since the 2016 El Nino, behind December's 0.391°C. But even so, it was warmer than the annual averages of both 2014 and 2015, each a record in its time.

The main cool places were Canada, N Europe and Antarctica. China, E Siberia and the Arctic Ocean were warm, as was even most of the US.

Update - slightly OT, but you may notice that the TempLS report for March has a strange number (0.653°C) for that month. The main table above it has the correct number. The reason seems to be that in GHCN in the last few days, a whole lot of March data has gone missing, as you can see in the station map of the report. I hope they fix it soon. Fortunately, the lack of data prevents it updating the main table.

Update 2 - I wrote to GHCN but no response so far. Meanwhile, the pattern has changed - no longer whole countries missing, but more stations overall, so that now TempLS won't report at all.

Update 3. I got a response from GHCN saying that it was an ingest problem, now fixed. And it does seem OK now.



Friday, April 28, 2017

Land masks with distance measure

I wrote earlier about my use of land masks to sharpen up the boundaries of the ERSST data set that I use (and stop SST grid centres turning up on land). I have a more ambitious use in improving the weighting of the TempLS triangular mesh for land/sea difference. At present, many elements have mixed land/sea, and it is largely left to chance to get the balance right. I think that usually works out, but it would be better to have control.

A land mask is a big matrix of 1's and 0's corresponding to a grid, usually lat/lon. It has 1 if the cell is on land; it may also have a % where there is doubt, or may have a binary choice. There are a lot of land masks around, down to kilometer resolution if you want, but common ones are 1°, 1/2° and 1/4°. That is what I will use (as used in the ISLSCP 2 project).

My general scheme is to refine the mesh to reduce the area of those spanning triangles. New nodes don't have new data attached, but their weight will be attributed to a land or sea station according to their placement.

I found that I would really like a more advanced mask, that actually gave a measure of the distance to the coast (for land and sea). It doesn't really increase the size of the mask. And it means that when I want to create a new node, I can place it toward the coast, instead of waiting for successive node generation to locate it. My scheme without this worked well for a while, but would create situations where new nodes would force a shift in some triangle that had all nodes on land. This happens because each mesh update is by convex hull formation, and with new nodes such a triangle might lose its tangent status.

So I set about making such a mask. I use a diffusion scheme. I mark the cells where land and sea adjoin, scored zero. Then next step I mark every neighbor cell on the land side +1, and on sea, -1. Then I mark their neighbors +2, -2, and so on.

But there is the problem of lakes. Masks generally show a lot of them, and I don't really want to know the distance to the nearest lake. So I first remove them. I do this by diffusion too. At this stage, I have the original 0,1 mask. I first advance the land by marking each of the 1 cells with a 1, and then again. That fills in most lakes, but also a lot of sea, especially bays etc. So then I diffuse back, advancing the 0's. This won't help the inland lakes, but will restore the sea cells to 0. Then I use the original mask to restore all land to 1 status.
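Here is a hedged R sketch of both operations, using simple 4-neighbour dilation on a 0/1 mask. It is my reconstruction of the idea, not the code used to make the posted file, and the function names are mine.

neighbours_any <- function(m) {            # TRUE where a cell or any 4-neighbour is TRUE
  n <- m
  n[-1, ] <- n[-1, ] | m[-nrow(m), ]       # cell above
  n[-nrow(m), ] <- n[-nrow(m), ] | m[-1, ] # cell below
  n[, -1] <- n[, -1] | m[, -ncol(m)]       # cell to the left
  n[, -ncol(m)] <- n[, -ncol(m)] | m[, -1] # cell to the right
  n
}
fill_lakes <- function(mask) {             # mask: 1 = land, 0 = sea
  m <- mask > 0
  for (i in 1:2) m <- neighbours_any(m)    # advance the land twice
  s <- !m
  for (i in 1:2) s <- neighbours_any(s)    # advance the sea back
  out <- !s
  out[mask > 0] <- TRUE                    # original land stays land
  out + 0
}
dist_to_coast <- function(mask) {          # signed distance in cells: +k land, -k sea
  land <- mask > 0
  d <- matrix(NA_real_, nrow(mask), ncol(mask))
  d[(land & neighbours_any(!land)) | (!land & neighbours_any(land))] <- 0  # the coast
  while (any(is.na(d))) {
    new <- is.na(d) & neighbours_any(!is.na(d))
    if (!any(new)) break                   # guard against regions with no coast at all
    k <- max(abs(d), na.rm = TRUE) + 1
    d[new & land] <- k
    d[new & !land] <- -k
  }
  d
}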

I'll show below how this all works. It has enabled the overall aim, a coast-hugging triangular mesh, which I'll show in my next post. I have put the results as a R data file here. It is a list "mask"; the components are a letter (q for original, a for lake-less, and n with dist to sea), and 1,2,4 for cells per degree.

Wednesday, April 26, 2017

GWPF International Temperature Data Review - second anniversary

I've been intermittently tracking the progress of this review, which seems to have zombie status. The web site is still there, with no sign of news or termination. The project itself was announced here, with banner headlines in the Telegraph ("Top Scientists Start To Examine Fiddled Global Warming Figures") and echoes. I described the state of play in September 2015.

I posted on the previous anniversary. I thought it necessary to maintain a watch, because they had said that despite not proceeding to a report, papers would be written, including one on the submissions, and publication of the submissions would be held back until then. But Sept 2015 was the last news posting, and I have not heard of any progress with papers.

This is probably my last post on the topic - I think we have to deem it totally dead, despite the GWPF website still promising progress.



Sunday, April 23, 2017

Land Masks and ERSST

I use ERSST V4 as the ocean temperature data for TempLS. The actual form of the data is sometimes inconvenient; it probably wasn't intended for my kind of use. I described how it fits in here. My main complaint there was that it sets SST under sea ice to -1.8°C, which is obviously not useful as an air proxy. They can't produce a good proxy there, but it would be better to have the area explicitly masked, as when the temperature is below about 1° you can't tell whether it is really so, or whether part of the month was frozen over, pulling down the average.

I described last month a new process I use to get a more evenly distributed subset of the ERSST cells for processing. The native 2x2° density is unbalanced relative to the land stations, and biases the average toward marine readings. The new scheme works well, but it draws attention to another issue. ERSST seems to quote a temperature for any cell for which they have a reading, even if the cell is mostly land. And in the new scheme, cell centers can more easily end up on land. In particular, one turned up in the English Midlands, near what I was once told is the point of maximum distance from the sea.

I've been thinking more about land masking lately. I have from a long while ago a set of masks that were used in the ISLSCP 2 project. They come in 1, 1/2 and 1/4° resolution, and in one version have percentages marked. I used the percent version to get land % for the 2° grid, and compared with what ERSST reported. Here is a WebGL version of that:



The ERSST filled cells are marked in pink; the land mask in lilac. The cells in green are both in ERSST and the land mask; white cells are in neither. You can switch the checkboxes top right to look at just ERSST, just mask, or just the green if you want. I called the green OVER, because it seems to mainly show sea intruding on land.

There is a tendency for the green to appear on west coasts, which suggests that the ERSST might be misaligned. One annoying thing about ERSST is that they aren't explicit about whether the coordinates given for a cell represent the center or a corner. I've assumed center. If you moved ERSST one degree west, the green would then appear, a little more profusely, on the east coasts. I used 60% sea as the cut-off for the land mask. This was a result of trial; 50% meant that the land mask tended to fall short of the coast more than overshoot; 60% seemed to be the balance point. Either is pretty good.

So my remedy has been to remove the green cells from the ERSST data. That seems to fix the problem. It raises anomalies very slightly, because it upweights land, but March rose only from 0.890°C to 0.894°C, with similarly small rises in earlier months. The area involved is small.
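In code terms the remedy is just a filter. A hedged sketch, with assumed names (landpct2 is the ISLSCP land percentage aggregated to the ERSST 2° grid with centre coordinates lon2/lat2, and sst is a data frame of ERSST cells):

cutoff <- 40                                            # keep cells that are at least 60% sea
i <- cbind(match(sst$lon, lon2), match(sst$lat, lat2))  # place each ERSST cell on the 2 deg grid
keep <- landpct2[i] <= cutoff                           # drop the "green" mostly-land cells
sst <- sst[keep, ]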

I am now looking at ways to landmask the triangular mesh.



Friday, April 21, 2017

Spherical Harmonics - the movie

This is in a way a follow-up to the Easter Egg post. There I was showing the icosahedron-based mesh with various flashing colors, with a background of transitions between spherical harmonics (SH) to make an evolution. Taking away the visual effects and improving the resolution makes it, IMO, a good way of showing the whole family of spherical harmonics. I described them and how to calculate them here, with a visualisation as radial surfaces here.

Just reviewing - the SH are the analogue of trig functions in 1D Fourier analysis. They are orthogonal with respect to integration on the surface, and as with 1D Fourier, you can project any function onto a subspace spanned by a finite set of them - that is, a least squares fit. The fit has various uses. I use one regularly in my presentation of TempLS results, and each month I show how it compares with the later GISS plot (it compares well). I also use it as an integration method; all but the first SH integrate exactly to zero, so with a projection onto SH space, the first coefficient gives the integral. I think it is nearly as good as the triangle mesh integration.
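In code the integration step is trivial once the harmonics are tabulated at the stations. A generic sketch (not the TempLS internals): H is a matrix of harmonic values at the station locations, with its first column the constant 1 and the rest integrating to zero over the sphere; the global mean of the fitted surface is then just the first fitted coefficient:

sh_mean <- function(anom, H) {
  # least squares projection onto the harmonics; all columns but the constant
  # integrate to zero, so the fitted surface's global mean is the first coefficient
  lm.fit(H, anom)$coefficients[1]
}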

As with trig functions, the orthogonality occurs because they have oscillations that can't be brought into phase, but cancel. That is the main point of the pattern that I will show. There are two integer parameters, L and M, with 0≤M≤L. Broadly, L represents the total number of oscillations, some in latitude and some around the longitude, and M represents how they are divided. With M=0, the SH is a function of latitude only, and with M=L, of longitude only (in fact, a trig function sin(M*φ)). Otherwise there is an array of peaks and dips.

Sunday, April 16, 2017

A Magical Easter Egg

This is a Very Serious Post. Really. It's a follow-up to my previous post about icosahedral tessellation of the sphere (Earth). The idea is to divide the Earth as nearly as possible into equal equilateral triangles. It's an extension of the cubed sphere that I use for gridding in TempLS. The next step is to subdivide each of the 20 equilateral triangles of the icosahedron into smaller triangles and project the result onto the sphere. This creates some distortion near the vertices, but less than for the cube.
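Here, for concreteness, is a small R sketch of that subdivision (my own illustration, not the code behind this post), using the geometry package's convhulln() to recover the 20 faces, then laying a barycentric grid of points on each face and normalising them out to the unit sphere:

library(geometry)

phi <- (1 + sqrt(5)) / 2
V <- rbind(
  cbind(0, c(1, 1, -1, -1), c(phi, -phi, phi, -phi)),
  cbind(c(1, 1, -1, -1), c(phi, -phi, phi, -phi), 0),
  cbind(c(phi, -phi, phi, -phi), 0, c(1, 1, -1, -1))
)
V <- V / sqrt(rowSums(V^2))   # the 12 icosahedron vertices, on the unit sphere
faces <- convhulln(V)         # 20 triangular faces, as rows of vertex indices

subdivide_face <- function(a, b, c, k = 7) {
  # barycentric grid points i/k, j/k on triangle (a,b,c), pushed out to the sphere;
  # k = 7 gives the 49 small triangles per face used below
  pts <- NULL
  for (i in 0:k) for (j in 0:(k - i)) {
    p <- (i * a + j * b + (k - i - j) * c) / k
    pts <- rbind(pts, p / sqrt(sum(p^2)))
  }
  pts
}

nodes1 <- subdivide_face(V[faces[1, 1], ], V[faces[1, 2], ], V[faces[1, 3], ])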

So I did it. But not having an immediate scientific use for it, and having some time at Easter, I started playing with some WebGL tricks. So here is the mesh (each triangle divided into 49) with some color features, including some spherical harmonics counterpoint.

Naturally, you can move it around, and there are some controls. Step is the amount of color change per step, speed is frame speed, and drift is the speed of evolution of the pattern. It's using a hacked version of the WebGL facility. Here it is. Happy Easter.

Saturday, April 15, 2017

GISS March up by 0.02°C, now 1.12°C!

As Olof noted, GISS has posted on March temperature. It was 1.12°C, up by 0.02°C from February. That rise is close to the 0.03°C shown by TempLS mesh. It makes March also a very warm month indeed. It's the second warmest March in the record - Mar 2016 was near the peak of the El Nino. And it exceeds any month before 2016.

Here is the cumulative average plot for recent warm years. Although 2016 was much warmer at the start, the average for 2017 so far is 0.06°C higher than for all 2016.





I'll show the globe plot below the jump. It shows the huge warmth in Siberia, and most of N America except NW. And also Australia - yes, it has been a very warm autumn here so far (mostly). GISS escaped the China glitch.


Thursday, April 13, 2017

TempLS update - now March was warmer than Feb by 0.03°C

Commenter Olof R noticed that the TempLS mesh estimate for March had suddenly risen, reversing the previously reported drop of about 0.06°C to a rise of 0.03°C. He attributed the rise to a change in China data, which, as noted in the previous post had been very cold, and was now neutral.

I suspected that the original data supplied by China might have been for February, a relatively common occurrence. Unfortunately when I download GHCN data it overwrites the previous, so I can't check directly. But the GHCN MAX and MIN data are updated at source less frequently than TAVG, and they are currently as of 8 April. So I checked the China data there, and yes, March was very similar to February, though not identical. GHCN does a check for exact repetition.

Then I checked the CLIMAT forms at OGIMET. I checked the first location, HAILAR (way up in Manchuria). The current CLIMAT has a TMAX of -3°C for March and -13.5°C for Feb, and yes, the 8 Apr GHCN has -13.5. So it seems that is what happened, and has been corrected.

So March is warmer than February, and so warmer than any month before Oct 2015. It is also warmer than the record annual average of 2016, and so then is the average for Q1 of 2017. The result is fairly consistent with the NCEP/NCAR average, which showed a very slight fall. I was preparing a progress plot for the next GISS report, so I'll show that for TempLS. It shows the cumulative average for each year, and the annual average as a straight line. 2017 has not started with the El Nino rush of 2016, but is ahead of the average and seems more likely to increase than decrease.





Icosahedral Earth

This post is basically an exercise in using the WebGL facility, with colorful results. It's also the start of some new methods, hopefully. I wrote a while ago about improved gridding methods for integrating surface temperatures. The improvement was basically a scheme for estimating missing cells based on neighbors, and an important enabling feature was a grid that had more uniform cells than the conventional lat/lon grid. I used a cubed sphere - a projection of a gridded cube surface onto the sphere. The corners of the cube are a slight irregularity, which can be mitigated by non-linear scaling of the grid spacing. The cubed sphere has become popular lately - GFDL use it for their GCMs. It worked well for me.

In that earlier post, Victor Venema suggested using an icosahedron instead. This has less irregularity at the vertices, since the solid angle is greater, and the distortion of mapping to a sphere less. The geometry is a bit less familiar than the cube, but quite manageable.

A few days ago, I described methods now built into the facility for mapping triangles that occur in convex hull meshing actually onto the spherical surface. This is basically what is needed to make a finer icosahedral mesh. In this post, I'll use that as provided, but won't do the subdivision - that is for another post.

I also wanted to try another capability. The basic requirement of the facility is that you supply a set of nodes, nodal values (for shading), and links, which are sets of pointers to the nodes that declare triangles, line segments etc. From that comes continuous shading, which is usually what is wanted. But WebGL does triangles individually, and you can color them independently. You just regard the nodes of each triangle as being coincident with others, but having independent values. For the WebGL facility, that means that for each triangle you give a separate copy of the nodal coordinates and a separate corresponding value, and the links point to the appropriate copy of the node.
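A sketch of what that data layout looks like, in generic R (the names are mine, not the facility's): each triangle gets its own copies of its three nodes, each copy carries the triangle's value, and the links point at the copies rather than the shared nodes.

flat_triangles <- function(nodes, tri, tri_value) {
  # nodes: n x 3 coordinates; tri: m x 3 node indices; tri_value: one value per triangle
  idx   <- as.vector(t(tri))                # nodes listed triangle by triangle
  pts   <- nodes[idx, ]                     # 3*m rows: a separate copy per triangle corner
  vals  <- rep(tri_value, each = 3)         # the triangle's value, repeated for its 3 copies
  links <- matrix(seq_len(3 * nrow(tri)), ncol = 3, byrow = TRUE)  # point at the copies
  list(points = pts, values = vals, links = links)
}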

So I thought I should try that in practice, and yes, it works. The colors look better if you switch off the map - checkbox top right. So here is the icosahedral globe, with rather random colors for shading:

Friday, April 7, 2017

March global surface temperature down 0.066°C.

Update There was a major revision to GHCN China data, and now March was 0.03°C warmer than February. See update post

TempLS mesh declined in March, from 0.861°C to 0.795°C. This follows the very small drop of 0.01°C in the NCEP/NCAR index, and larger falls in the satellite indices. The March temperature was still warm, however. It was higher than January (just) and higher than any month before October 2015. And the mean for the first quarter at 0.813°C is just above the record high annual mean of 0.809°C, though it could easily drop below (or rise further) with late data. So far all the major countries seem to have reported. With that high Q1 mean, a record high in 2017 is certainly possible.

TempLS grid fell by a little more, 0.11°C. The big feature this month was the huge warmth over Siberia. It was cold in Canada/Alaska (but warm in ConUS) and cold in China. Here is the map:



The breakdown plot is remarkable enough that I'll show that too here (it's always on the regular report). On land almost all the positive contribution came from Siberia and Arctic - without that, it would have been quite a steep fall. SST has been slowly rising since December, which is another suggestion of a record year possibility.





Incidentally, I'm now using the finer and more regular SST mesh I described here. The effect on results is generally small, of order 0.01-0.02°C either way, which is similar to the amount of drift seen as late data comes in. You may notice small differences when comparing old and new. You'll also notice quite a big change in the number of stations reporting, which is due to the greater number of SST cells. I've set a new minimum for display at 5300 stations.



Wednesday, April 5, 2017

Global 60 Stations and coverage uncertainty

In the early days of this blog, I took up a challenge of the time, and calculated a global average temperature using just 60 land stations. The stations could be selected for long records, rural siting etc. It has been a post that people frequently come back to. I am a little embarrassed now, because I used the plain grid version of TempLS as it then was, and so it really didn't do area weighting properly at all. Still, it gave a pretty good result.

Technology and TempLS have advanced; I next tried using a triangular mesh with proper Voronoi cells (I wouldn't bother now). I couldn't display it very well, but the results were arguably better.

Then, about 3 years ago, I was finally able to display the results with WebGL. That was mainly a graphics post. Now I'd like to show some more WebGL graphics, but I think the more interesting part may be tracking the coverage uncertainty, which of course grows as stations are removed. I have described here and earlier some ways of estimating coverage uncertainty, different from the usual ways involving reanalysis. This is another way which I think is quite informative.

I start with a standard meshed result for a particular month (Jan 2014), which had 4758 nodes, about half SST. I get the area weights as used in TempLS mesh. This assigns weight to each node according to the area of the triangles it is part of. Then I start culling, removing the lowest weights first. My culling aims to remove 10% of nodes with each step, getting down to 60 nodes after about 40 steps. But I introduce a random element by setting a weight cut at about the 12.5% level, and then selecting 4/5 of those below it at random. After culling, I re-mesh, so the weights of many nodes change. The rather small randomness in node selection has a big effect on randomising the mesh process.

And so I proceed, calculating the new average temperature at each step from the existing anomalies. I don't do a re-fitting of temperature; this is just an integration of an existing field. I do this 100 times, so I can get an idea of the variability of temperature as culling proceeds.
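Here is a sketch of one culling run (illustrative only, not the actual code); mesh_weights() stands in for the real step that re-meshes the remaining nodes by convex hull and returns their area weights:

cull_run <- function(nodes, anom, nmin = 60) {
  keep <- seq_len(nrow(nodes))
  means <- numeric(0)
  while (length(keep) > nmin) {
    w <- mesh_weights(nodes[keep, ])                   # assumed helper: re-mesh, return area weights
    means <- c(means, sum(w * anom[keep]) / sum(w))    # integrate the existing anomaly field
    low <- which(w <= quantile(w, 0.125))              # weight cut at about the 12.5% level
    drop <- low[sample.int(length(low), round(0.8 * length(low)))]  # cull 4/5 of those, ~10% of nodes
    if (length(drop) == 0) break
    keep <- keep[-drop]
  }
  means
}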

Then, as a variant, I select for culling with a combination of area and a penalty for SST. The idea is to gradually remove all ocean values, and end up with just 60 land stations to represent the Earth.

Monday, April 3, 2017

NCEP/NCAR global surface temperature down 0.01°C in March

The NCEP/NCAR anomaly for March was 0.566°C, almost the same as Feb 0.576°C. And that is very warm. It makes the average for the first quarter 0.543°C, compared with the 2016 annual average of 0.531°C. In most indices, 2016 was the warmest ever, so with a prospect of El Nino activity later in the year, 2017 could well be the fourth record year in a row.

You can bring up the map for the month here. It was warm in Europe, mixed in N America, warm in Siberia but cool further South, and varied at the poles. So GISS may come down a bit, since it has been buoyed by the Arctic warmth.





Friday, March 31, 2017

Moyhu WebGL interactive graphics facility, documented.

I wrote a post earlier this month updating a general facility for using WebGL to make interactive Earth plots, Google-Earth style. I have now created a web page here, which I hope to maintain, documenting it. The page is listed near the bottom of the list at top right. I expect to be using the facility a lot in future posts. It has new features since the last post, but since I don't think anyone else has used that version yet, I'll still call the new version V2. It should be compatible with the earlier one.

Tuesday, March 28, 2017

More ructions in Trump's EPA squad.

As a follow-up to my previous post on the storming out of David Schnare, there is a new article in Politico suggesting that more red guards are unhappy with their appointed one. It seems the "endangerment finding" is less endangered than we thought.
But Pruitt, with the backing of several White House aides, argued in closed-door meetings that the legal hurdles to overturning the finding were massive, and the administration would be setting itself up for a lengthy court battle.

A cadre of conservative climate skeptics are fuming about the decision — expressing their concern to Trump administration officials and arguing Pruitt is setting himself up to run for governor or the Senate. They hope the White House, perhaps senior adviser Stephen Bannon, will intervene and encourage the president to overturn the endangerment finding.

Monday, March 27, 2017

Interesting EPA snippet.

From Politico:
Revitalizing the beleaguered coal industry and loosening restrictions on emissions was a cornerstone of Trump’s pitch to blue collar voters. Yet, two months into his presidency, Trump loyalists are accusing EPA Administrator Scott Pruitt of moving too slowly to push the president’s priorities.

Earlier this month, David Schnare, a Trump appointee who worked on the transition team, abruptly quit. According to two people familiar with the matter, among Schnare’s complaints was that Pruitt had yet to overturn the EPA’s endangerment finding, which empowers the agency to regulate greenhouse gas emissions as a public health threat.

Schnare’s departure was described as stormy, and those who’ve spoken with him say his anger at Pruitt runs deep.

"The backstory to my resignation is extremely complex,” he told E&E News, an energy industry trade publication. “I will be writing about it myself. It is a story not about me, but about a much more interesting set of events involving misuse of federal funds, failure to honor oaths of office, and a lack of loyalty to the president."

Other Trump loyalists at EPA complain they’ve been shut out of meetings with higher-ups and are convinced that Pruitt is pursuing his own agenda instead of the president’s. Some suspect that he is trying to position himself for an eventual Senate campaign. (EPA spokespersons did not respond to requests for comment.)
David Schnare, a former EPA lawyer, has been most notable for his unsuccessful lawsuits (often with Christopher Horner) seeking emails of Michael Mann and others. Here he is celebrating at WUWT his appointment to the Trump transition team.

Update Here is the story at Schnare's home base at E&E.

Update - as William points out below, I had my E&Es mixed up. Here is Schnare at his E&E announcing his appointment. But they have not announced his departure.


Wednesday, March 22, 2017

Global average, integration and webgl.

Another post empowered by the new WebGL system. I've made some additions to it which I'll describe below.

I have written a lot about averaging global temperatures. Sometimes I write about it as a sampling problem, and sometimes from the point of view of integration.

A brief recap - averaging global temperature at a point in time requires estimating temperatures everywhere based on a sample (what has been measured). You have to estimate everywhere, even if data is sparse. If you try to omit that region, you'll either end up with a worse estimate, or you'll have to specify the subset of the world to which your average applies.

The actual averaging is done by numerical integration, which generally divides the world into sub-regions and estimates each of those from local information. The global result always amounts to a weighted average of the station readings for that period (month). It isn't always expressed that way, but I find it useful to formulate it so, both conceptually and practically. The weights should represent area.
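In other words, for a given month the whole calculation collapses to a one-liner, with w the area-based weights and anom the station anomalies:

global_anomaly <- sum(w * anom) / sum(w)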

In TempLS I have used four different methods. In this post I'll display with WebGL, for one month, the weights that each uses. The idea is to see how well each does represent area, and how well they agree with each other. I have added some capabilities to the WebGL system, which I will describe.

I should emphasise that the averaging process is statistical. Errors tend to cancel out, both within the spatial average and when combining averages over time, when calculating trends or just drawing meaningful graphs. So there is no need to focus on local errors as such; the important thing is whether a bias might accumulate. Accurate integration is the best defence against bias.

The methods I have used are:
  • Grid cell averaging (eg 5x5 deg). This is where everyone starts. Each cell is estimated as an average of the datapoints within it, and weighted by cell area (there is a minimal weighting sketch just after this list). The problem is cells that have no data. My TempLS grid method follows HADCRUT in simply leaving these out. The consequence is that the omitted areas are effectively infilled with the average of the points that were measured, which is often inappropriate. I continue to use it because it has often tracked NOAA and HADCRUT very closely. But the problem with empty cells is serious, and is what Cowtan and Way sought to repair.
  • My preferred method now is based on irregular triangulation, and standard finite element integration. Each triangle is estimated by the average of its nodes. There are no empty areas.
  • I have also sought to repair the grid method by estimating the empty cells based on neighboring cells. This can get a bit complicated, but works well.
  • An effective and elegant method is based on spherical harmonics. The nodes are fitted with a set of harmonics by least squares regression. Then, in integrating this approximation, all the harmonics except the first integrate to zero, so the integral is just the coefficient of the constant harmonic.
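As promised above, here is a minimal sketch of the first method, grid cell weighting, with assumed variable names (lat, lon and anom are vectors over the stations reporting in the month):

cell <- paste(floor(lat / 5), floor(lon / 5))    # which 5x5 degree cell each station is in
area <- cos(lat * pi / 180)                      # cell area ~ cos(latitude), taken at the station
w    <- area / ave(area, cell, FUN = length)     # each cell's area shared equally among its stations
global_anomaly <- sum(w * anom) / sum(w)         # cells with no stations simply contribute nothing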


The methods are compared numerically in this post. Here I will just display the weights for comparison in WebGL.