Wednesday, February 29, 2012

Miracles of 2LoT

I've been arguing over at Tallbloke's. It's one of those posts where a sceptic does an elementary analysis and makes elementary errors which contradict "consensus" science. A new scientific discovery is announced. Being Galileos, they don't have to check their work.

The sceptic here is Hans Jelbring. He looks at a simple problem, two concentric spheres without heat sources, and checks their radiation balance to find what the temperature difference should be. Consensus science says, of course, that there should be none, but he found one, and then spent time working out the resulting perpetual motion machine. I'm not sure what the point of that was, but Trenberth was mentioned.

I have sometimes done these analyses myself, being intrigued when what looks like a problem determined by geometry turns out to have a solution constrained by the Second Law of Thermodynamics (2LoT). Given the complexity, that can look like a miracle.

Monday, February 27, 2012

Hansen's 1988 predictions - a JS explorer.

Hansen's famous 1988 paper used runs of an early GISS GCM to forecast temperatures for the next thirty years. These forecasts are now often checked against observations.

The forecasts are subject to scenarios, which are often misunderstood. A GCM can calculate all sorts of things, but some have to be supplied as inputs. CO2 is the most notable; climate science can't predict how much carbon will be burnt. Volcanoes are another. But there are also TSI and other trace gases. More marginal is ENSO. Only recently have GCMs been able to compute it, so it can make sense to treat it as a forcing, subject to scenario.

Scenarios are not predictions. You have to calculate a range of them to have a chance of getting a result that will correspond to what really happened. When you look back, the sole test of which scenario to apply is which corresponds best to the history. It doesn't matter what Hansen or anyone else thought was likelier. You check against the scenario that fits what happened.

However, I'm not talking here about scenario confusion, but a more basic issue - what observation dataset to use. It's prompted by a post today at WUWT in which the predictions were rated against satellite indices for the lower troposphere. That certainly wasn't what Jim Hansen was predicting.

We know what actual index he had in mind, because he graphed it against model results. It was the index compiled from meteorological stations, described in the paper of Hansen and Lebedeff 1987, the year before. This eventually became the GISS Ts index. I believe that is the index that should be looked at first. The more modern Land/Sea indices were not available in 1988.

However, there is an argument that Land/Ocean indices are a better representation of GMST, and RealClimate, in their periodic reviews of model predictions and observations, use the GISS and HadCRUT Land/Sea indices. I think this is a little unfair to Hansen, as they rose more slowly than the GISS Ts. But there is a rationale.

So here is where a JS/HTML 5 graphic can help. I found out how to incorporate bitmaps in HTML 5 canvases, so today's active plot allows you to choose from a wide variety of indices and superimpose them on Hansen's original graphic.

Anomaly offsets

With multiple indices there is always an issue with differently calculated anomalies. Here I used the GISS Ts index unchanged; it does (still!) match Hansen's observed data in the overlap period. I then shifted the other indices so that they would have the same average as GISS Ts from 1980 to 2009 (30 years). But I've allowed users to modify this offset if they have a better idea.
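The offset step can be sketched in a few lines of JavaScript. This is an illustrative version, not the plot's actual code; `baseMean` and `alignToReference` are hypothetical names, and each series is assumed to be an object mapping year to anomaly:

```javascript
// Sketch (not the plot's actual code): align one temperature index to a
// reference series by matching their means over a common base period.
// A series is an object mapping year -> anomaly; absent years are skipped.
function baseMean(series, startYear, endYear) {
  let sum = 0, n = 0;
  for (let y = startYear; y <= endYear; y++) {
    if (series[y] !== undefined) { sum += series[y]; n++; }
  }
  return sum / n;
}

// Shift `other` so its base-period mean equals that of `reference`
// (GISS Ts here), defaulting to the 1980-2009 period used in the post.
function alignToReference(other, reference, startYear = 1980, endYear = 2009) {
  const offset = baseMean(reference, startYear, endYear) -
                 baseMean(other, startYear, endYear);
  const shifted = {};
  for (const y in other) shifted[y] = other[y] + offset;
  return shifted;
}
```

The user-adjustable offset in the gadget simply replaces the computed `offset` with a supplied value.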

So here is the plot. Below is advice on how to use it.

Sunday, February 19, 2012

Combined GMST trend viewer.

In a few posts I have been showing plots of all possible trends that you could calculate from global temperature time series, with some emphasis on statistical significance. The original post just showed trends; the next showed trends that lacked statistical significance in faded colors, and then I tried to show more detail with plots of confidence limits and t-statistics.

The plots are available for various land, ocean and combined datasets. I added SST at some stage, and the various gadget capabilities improved. There are some similar posts with special datasets.

I did some more programming to allow a fuller set of summary statistics when you click on the main plot. So I thought it was a good time to gather the various facilities in a single plot, since they use the same dataset.

Here is a brief summary of the things you can do:
  • You can choose from 11 datasets (right buttons)
  • You can choose time periods - this shows the same data on an enlarged scale for shorter recent periods
  • You can select the variable to display - just trend, trend with significance marked, or the upper or lower confidence limit (CI), or the t-statistic, which is the ratio of the calculated trend to its standard error.
  • With each dataset selection, a graph of the time series appears top right. There are two balls, blue and red, indicating the ends of the selected trend period (showing the trend). You can click on the red and blue bars to change this selection; there are also nudgers at each corner of the plot.
  • The triangle plot has the start of the trend period on the y-axis, and the end on the x-axis. You can click on any point to show details. These appear next to the plot on the right, and the time series plot will update to show the trend.
  • At the bottom right is a url which incorporates the current state. You can copy it to use as a link; it will bring up the post with the selections as you had them when you copied.
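For those curious about the statistics behind the display, here is a minimal JavaScript sketch of the trend calculation: an OLS slope on an evenly spaced series, its standard error, and the t-statistic as their ratio. It ignores any autocorrelation adjustment, and the function name is mine, not from the actual gadget code:

```javascript
// Illustrative sketch: OLS trend statistics for an evenly spaced series.
// The t-statistic is the slope divided by its standard error, as in the post.
function trendStats(y) {
  const n = y.length;
  const xbar = (n - 1) / 2;                          // mean of 0..n-1
  const ybar = y.reduce((a, b) => a + b, 0) / n;
  let sxx = 0, sxy = 0;
  for (let i = 0; i < n; i++) {
    sxx += (i - xbar) * (i - xbar);
    sxy += (i - xbar) * (y[i] - ybar);
  }
  const slope = sxy / sxx;
  const intercept = ybar - slope * xbar;
  let sse = 0;                                       // sum of squared residuals
  for (let i = 0; i < n; i++) {
    const r = y[i] - (intercept + slope * i);
    sse += r * r;
  }
  const se = Math.sqrt(sse / (n - 2) / sxx);         // standard error of slope
  return { slope, se, t: slope / se };
}
```

The confidence limits shown in the plot are then roughly slope ± 2·se for a 95% interval (again, before any autocorrelation correction).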

I'll briefly discuss some of the scientific/technical issues below the plot, but the main discussion is in the posts linked above. Here is the plot:

[Interactive plot controls: Plot Data / Display Years selectors; trend, upper CI, lower CI and t-statistic views; Land and Ocean, Land Only and Sea Surface datasets]

Wednesday, February 15, 2012

GISS Temp for Jan 2012 - and TempLS

TempLS showed a decline in global mean anomaly in January 2012, from 0.35 °C to 0.219 °C. GISS showed a similar fall, from 0.45 °C to 0.36 °C. Time series graphs are shown here.

As usual, I compared the previously posted TempLS distribution to the GISS plot. Unusually, there was a significant discrepancy, which turned out to be a kind of Y2012 issue in my code.
Update - on further checking the error was simple - I had plotted Jan 2011.

 Here is GISS:

It's broadly similar to the one I posted here, but it gets the USA and N Africa quite wrong. So I went back to check. I was aware of the date issue, because my plot was dated Jan 2011, as I noted. I thought that was the only effect, but I was wrong (in fact, the whole plot was Jan 2011). When I re-ran, making sure my script now detected the year as well as the month, I got:

This now matches well. The white blob far NW is where the cold runs off the color scale. The date error did not affect the monthly average number, only the plot.
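The fix itself amounts to selecting records by year as well as month. A hypothetical sketch (the real script works on GHCN files in R; here the data is assumed already parsed into objects):

```javascript
// Sketch of the fix: look up a record by year AND month, rather than by
// month alone (which is how Jan 2011 was picked up instead of Jan 2012).
// `records` is assumed to be an array of {year, month, value} objects.
function monthlyValue(records, year, month) {
  const rec = records.find(r => r.year === year && r.month === month);
  return rec === undefined ? null : rec.value;
}
```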

Previous Months


More data and plots

Sunday, February 12, 2012

January 2012 TempLS down 0.14°C

The TempLS analysis, based on GHCNV3 land temperatures and the ERSST sea temps, showed a monthly average of 0.219°C, down from 0.35 °C in December. This echoes a greater drop in the satellite averages. There are more details at the latest temperature data page.

Below is the graph (lat/lon) of temperature distribution for January (It's Jan 2012, despite the title).
Update - despite my confidence here, it is actually Jan 2011 - a sort of Y2012 error (I had the year hard-coded). The correct plot, with GISS comparison, is here.

This is done with the GISS colors and temperature intervals, and as usual I'll post a comparison when GISS comes out. There was a cold region in mid-Asia which goes outside the standard color range.

And here, from the data page, is the plot of the major indices for the last four months:

Saturday, February 11, 2012

GHCN V3 homogeneity adjustments - revised data.

The previous post was A study of GHCN V3 homogeneity adjustments. Commenter PaulM noted that GHCN had notified a change, with an apparent error in the data version I used. He suggested a recalc, and I think that needs to be done. So here it is. It uses almost the same code as before. There is an issue with a station called COLOSO which seems to be duplicated in the inventory. In the previous version, the unadjusted data file had data for both entries, the adjusted only for one. That required a fix. Now the adjusted file refers to both, so I had to unfix. It's not an issue here, because the station has little data and falls below the 30-yr threshold.

Below are the revised histograms and the map. PaulM noted that some of the larger adjustments were moderated - however, the mean adjustment to the trend increased significantly.

Update: The R code to download and generate a CSV file of trend diffs is here.

Distribution of trend changes

Again there are three subgroups - stations that have histories with actual reports for at least:
  1. 360 months (30 years) - mean trend change 0.031 °C/decade
  2. 540 months - mean trend change 0.0302 °C/decade
  3. 720 months - mean trend change 0.0283 °C/decade
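The quantity being histogrammed, the change in trend caused by adjustment, can be sketched as follows. The actual calculation was done in R; this illustrative JavaScript version uses hypothetical function names and assumes evenly spaced monthly series:

```javascript
// Sketch: for a station, fit OLS trends to the unadjusted and adjusted
// monthly series and take the difference, converted to °C/decade.
function olsSlope(y) {                 // slope per step of an evenly spaced series
  const n = y.length, xbar = (n - 1) / 2;
  const ybar = y.reduce((a, b) => a + b, 0) / n;
  let sxx = 0, sxy = 0;
  for (let i = 0; i < n; i++) {
    sxx += (i - xbar) * (i - xbar);
    sxy += (i - xbar) * (y[i] - ybar);
  }
  return sxy / sxx;
}

// Monthly data in, so the per-month slope times 120 gives °C/decade.
function trendDiffPerDecade(unadjusted, adjusted) {
  return (olsSlope(adjusted) - olsSlope(unadjusted)) * 120;
}
```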

The histograms look similar to those from the previous data version, but the means are about 50% higher.

The Google maps app

Details for this app are as previously described. I didn't revise the discussion of extreme cases - CORONA NM comes down from about 8 °C/century to about 6 °C/century.

Tuesday, February 7, 2012

A study of GHCN V3 homogeneity adjustments.

From time to time, bloggers discover that GHCN produces an adjusted temperature file, and are shocked to find that in the process, temperatures are altered. A noted example occurred in late 2009, when Willis Eschenbach became excited about GHCN V2 adjustments to the temperature at Darwin. He intoned sternly:
Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style ... they are indisputable evidence that the "homogenized" data has been changed to fit someone’s preconceptions about whether the earth is warming.
This created quite a stir, and drew a response from no less than the Economist, among others. It also drew a response from me - in fact, it was the stimulus to start this blog. I showed (following Giorgio Gilestro) that the V2 adjustments could be quite large, but were fairly balanced overall in trend effect. Darwin was an outlier, and I showed one case in particular which went in the opposite direction, to a greater extent.

So now an adjusted version of V3 is out, and there was a similar discovery at WUWT. This time it was Iceland, particularly Reykjavik, and I wrote about that particular case here. There were similar thunderings about rewriting of history etc (and calls for legal sanctions etc). What these protesters are reluctant to acknowledge is that GHCN has always produced two files - one unadjusted, one adjusted. The unadjusted file is not altered, and has at least until recently been the prime reference source. The adjusted file is derived from it using what seems at the time to be the best available algorithm. This changes.

What is also not understood is that homogenization is not done to correct the record. It is done to reduce bias prior to calculation of an index. Many irregularities occur in a large temperature record - instruments change, stations move. Records are patchy. Some of these artefacts will cancel, but there is a possibility of bias. So an algorithm is used to try to identify and correct them. In the process, there will inevitably be false positives. The process is valuable as long as the introduced errors have smaller bias than the errors removed.

In this post, I want to update (for V3) the statistical analysis of the effect on trend. I'll also produce a "cherry pickers guide" so that the stations which have their trends changed markedly up or down can be readily identified. I'll do this using the Google Maps application that I developed earlier for GHCN stations.

Thursday, February 2, 2012

Visualizing 2011 temperature anomalies

Jim Hansen and GISS coauthors have posted an analysis of surface temperatures through 2011. They have many GISS-style plots of months, seasons etc. Tamino has also posted a roundup.

I'm presenting here a visualization in the style I did for trends, and also for the November readings (more details on methods in those links). It shows GHCN V3 unadjusted station (and ERSST) anomalies for each month on a spherical projection, with mesh shading to show the anomalies. There's no grid averaging; the color for the anomaly of each station is shown. You can choose any combination of months, and it will show the average. You can orient the globe, zoom, display station data etc.

I calculated the anomalies using the TempLS offsets for the period 1951-2011. The base period is not exactly comparable to that for GISS etc, as the offsets are computed taking account of trend, and so should be fairly independent of the base time period. I'll describe below the treatment of missing data.
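Once the offsets are in hand, forming an anomaly is just a subtraction. A minimal sketch with hypothetical names (the offsets here are assumed given, whereas TempLS computes them by least squares, taking account of trend):

```javascript
// Sketch: an anomaly is the reading minus the station's offset (normal)
// for that month. Both arrays are indexed by month 0-11; null = missing.
function anomalies(readings, offsets) {
  return readings.map((t, m) => (t === null ? null : t - offsets[m]));
}
```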

So here is the plot. The little map on top right is the navigator - click to mark the point you want to appear in the centre of the spherical projection. You can choose any combination of months - if you choose none, it will calculate the annual average. Whenever you want to change this choice, you must press Recalc (button above). You can zoom, and ask to see stations or mesh; to make these effective, press the Refresh button.

You can click on the main plot, and the anomaly for the nearest station will be printed on the right.

[Interactive plot: navigator map - click to orient the world plot; Show Stations / Show Mesh toggles; month selectors Jan-Dec]

Missing data

One reason why I did not post this sooner is that GHCN info trickles in, and there is still quite a lot missing for December, and for some stations a few gaps in the year. I also had to do something about another problem - many ocean sites report -1.8°C when they are frozen. This is uninformative, and does not represent the air temperature, so I regarded it as missing data.
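Screening out the frozen-ocean values can be sketched like this (hypothetical name; a small tolerance guards against rounding differences in the reported values):

```javascript
// Sketch: treat sea readings of -1.8 °C (the freezing point of sea water)
// as missing, since they reflect ice, not air temperature.
function screenFrozen(values, missing = null) {
  return values.map(v =>
    (v !== null && Math.abs(v - (-1.8)) < 0.001) ? missing : v);
}
```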

I can't modify the mesh easily in JavaScript, so I really have to use the same mesh for each month whether data is missing or not. So the rule I used was that stations with fewer than nine months of informative data were omitted. Otherwise missing values were set to zero anomaly. There are a few of these in December, particularly. If you see an outlier color, check its anomaly by clicking. If it's zero, it's probably a missing data point.
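The rule can be sketched as follows (hypothetical name; a station's year is assumed to be an array of 12 anomalies with null for missing months):

```javascript
// Sketch of the missing-data rule: drop stations with fewer than nine
// informative months; otherwise fill the gaps with zero anomaly so the
// same mesh can be reused for every month.
function prepareStation(months) {
  const informative = months.filter(m => m !== null).length;
  if (informative < 9) return null;              // omit the station entirely
  return months.map(m => (m === null ? 0 : m));  // missing -> zero anomaly
}
```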