Wednesday, December 23, 2015

Merry Christmas - and a new kind of movie

Firstly I wish a Merry Christmas to all readers. Here is a picture of a local solstice scene:




What AGW can do for you! As mentioned, I've been dabbling with NOMADS systems, and downloading data from GFS. I showed here a conventional animation of relative humidity, as it is convected around the world.

Regular readers will know that I have a strong preference for using spherical projections, in conjunction with active viewing (Javascript). The most powerful way of doing this with movies is with WebGL. I rigged up a system here with high resolution SST, but it's rather slow, and not in good repair. The problem is that too much data needs to be downloaded with each frame. I don't think it's practical to preload a whole movie, though I haven't really tried.

So I thought to go back to a pre-WebGL (and HTML 5) system that I had with displaying ordinary graphics files, as here. You can look at the world from a few viewpoints, typically faces of a Platonic solid. So in this movie, it's the faces of a cube - ie six views, 2 polar, 4 equator. Use the radio buttons to switch. It's again of surface relative humidity - this time of 33 days, starting November 11. It's GFS data from NOAA NOMADS, on a 0.5° grid. Below the fold.

Tuesday, December 15, 2015

November GISS down by only 0.01° on record October.

As reader David Sanger noted, the November GISS global average is out, at a 1.05°C anomaly. That would be the hottest in the record, had they not increased October to 1.06°C. The late rise in Oct is not unexpected, since as Olof noted, Brazil and Greenland came in late and relatively warm. TempLS Oct went up too.

Most of the indices now agree on a very slight reduction from October to November. TempLS mesh is down 0.035°C; the NCEP/NCAR index was down about 0.05°. TempLS grid was down 0.01°C, and even the troposphere indices from satellite showed a similar small drop. TempLS mesh and GISS are generally more sensitive to polar changes, which were not large this time.

In other news, December is looking very warm indeed, in the NCEP/NCAR index. I had earlier written about a huge peak in early October, which made October a record month by a great margin. The peak of recent days is much larger again, almost reaching 1°C (1994-2013 base) and staying there for several days, though the latest reading was down to a mere 0.7°C. The average for December so far stands at 0.794°C, 0.23°C higher than October's record.

Even the sea ice is responding. Both Arctic and Antarctic are well down.

Saturday, December 12, 2015

A chaotic process, and climate analogy.

In my previous post, I described an article at WUWT called A simple demonstration of chaos and unreliability of computer models. It claimed issues about solving a simple differential equation which cast general doubt on the reliability of models, but in fact it was using iteration steps far too large to say anything sensible about the DE.

But the recurrence does, for large parameter, generate a chaotic sequence which has useful analogies to weather, climate and their emulation. Here is a plot of part of that sequence. The green, red and blue points are initially separated by a distance of 0.0001 K. The recurrence starts at T0=300K, and follows the sequence
Tn+1 = Tn + b*(1 - (Tn/Te)^4),         Te = 243 K, b = 170
I follow three sequences: red starting at T0=300K, with green and blue initially separated from red by 0.0001 K.

They are initially indistinguishable, but start to separate after about 25 iterations, and although they then sometimes come together, after about 50 timesteps there is really no association between them. You can't usefully predict where any starting point will lead at that time, because the differences in those initial values are unlikely to be measurable. In fact, as I indicated in the last post, the growth in separation is initially exponential, so the slightest difference in floating point value, say 1e-12, will lead to separation delayed by only another thirty or so iterations. I'll give much more detail about this process below.
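For anyone wanting to reproduce this behaviour, here is a minimal R sketch of the recurrence (my own code, with illustrative names), running three trajectories with a 0.0001 K initial offset:

# Recurrence T[n+1] = T[n] + b*(1 - (T[n]/Te)^4), with Te = 243 K, b = 170
Te <- 243; b <- 170; nstep <- 100
run <- function(T0) {
  T <- numeric(nstep); T[1] <- T0
  for (n in 1:(nstep - 1)) T[n + 1] <- T[n] + b * (1 - (T[n] / Te)^4)
  T
}
red   <- run(300)
green <- run(300 + 1e-4)
blue  <- run(300 - 1e-4)
which(abs(green - red) > 1)[1]   # first step at which green has drifted more than 1 K from red
matplot(cbind(red, green, blue), type = "l", lty = 1,
        col = c("red", "green", "blue"), xlab = "step", ylab = "T (K)")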

Weather forecasting is similar. An initial state can be measured with finite accuracy, but small discrepancies grow, so the calculated forecast is useful for only about ten days at most. There is every reason to believe that the Earth itself has a corresponding magnification of small changes. So a perfect emulator could still not predict.

However, some things can be said. The plot remains within finite bounds, the same for each color. You can see that there are patterns, with temperatures around T0 recurring frequently. I will show histograms of frequency of temperatures, which are strongly patterned, and the same for each trajectory. These are the "climate", which is what can be determined in the presence of chaos. And they apply independently of starting point. Finding them is a boundary value problem, not an initial value problem.

I will also show that the climate is quite dependent on the parameter b, and that its dependence can be reliably determined from solution sequences, analogous to GCM runs.

Wednesday, December 9, 2015

Differential equations gone wrong

There was a post at WUWT called A simple demonstration of chaos and unreliability of computer models. It purported to solve a simple problem of a radiating black body coming to equilibrium with an internal heat source. But it was formulated as a non-linear recurrence relation. Some chaotic behaviour was demonstrated, but mis-attributed to methods of evaluating powers, and floating point issues.

I commented, first taking issue with the foolishness of this line of posting, which takes a simple and easily solved problem and uses it to claim that there is something wrong with computer models in general. By which they mean, of course, climate models, but the alleged problem is quite generic. If the proposition were accepted, large areas of much-used engineering maths would have to be abandoned. Which is nonsense. David Evans claiming some issue about partial derivatives was another example. Also Hans Jelbring.

I went on to talk about the fact that the recurrence relation did not at all solve the underlying differential equation (DE). I made reference to stiff differential equations and suggested ways in which this could be improved. In a way, this missed the mark, because the author had never formulated it as a DE. He just wrote down the recurrence for what was a very large time step, and found fault with it. But the recurrence does not in itself describe any real physics. It only does so insofar as it provides approximate solutions to the differential equation which does represent the process. And in his case, it certainly didn't. Because of the excessive timestep, the solutions were oscillatory, instead of the proper exponential approach to equilibrium. In one case, there was still convergence, in the other not. The latter case did show chaotic behaviour, where he got lost with red herrings about floating point and how powers are calculated. This was reflected in the discussion. In fact, the relevant maths is the magnification of small differences, which is the key point of chaos. The source of the differences is immaterial. But the magnification is already present in the linearised equation.
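To make the timestep point concrete, here is a hedged R sketch (my own construction, not the WUWT author's code; the rate constant a is purely illustrative). Forward Euler with step h on the DE dT/dt = a*(1 - (T/Te)^4) gives the recurrence T -> T + a*h*(1 - (T/Te)^4), i.e. b = a*h; a small step gives the smooth approach to the equilibrium Te, while a large step overshoots and oscillates:

# Forward Euler for dT/dt = a*(1 - (T/Te)^4); one step is the recurrence with b = a*h.
Te <- 243; a <- 17; T0 <- 300   # 'a' chosen for illustration only
euler <- function(h, nstep) {
  T <- numeric(nstep); T[1] <- T0
  for (i in 1:(nstep - 1)) T[i + 1] <- T[i] + a * h * (1 - (T[i] / Te)^4)
  T
}
Tsmall <- euler(h = 0.1, nstep = 500)  # b = 1.7: monotone approach to Te = 243
Tbig   <- euler(h = 10,  nstep = 50)   # b = 170: overshoots, oscillates, goes chaotic
plot((1:500) * 0.1, Tsmall, type = "l", xlab = "time", ylab = "T (K)")
# plot((1:50) * 10, Tbig, type = "b")  # same DE, absurd step: no convergence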

So in this post, I'll talk more about the differential equations aspect. Posters at WUWT frequently have no concept of what is involved in numerical DE. But there was also a real chaos problem. It's not connected with any physics here, but it is an example of the same kind as the standard quadratic recurrence, often related to predator-prey, and described at WUWT here. I think I can use this as a useful example of the chaotic aspect of climate modelling - how dependence on initial conditions fails, but this doesn't affect the ability to model climate. That will be Part 2 (probably more interesting).

Tuesday, December 8, 2015

TempLS November, down 0.04°C from October, but very warm.

There are now more than 4000 stations reporting for November. TempLS mesh estimated the global temperature anomaly at 0.88°C, down from 0.922°C in October. It is still a lot higher than any month in the record apart from October (the previous high was 0.76°C in Jan 2007). TempLS grid was very similar. The report is here.

The result is in line with the Moyhu NCEP/NCAR index, which dropped by about 0.05°C, but again exceeded any other month by about 0.1°C.

There was warmth in Europe, N America (except western US) and Brazil. Cool around Mongolia and N Atlantic. More detail in the WebGL plot. This month, most land areas contributed equally to the anomaly, along with a small rise in SST.

Since this month NCEP and TempLS mesh and grid seem to be in agreement, it seems likely that other indices will follow. GISS was 1.04°C in October, so whether it will exceed 1 is a near thing. I'd say likelier than not.

In other news, the weekly OiSST NINO measures seem to be just below their peak levels. The satellite indices were down slightly. However, NCEP in December has been high, and the forecast is looking very warm.


Sunday, December 6, 2015

Big UAH adjustment.

I noticed in Roy Spencer's latest post the following observation:
Of course, everyone has their opinions regarding how good the thermometer temperature trends are, with periodic adjustments that almost always make the present warmer or the past colder.
It's true that adjustments at his UAH are less frequent. But when they happen, they are large. I decided to plot the adjustments made to UAH this year, compared to the adjustments in GISS (thermometer land/ocean) made over four years. The GISS version of Dec 2011 was the earliest I could find on the wayback machine. UAH brought out v6 in beta during 2015, replacing v5.6, which is however still maintained.

Update. I have found on wayback more GISS data going back to 2005 (the directory name had changed). I won't add it to the original graph; it is too close to the other GISS to show. I've added below the fold a graph of differences between each dataset, new minus old, to show adjustments on a better scale. The accumulation of 10 years of "periodic adjustments" to GISS is still dwarfed by the adjustment made to UAH in 2015.

I've set GISS to the UAH anomaly base, 1981-2010, and smoothed the monthly data with a running 12-month mean. I've used reddish for UAH, and blue for GISS.
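For anyone wanting to reproduce the comparison, here is a hedged R sketch of the rebasing and smoothing. The data frames giss and uah and their columns are assumptions for illustration, not actual file formats, and the series are assumed to be aligned monthly:

# giss, uah: assumed data frames with columns year, month, anom (monthly anomalies)
rebase <- function(d, y0 = 1981, y1 = 2010) {
  base <- mean(d$anom[d$year >= y0 & d$year <= y1])
  d$anom - base                      # anomalies on the 1981-2010 base
}
run12 <- function(x) stats::filter(x, rep(1/12, 12), sides = 2)   # 12-month running mean
plot(run12(rebase(uah)), type = "l", col = "red", ylab = "Anomaly (°C)")
lines(run12(rebase(giss)), col = "blue")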
Update: I have appended a plot including GISS 2015 and RSS, with better scaling, below.


As you see, GISS adjustments are much smaller. I should mention that if you use the GISS base of 1951-1980 the adjustments look larger. The reason is that GISS is a much longer record, and adjustments are cumulative, and the earlier base period brings in all the adjustments since 1951.

Eli has a forceful critique of UAH here. Measurement by satellite interpretation of a very indirect signal in a place that is hard to locate exactly is always going to be chancy. As Dr Mears, the man behind the RSS satellite measure, said, in discussing measurement errors:
A similar, but stronger case can be made using surface temperature datasets, which I consider to be more reliable than satellite datasets (they certainly agree with each other better than the various satellite datasets do!).
His comment on agreement was made before UAH v6, which improved the agreement, but not confidence in their stability. I suspect that UAH (and RSS) should adjust more often, but that it is not done because of the inherent uncertainty.
Difference plot below


Thursday, December 3, 2015

NCEP/NCAR November, cooler than Oct, but warmer than any earlier month.

The Moyhu NCEP/NCAR index for November was 0.513°C, down from October's 0.567°C, but easily ahead (by about 0.1°C) of any other month in the record.

The warmth first shown in the October index corresponded to the rise later shown in GISS, and I would expect similar behaviour here. On the GISS anomaly base, the November NCEP index was 1.04°C.

Saturday, November 28, 2015

Why is the cumulative CO2 Airborne Fraction nearly constant?

The Airborne Fraction of CO2 is the ratio of the observed increase of CO2 in the atmosphere to the amount emitted. I have been writing (here and here) about how it seems to be extraordinarily stable. In saying this I define and plot it in a different way to the usual, in which it appears more variable, leading to speculation about trend. I'll say more about this different way below. But I think I have worked out the explanation for the stability, and it isn't obvious.

People tend to think first of Henry's Law, which suggests a fixed partition of a solute (including gas) between two phases. This is a material property, and refers to equilibrium, which does not apply to CO2 in air/sea. It applies even less to the land sink, which is quite important.

In this note, I will show that the constancy, perversely, depends on the dynamics, and is a result of the near exponential increase in CO2 emissions. This effect is mostly independent of the actual mechanism for the sinks. It is really a consequence of linearity with exponential increase.
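To anticipate, here is a one-paragraph version of the argument, in my notation. Suppose emissions have grown roughly exponentially, E(t) = E_0 e^{t/\tau}, and the sinks respond linearly, so that of a unit pulse emitted s years ago a fraction R(s) is still airborne. Then the airborne amount is the convolution
\[ C(t) = \int_0^\infty E(t-s)\,R(s)\,ds = E_0 e^{t/\tau}\int_0^\infty e^{-s/\tau}R(s)\,ds = k\,E(t), \]
while cumulative emissions to time t are \( \int_{-\infty}^t E(u)\,du = \tau E(t) \). So the cumulative airborne fraction is C(t)/(\tau E(t)) = k/\tau, a constant, whatever the (linear) sink response R is.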

Since this post is something of a math proof, here is a TOC:

Wednesday, November 25, 2015

GWPF Temperature Adjustments inquiry - no news.

Two months ago, I wrote about the inquiry announced by the Global Warming Policy Foundation. You know, the one fanfared in the Telegraph. "Top Scientists Start To Examine Fiddled Global Warming Figures"

The news then was that after receiving submissions on June 30, they decided that they wouldn't write a report re the terms of reference, but maybe some papers. They had however said that they would publish the submissions. So I thought I should look in every two months or so to see what has happened.

But this time, no news. Just the report of September 29, confirming intended inaction. I'll check again next year.



Monday, November 23, 2015

Using NOMADS data - movies

I've cracked the system for efficiently using NOMADS data. I'm using an R package, rNOMADS. It's a system where you can query and selectively download from a large set of gridded data. It gives me access to many new resources for high frequency data, including reanalysis.

Anyway, as a first experiment, I downloaded GFS 0.5° surface relative humidity data ("gfs_0p50","rh2m"). This data is prepared for the forecast system, and held for about 12 days. So here we have a movie of the data from 11th to 23rd November, at 6 hour intervals. This is all experimental; the movie is ogg, so should show on Firefox and Chrome, but probably not on IE or Safari. Update - I've made an alternative mp4 version, which works in Safari at least. It has the usual controls - you may need to pass the mouse pointer below the picture to make them show. Blue is high humidity, red dry. A color key will appear at some stage.




Thursday, November 19, 2015

NOAA October 0.98°C!

The NOAA report is out, and shows the global anomaly rose from 0.90°C in September to 0.98°C in October. The report says that it was the hottest measured October, but in fact it was the highest anomaly of any month in the record by quite a long way, ahead of 0.9°C in just the previous month.

As expected, the rise was less than for GISS. The reason is coverage of Antarctica. Antarctica had been very cold, and switched to warm in October. GISS, which weights by total area, is very sensitive to this, and so lagged in Sept, followed by a big jump. My TempLS grid does the same. But NOAA only counts the grid cells with information, which are few in Antarctica. Consequently, it rose to a record anomaly in September, with a relatively smaller rise in October. Still, that means it upped the record by 0.08°C.

TempLS grid behaved in much the same way as NOAA, as it usually does. It rose by 0.06°C. The regional pattern of warmth described in the NOAA report is much as described in the TempLS report.

Update. I'll comment further on some questions in the TempLS October post. First, as Olof noted, Brazil and Greenland, which had not then reported, had a considerable warming effect. So the rise in TempLS mesh, at 0.24°C, ended up exactly the same as GISS. And TempLS grid at 0.06 was very close to the 0.08°C rise for NOAA. This is the usual correspondence, relating to the respective methods.

There was also the question of SST. TempLS, in its attribution analysis, showed a small contribution from SST. But HADSST3 had actually declined. This made me more cautious about TempLS-based prediction. But the NOAA report showed NOAA ocean at 0.85°C, a rise of 0.04, very similar to the TempLS attribution. And that figure was also a record for any month, improving on the previous month's record.

Update. I've shown below the latest recent plot, from here. It shows the global indices set to a common anomaly period of 1981-2010. You can see how TempLS and GISS are currently moving in tandem, as are TempLS grid and NOAA.

Tuesday, November 17, 2015

GISS October 1.04°C, record month anomaly.

GISS is out, a bit late. But it is at or above expectations. At 1.04°C, it is up by 0.24°C from September's 0.8°C. That is well clear of the previous highest, 0.97°C in Jan 2007, which was itself something of an outlier.

Update. Needless to say, 2015 is pulling away in the progress to hottest year. Sou has the story here, with her updated chart. Looks like 2015 will be at least 0.1°C hotter than any previous year. It will be hard to gin up uncertainty about that.

The rise is almost the same as TempLS mesh, 0.235°C. Here is a plot of the last 20 years, monthly, with annual average overlaid:


Graphs are below the jump.

Monday, November 16, 2015

Airborne fraction CO2 and the Bern model

We've been discussing IPCC projections and RCPs in relation to the airborne fraction (AF) of CO2. The AF is the fraction of emitted CO2 that remains in the air. In an earlier post I showed that if you plotted cumulative emissions against total CO2, it was very linear, implying a close to constant AF. You get different constants depending on whether you look at just FF emissions, or total, including land use.

At first constant AF might seem to be a consequence of Henry's Law. But that gives a fixed phase partitioning at equilibrium, which we don't have (not to mention acid/base chemistry). It might seem surprising that the time varying sink uptake could give that result.

So I tried with the dynamics of the Bern model. This is what the IPCC would probably use if they were to express an opinion on the future of AF (which they don't). The Bern model yields an impulse response function for a pulse of CO2 injected into the atmosphere. In this version, it makes that out of the sum of decaying exponentials of periods ∞, 171, 18 and 2.57 years. If you filter total emissions with this function (reversed), that gives the modelled growth of the amount of CO2 in the air at any time. A caveat: fitting observed AF was no doubt one of the considerations in designing the Bern model.
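Here is a hedged R sketch of that filtering. The response coefficients are the commonly quoted Bern (TAR) values, which may not be exactly the version used here, so treat them as illustrative; emis is an assumed vector of annual total emissions:

# Bern-style impulse response: fraction of a pulse still airborne after t years.
# Coefficients are the commonly quoted TAR values - illustrative only.
bern <- function(t) 0.152 + 0.253*exp(-t/171) + 0.279*exp(-t/18) + 0.319*exp(-t/2.57)

airborne <- function(emis) {   # convolve annual emissions with the reversed response
  n <- length(emis)
  sapply(1:n, function(i) sum(emis[1:i] * bern((i - 1):0)))
}
# plot(cumsum(emis), airborne(emis))   # nearly a straight line; slope is the cumulative AF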

So I did that. I'll show the results below. Plotting accumulated CO2 against cumulative emissions, the result is still remarkably linear. And the slope, at 0.436, is remarkably close to what I found with observed CO2 (0.439). OK the caveat applies, but the constancy is the real result.

Tuesday, November 10, 2015

Google Map for GHCN V4

I have a series of Google Maps tools, which you can read about on the Gallery page. The latest (till now) is here, on adjustments. In this post, I have made one in similar style for GHCN V4 Beta.

The tool shows a Google map with the usual facilities, but with markers for the stations in GHCN V4. You can click on a marker to bring up some details. The main utility is that you can choose different marker colors for different subsets. An important color is "Invis". It earlier stood for invisible, though it is now implemented by removing the marker from the map (you can get it back).

At the right you can see an orange-background table of the color options, and a cyan table of selection criteria. The selections come with a more/less sign that you can toggle, and a textbox where you can write a criterion number. Tick the left checkbox to make it live. The color table has radio buttons to make the color happen, and on the other side a count of how many of that color there are.

Nothing happens until you then click on a color radio button. When you do, the markers are sorted according to the live (checked) selection options, and those that qualify change to that color. This is "and" logic; all criteria have to be satisfied. You'll find it useful to sometimes switch to negative logic - make Invisible the options you don't want. If no checkboxes are ticked, everything will change.

I've included Lat and Lon so that you can look at a restricted area. I was thinking of performance there - the map can get sluggish if too many markers are visible. In fact, I haven't had performance problems, but that may be because my computer has been upgraded.

At the bottom a selection box allows you to make color itself an option. You can subset from the classes you have discriminated.

Some examples of things you might want to do:
  • Set endyr>2014 to pink the stations currently reporting. Or set endyr<2015 and then Invis to remove those that are not
  • Begin by making everything Invis (just click it).
  • Set duration >100 to pink (or cyan etc) long duration stations.
  • Set startyr>1850 and Invis to leave only the very early stations


The pop-up tags give dates, name and the GHCN Inventory number. The first two letters of that are an abbreviation for the country (details here).

Monday, November 9, 2015

The Ice Age scare

We are often told by sceptics that "alarmist scientists" have swung from predicting an Ice age in the 1970's to later warnings of AGW. It is, of course, not true. Stoat has tracked the myth over the years - paper here.

Recently an old 1978 TV episode has been doing the rounds. But this is of course not a scientific paper, nor even a regular documentary, even if narrated by Leonard Nimoy. It's from a sensationalist "believe it or not" weekly series.

I saw an amusing recurrence a few weeks ago. Lamb 1974 was cited in support. But like many such citations, it turns out to be not a prediction of imminent ice, but rather talking about the progress of the interglacial in future millennia. And it ends up with this summary:
“The question of whether a lasting increase of glaciation and permanent shift of the climatic belts results from any given one of these episodes must depend critically on the radiation available during the recovery phase of the 200-year and other, short-term fluctuations. An influence which may be expected to tip the balance rather more towards warming – and possibly inconveniently rapid warming – in the next few centuries is the increasing output of carbon dioxide and artificially generated heat by Man (MITCHELL 1972).”
That was the general scientific view. I am writing about this now because I see, via SkS, that the AAAS is noting the 50th anniversary of a rather remarkable report to the then President, Lyndon Johnson, of a panel of his Scientific Advisory Committee. You can't get a more authoritative statement of the consensus scientific view of the time than that. LBJ signed it.

A sub-report headed "Atmospheric Carbon Dioxide" was written by big name scientists of the time - Roger Revelle, Wallace Broecker, Charles Keeling, Harmon Craig, and J. Smagorinsky. It lists in detail the "other" consequences, as headings -
  • Melting Antarctic Ice Caps
  • Rise of sea level
  • Warming of Sea Water
  • Increased Acidity of fresh waters
  • Increase in photosynthesis
It does not make its own temperature forecasts, but quotes Moller (1963) saying that a 25% increase in CO2 should increase surface temperature by between 0.6°C and 4°C, depending on water vapor response. They did go on to say that, because he couldn't model this in enough detail, it may be an over-estimate. But clearly, they aren't predicting an ice age. And that is in 1965. To the President.

And so on since - the dominant scientific thinking has focussed on the greenhouse effect. I experienced this myself, in 1976. I had been transferred by CSIRO from Canberra to Perth. CSIRO then, although federal, tended to work closely with the states where it was located, especially in remote WA. The WA government was pondering a scientific issue.

Most of WA is dry, but the Southwest, then, got reliable winter rainfall from the "Roaring Forties" belt of westerlies, which moved north at that time of year, then retreated, leaving a dry summer. This is ideal for wheat farming, and the "Wheat Belt" was very important. It had substantial infrastructure (eg rail) to move the harvest.

Industry economics had changed with automation, and for those with capital, it was viable to plant crops in marginal (mostly North fringe) areas, even if every second crop failed. So there was pressure to spend money on expanded infrastructure, to make that possible. WA had heard about the coming non-Ice Age, and asked CSIRO.

I was a recently arrived junior scientist, and like most of the ags and mining folk there, knew little about such matters. But I did have some contacts at Atmospheric Physics in Victoria, so I was asked to enquire. I did.

The story, unanimously, was that global warming was on the way, and would have particular effects on WA. The Hadley cell which drove those winds would expand, pushing them further south. Dryer times. Bad idea.

That was our recommendation, and the expansion didn't happen. What did happen was three very hot, dry summers, so our recommendation was looking good. And the rainfall never really recovered.




Saturday, November 7, 2015

Temperature records broken in TempLS October

I've been following some very large spikes in the NCEP/NCAR index during October, with an eventual rise of about 0.2°C. Then followed big rises in the satellite indices, with UAH up about 0.18°C and RSS up 0.07. Here is the first surface index. TempLS mesh (report here) rose by 0.21°C, and TempLS grid by 0.03°C. There is rather a contrast here, which I'll say more about. However, TempLS grid had drifted well ahead of mesh, so this represents a degree of catch up. The result is that both are the hottest month (of all months) in their respective records, by quite a long way (plots below).

The reason for disparity seems to involve SST, and maybe sea ice. TempLS grid uses lat/lon grid cells, and has several without data. In this respect it resembles HADCRUT and NOAA, and often follows them closely. TempLS mesh has a full triangular mesh, so interpolates everywhere, more like GISS. In October, you can see a breakdown of the contributions to TempLS Mesh in the report (scroll down). A main feature is the reversal of Antarctica from cold to warm. The grid methods underweight this.

A paradox is that most indications are that SST did not rise. HADSST shows a small decrease. TempLS mesh shows an increased contribution from SST.

You can see the regional patterns in the report, and in more detail here. Antarctica and Australia were big hotspots. Brazil is still missing. The heat in Australia is noted here.

So what does this all mean? I think there will be substantial rises in the main indices, setting more records for hottest anomaly. The rises probably won't be as high as the reanalysis, since SST will be a larger damping component. Below the fold, I'll show plots of the last 20 years of TempLS monthly, and a WebGL global plot of the differences going from September to October. Again Antarctica and Australia are the big factors.

Here is TempLS mesh



October stands out, though in TempLS there was also a big spike in 1998 - more so than in other indices. Here is TempLS grid:



Obviously the rise from September is less, but it followed a larger earlier build up. Now here is the WebGL gadget. It shows, in color shading, the changes going from September to October. Again Antarctica and Australia stand out, with seemingly no change in SST. SST changes are small and fairly even, so they tend not to show on the scale of the more volatile land. Europe cooled, central Asia warmed. As usual, the global is a trackball - right mouse button to zoom.






Thursday, November 5, 2015

Weekly SST indices

I've been looking at SST data that I could track on a sub-monthly timescale, in the way I use NCEP reanalysis. I initially tried the daily AVHRR data which I download and plot here. That worked, but oiSST V2 draws on a wider range of sources, and is widely used for monthly indices. NOAA produces weekly gridded data, and gives tables of weekly data for the various NINO regions.

But they don't seem to give weekly averages for the globe, nor for latitude bands, and I think that would be useful. So I have been experimenting with downloading and integrating the grids, as with NCEP, with a view to adding to the latest data page.

I tested the NINO region integrals against the NOAA figures here, and they match to the one decimal place that NOAA provides. So I think the anomaly formation, using the NOAA daily climatology here, and the spatial integration are OK.
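The integration itself is just a cos(latitude)-weighted mean of the anomaly cells within each box. Here is a hedged R sketch, assuming anom is a matrix on a regular lon/lat grid, with lons (0-360) and lats giving the cell-centre coordinates:

# Area-weighted mean of gridded anomalies over a lat/lon box (e.g. a NINO region).
boxmean <- function(anom, lons, lats, lon1, lon2, lat1, lat2) {
  ii <- which(lons >= lon1 & lons <= lon2)
  jj <- which(lats >= lat1 & lats <= lat2)
  w  <- cos(lats[jj] * pi / 180)                  # area weight per latitude row
  W  <- matrix(w, nrow = length(ii), ncol = length(jj), byrow = TRUE)
  ok <- !is.na(anom[ii, jj])
  sum(anom[ii, jj] * W, na.rm = TRUE) / sum(W[ok])
}
# NINO3.4 is 5S-5N, 170W-120W, i.e. 190-240E:
# boxmean(anom, lons, lats, 190, 240, -5, 5)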

In this post, I'll show an active plot for the weeks to date of 2015 for the globe, the main NINO regions (details below) and four latitude bands (SH 60°-tropic, SH tropics, and NH likewise). Later I hope to give longer duration plots, and WebGL global maps.

The NINO regions are (NOAA map here):
  • NINO4 (5S-5N, 160E-150W)
  • NINO3.4 (5S-5N, 170W-120W)
  • NINO3 (5S-5N, 150W-90W)
  • NINO1+2 (10S-0, 90W-80W)


The plot is a version of the active plot in the data page. I doubt that you'll need the x-axis drag facility, with only 43 weeks, and the trend won't be meaningful. The plot starts with everything showing, but you can toggle off data that you don't want. You may need to stretch the y-axis by dragging (vertically) to the left of it. The anomaly base is 1981-2010. Here it is:










Tuesday, November 3, 2015

NCEP/NCAR index up 0.2°C in October

I posted earlier about a big spike in the Moyhu NCEP/NCAR index in early October. That index is one that I derive by integrating NCEP/NCAR reanalysis data, as explained here. The index came back from the peak, but only back to levels that would have been seen as very high in earlier months, and stayed high right to end month. So the average finished at 0.567°C, as compared to September 0.368°C. These numbers are relative to base years 1994-2013.

That makes October by far the highest monthly anomaly in the record; in fact, it beats the previous record (Jan 2007) by 0.15°C. That can be seen in the following graph of all monthly anomalies since 1994:

Relative to the 1951-80 base of GISS, October would be 1.18°C, and on the NOAA 20th Cen base, it would be 1.14°C. I wouldn't expect to see those indices rise so high, because they have been somewhat lagging the NCEP/NCAR index recently. In September, GISS was only 0.81°C. Still, there is clearly a possibility of GISS reaching 1°C, and a very strong probability of being the highest anomaly ever, in all indices.

In a related news item, Australia's October was the hottest month ever. Also very dry, where I am. We had a very unusual heat wave at the start of the month, and it continued mostly warm and sunny. It looks like a dangerous fire season coming.




Sunday, November 1, 2015

Coverage of GHCN V4 compared

In an earlier post, I took a first look at GHCN V4 beta. Details, sources etc are there. I've been looking with a practical eye, because at some stage I will have to adapt TempLS to use it. As remarked there, by me and others, GHCN V4 has a lot of extra stations, but not proportionately better coverage. Its merit may be with homogenisation, rather than a better global average.

In this post I'll look in more detail at that issue of coverage. In the back of my mind, I am thinking about how to use a reduced set. This is not just to save computing time; big disparities in station density actually create accuracy problems.

I will compare using the cubed sphere, that I described recently. It gives a grid of almost equal area cells. I since noticed that it has recently been adopted by GFDL, and is described here.

I'll show a WebGL plot (16x16 faces) with cells colored by the number of datapoints within, for the data of August 2015. You can switch between the data I currently use and the GHCN V4 data with full ERSST. It shows cells with zero (and sparse) data, and also cells with a great deal. I'll also show histograms for comparison. I'll then briefly discuss strategies for rationalization.
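For those curious about the cell assignment, here is a hedged R sketch of one common lat/lon-to-cubed-sphere mapping (my own convention, not necessarily exactly what TempLS uses):

# Map lat/lon (degrees) to a cubed-sphere cell: a face (1-6) plus an (i,j) index
# on an N x N grid per face, using the gnomonic (central) projection.
cube_cell <- function(lat, lon, N = 16) {
  r <- pi / 180
  v <- c(cos(lat*r) * cos(lon*r), cos(lat*r) * sin(lon*r), sin(lat*r))
  ax <- which.max(abs(v))                       # dominant axis picks the face
  uv <- v[-ax] / v[ax]                          # the two face coordinates, in (-1, 1)
  face <- 2 * ax - (v[ax] > 0)                  # 1..6
  ij <- pmin(N, floor((uv + 1) / 2 * N) + 1)    # bin into N x N cells
  c(face = face, i = ij[1], j = ij[2])
}
# cube_cell(-37.8, 145.0)   # e.g. Melbourne; counting points per cell is then a table() call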

Here is the WebGL plot. As usual, the Earth is a trackball that you can rotate. The "Switch" Button switches between V3 and V4 data. Cells are colored by number of data points (dots) within. The key shows the number of data per cell. I'll discuss below the plot.



The V3 picture is influenced by my thinning of the ERSST data from 2°x2° to 4°x4°, basically to match the land density. So ocean cells have typically 1 or 2 datapoints (but not 0). I think for SST this is quite satisfactory, since spatial variability is modest. With V4 I haven't thinned, so ocean cells have a lot of data. It's better for comparison to focus on the land.

Because it is SH winter, there are large areas of sea ice. ERSST assigns these a value of -1.8°C (freezing point of sea water), but I remove them. This creates a lot of empty cells. Six months earlier, the Arctic would have appeared thus. The main land area to compare is Africa. V4 does have fewer empty cells, but still some.

Here are histogram plots of numbers in cells. The right blocks embrace a range, of which the one shown is the minimum. With V4, there is a big block of cells with about 6-9 stations. This happens because of the regular SST grid, which tends to give 9 points per cell, but with frequent variation. With V3, with thinned SST, the ocean majority are in the 1-3 range.

[Histogram panels: Version 4 GHCN with ERSST | V3 GHCN with reduced ERSST]
I think a reasonable number of data per cell to aim for is four. That gives about half the standard error of mean, and about 6000 data in total. I would probably create a reduced set to be used for all months, so to cover back to 1900, there would be more than four needed in total. But there is ample scope for being choosy in many places.


Tuesday, October 27, 2015

Hansen's 1988 predictions revisited.

Hansen's famous 1988 paper used runs of an early GISS GCM to forecast temperatures for the next thirty years. These forecasts are now often checked against observations. I wrote about them here. That post had an active plotter which allowed you to superimpose various observation data on Hansen's original model results.

We now have nearly four more years of results, so I thought it would be worth catching up. I've updated to Sept 2015, or latest available. Hansen's original plot matched to GISS Ts (met stations only), and used a baseline of 1951-80. I have used that base where possible, but for the satellite measures UAH and RSS I have matched to GISS Ts (Hansen's original index) in the 1981-2010 mean. That is different to the earlier post, where I matched all the data to GISS Ts. But there is also a text window where you can enter your own offset if you have some other idea.

A reminder that Hansen did his calculations subject to three scenarios, A, B, C. GCM models do not predict the future of GHG levels, etc - that must be supplied as input. People like to argue about what these scenarios meant, and which is to be preferred. The only test that matters is what actually occurred. And the test of that is the actual GHG concentrations that he used, relative to what we now measure. The actual numbers are in files here. Scenario A, highest emissions, has 410 ppm in 2015. Scen B has 406, and Scen C has 369.5. The differences between A and B mainly lie elsewhere - B allowed for a volcano (much like Pinatubo), and of course there are other gases, including CFC's, which were still being emitted in 1988. Measured CO2 fell a little short of Scenarios A and B, and methane fell quite a lot short, as did CFCs. So overall, the actual scenario that unfolded was between B and C.

Remember, Hansen was not just predicting for the 2010-15 period. In fact, his GISS Ts index tracked Scenario B quite well until 2010, then his model warmed while the Earth didn't. But then the model stabilised while lately the Earth has warmed, so once again the Scenario B projections are coming close. Since the projections actually cool now to 2017, it's likely that surface air observation series will be warmer than Scen B. GISS Ts corresponds to the actual air measure that his model provided. Land/ocean indices include SST, which was not the practice in 1988.

So in the graphic below, you can choose with radio buttons which indices to plot. You can enter a prior offset if you wish. It's hard to erase on a HTML canvas, so there is a clear all button to let you start again. The data is annual average; 2015 is average to date. You can check the earlier post for more detail.

Update - I have hopefully improved the Javascript to keep everything together.




Friday, October 23, 2015

Looking at GHCN V4 beta

The long-awaited GHCN V4 is out in beta, here (dir). The readme file is here. It is a greatly increased dataset, which transfers a lot of data from the large daily archive to monthly. There are 26129 stations in the inventory, instead of 7280. 11741 reported in September, where usually only about 1800 report in GHCN V3. But it is not so clear that the extra numbers add a great deal. In V3, the stations were reasonably evenly distributed, except for a lot in the US, and some bare patches. In V4, there seem to be more regions with unnecessary coverage, and the bare patches, at least in stations currently reporting, are not much improved.

I'll be interested in practicalities such as how promptly the data will appear in each month. GHCN V3 was prone to aberrations in reporting; that may get worse. We'll see. Anyway, I've run it through TempLS grid. The initial run of the mesh version will take several hours, so I'm waiting to get some decisions right. It's possible I should opt for a subset of stations, and I'll probably want to modify the policy on SST stations. Currently I reduce from a 2°x2° grid to a 4°x4°, mainly because otherwise SST would be over-treated relative to land. This is partly to reduce effort, but also to reduce the tendency for SST values to drift into undercovered land regions. Now the original SST grid is comparable to land, so the case for saving effort is less. The encroachment issue may remain.

The other issue is whether the land data should be pruned in some way. I think it probably should.

Anyway, I'll show below the WebGL plot, with land stations marked, for September 2015, done by TempLS grid. I think it is the best way to see the distribution. You can zoom with the right button (N-S motion), and Shift-Click to show details of the nearest station. The gadget is similar to the maintained GHCN V3 monthly page, which you can use for comparison. You can move the earth by dragging, or by clicking on the top right map. Incidentally, the gadget download is now about 4Mb, and may take a few seconds. I'll work on this.




The plot shows the shaded anomalies for September. Incidentally, the average was 0.788°C, vs 0.761°C for V3. Generally, the differences in TempLS grid are very small. August was almost identical. You can see that some regions have very dense cover, for example, Germany, Japan, Australia and the US. Regions that were sparsely covered in V3, such as the swathe through Namibia, Zaire etc, are still sparse. I don't see much improvement in the Antarctic, and the Arctic has some extra stations, but still not good coverage.

Here is a list of the top 20 countries reporting in September. You can see the excessive numbers in the US, Australia and Canada, and proportionately, in Germany and Japan. Almost half are in the US. I think I will have to thin these out. The inventory, however, is now just bare data of lat, lon, alt and name. So it isn't easy to pick out rurals, for example.

Number  Country
5212    United States
711     Canada
577     Australia
495     Russia
213     China
152     Japan
152     Germany
82      Spain
79      India
76      Brazil
73      Indonesia
69      Argentina
62      Mexico
61      Kazakhstan
50      Turkey
49      Ukraine
49      Algeria
47      France
44      Thailand
41      Mongolia



Thursday, October 22, 2015

NOAA Global anomaly up 0.01°C in September

The NOAA monthly report is here. That is a small change - GISS was steady, as was TempLS Grid, which tends to track with NOAA. But August was very warm.

NOAA say they are continuing to transition to GHCN V3.3. That is of interest, because as reader Olof noted, GHCN V4 is now out in beta version. It is early beta, though, and I think it will be a long while before NOAA is using it. I've been taking a look, and should report soon.

In other news, the NCEP/NCAR index continues very hot for October. I commented here on a remarkable peak early in the month. It eased off from that, but only down to the level of earlier peaks, and is now rising again. With 19 days now gone, and the latest readings above the month-to-date average of 0.609°C, it will be by far the hottest month anomaly in the record. That index has anomaly base 1994-2013; on the 1951-80 base of GISS, the level would be 1.217°C.


Monday, October 19, 2015

How well do temperature indices agree?

In comparing TempLS integration methods, I was impressed by how RMS differences gave a fairly stable measure of agreement, which was quite informative about the processes. So I wanted to apply the same measure to a wider group of published temperature indices, which would also put the differences between TempLS variants in that context.

There are too many pairings to show time series plots, but I can show a tableau of differences over a fixed period. I chose the last 35 years, to include the satellite measures.

It is shown below the fold as a table of colored squares. It tells many things. The main surface measures agree well, HADCRUT and NOAA particularly. As expected, TempLS grid (and infilled) agree well with HAD and NOAA, while TempLS mesh agrees fairly well with GISS. Between classes (land/ocean, land, SST and satellite) there is less agreement. Within other classes, SST measures agree well, satellite only moderately, and land poorly. This probably partly reflects the underlying variability of those classes.

As an interesting side issue, I have now included TempLS variants using adjusted GHCN. It made no visible difference to any of the comparisons. The RMS difference between similar methods was so small that it created a problem for my color scheme. I colored according to the log RMS, since otherwise most colors would be used exploring the differences between things not expected to align, like land and SST. But the small difference due to adjustment then so stretched the scale, that few colors remained to describe the pairings of major indices. So I had to truncate the color scheme, as will be explained below.

I am now including the adjusted version of TempLS mesh in the regularly updated plot, from which you can also access the monthly averages.

To recap, I am calculating pairwise the square root of the mean squares of differences, monthwise. I subtract the mean of each data over the 35 years (to Sep 2015) before differencing. Colors are according to the log of this measure. The rainbow scheme has red for the closest agreement. The red end of the scale finishes at the closest pairing involving at least one non-TempLS set. Pairings beyond that red end are shown in a brick red. Later I'll show color schemes with this cut-off relaxed. So here is the pairwise plot, with key in °C. If you want the numbers, they are here html, csv
Abbrev       Dataset name                        Link
HadCRUT      HadCRUT 4 land/sea temp anomaly     link
GISSlo       GISS land/sea temp anomaly          link
NOAAlo       NOAA land/sea temp anomaly          link
UAH6.beta    UAH lower trop anomaly              link
RSS-MSU      RSS-MSU Lower trop anomaly          link
TempLSgrid   TempLS grid weighting               link
BESTlo       BEST Land/Ocean                     link
C@Wkrig      Cowtan/Way Had4 Kriging             link
TempLSmesh   TempLS mesh weighting               link
BESTla       BEST Land                           link
GISS Ts      GISS Ts Met stations temp anomaly   link
CRUTEM 4     CRUTEM CRU global mean Stations     link
NOAAla       NOAA land temp anomaly              link
HADSST3      HADSST3 Sea Surface                 link
NOAAsst      NOAA sea temp anomaly               link

Some points to make, in no particular order:
  • TempLS interactions are bottom right. Adj means variants using adjusted GHCN. You can see that the differences in integration method make much less difference than the variation elsewhere between different indices/datasets.
  • The difference due solely to adjustment is even less - this will be quantified better below.
  • The main global surface indices are top left. NOAA and HADCRUT are particularly close. I'll show comparisons with TempLS in a later plot. BEST agrees moderately with the others; C&W (Cowtan and Way kriging) notably better with GISS and worse with NOAA, and only moderately with HADCRUT, which it sought to improve (meaning probably that it succeeded). The agreement with GISS makes sense, since both improve coverage by interpolation.
  • The troposphere indices RSS and UAH agree only moderately with each other, and with others not much at all.
  • The land indices do not agree much with each other, and BEST and NOAA diverge widely from other measures. CRUTEM and GISS Ts less so. Of course, GISS Ts is land data, but weighted to try for global coverage.
  • SST data agree well with each other, and not so much with global (about as well as UAH and RSS). Some agreement is expected, since they are a big component of the global measures.


Here is a plot of just the global surface measures. It shows again how there is a GISS family and a HADCRUT/NOAA group. The distinction seems to be on whether interpolation is used for complete coverage, upweighting polar data.


And here are the plots with the color maps extended. On the left the cut-off level is the minimum of the TempLS plots with different methods. It emphasises how little difference integration method makes compared with differing indices. And on the right is the map with no cut-off. You can see that it is now dominated by the four cases where only adjustment to GHCN varies. Otherwise, same data, same method. Adjustment makes very little difference. It also shows why I originally restricted the color range. In this new plot, everything else is blue or green.












Saturday, October 17, 2015

New integration methods for global temperature

To get a spatial average, you need a spatial integral. This process has been at the heart of my development of TempLS over the years. A numerical integral from data points ends up being a weighted sum of those points. In the TempLS algorithm, what is actually needed are those weights. But you get them by figuring out how best to integrate.

I started, over five years ago, using a scheme sometimes used in indices. Divide the surface into lat/lon cells, find the average for each cell with data, then make an area-weighted sum of those. I've called that the grid version, and it has worked quite well. I noted last year that it tracked the NOAA index very closely. That is still pretty much true. But a problem is that some regions have many empty cells, and these are treated as if they were at the global average, which may be a biased estimate.
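As a reminder of what the grid method amounts to, here is a hedged R sketch (the station data frame st and its columns are assumed for illustration). Note that cells without data simply drop out of the weighted average, which is what makes them behave as if they were at the global mean:

# Grid method: lat/lon cells, cell means, cos(lat)-weighted average over occupied cells.
# st: assumed data frame with columns lat, lon, anom for one month.
grid_avg <- function(st, cellsize = 5) {
  ilat <- floor((st$lat + 90) / cellsize)          # cell row index
  ilon <- floor((st$lon %% 360) / cellsize)        # cell column index
  cellmean <- tapply(st$anom, list(ilat, ilon), mean, na.rm = TRUE)
  midlat <- (as.numeric(rownames(cellmean)) + 0.5) * cellsize - 90
  w <- cos(midlat * pi / 180)                      # area weight by latitude
  W <- matrix(w, nrow = nrow(cellmean), ncol = ncol(cellmean))
  ok <- !is.na(cellmean)                           # only cells with data contribute
  sum(cellmean[ok] * W[ok]) / sum(W[ok])
}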

Then I added a method based on an irregular triangle mesh. Basically, you linearly interpolate between data points and integrate that approximation, as in finite elements. The advantage is that every area is approximated by local data. It has been my favoured version, and I think it still is.

I have recently described two new methods, which I expect to be also good. My idea in pursuing them is that you can have more confidence if methods based on different principles give concordant results. This post reports on that.

The first new method, mentioned here, uses spherical harmonics (SH). Again you integrate an approximant, formed by least squares fitting (regression). Integration is easy, because all but one (the zeroth, constant) of the SH give zero.

The second I described more recently. It is an upgrade of the original grid method. First it uses a cubed sphere to avoid having the big range of element areas that lat/lon has near the poles. And then it has a scheme for locally interpolating grid values which have no internal data.

I have now incorporated all four methods as options in TempLS. That involved some reorganisation, so I'll call the result Ver 3.1, and post it some time soon. But for now, I just want to report on that question of whether the "better" methods actually do produce more concordant results with TempLS.

The first test is a simple plot. It's monthly data, so I'll show just the last five years. For "Infilled" (enhanced grid), I'm using a 16x16 grid on each face, with the optimisation described here. For SH, I'm using L=10, i.e. 121 functions. "Grid" and "Mesh" are just the methods I use for monthly reports.



The results aren't very clear, except that the simple grid method (black) does seem to be a frequent outlier. Overall, the concordance does seem good. You can compare with the plots of other indices here.

So I've made a different kind of plot. It shows the RMS difference between the methods, pairwise. By RMS I mean the square root of the average sum squares of difference, from now back by the number of years on the x-axis. Like a running standard deviation of difference.
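The measure itself is simple. A hedged R sketch (the two series are assumed to be aligned monthly vectors; mesh and sh below are placeholder names), removing each series' own mean over the window before differencing:

# RMS difference between two monthly series x and y over the last k months,
# after removing each series' own mean over that window.
rms_diff <- function(x, y, k) {
  xs <- tail(x, k); ys <- tail(y, k)
  sqrt(mean(((xs - mean(xs)) - (ys - mean(ys)))^2, na.rm = TRUE))
}
# running curve, 1 to 60 years back:
# sapply(1:60, function(yr) rms_diff(mesh, sh, 12 * yr))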



This is clearer. The two upper curves are of simple grid. The next down (black) is of simple grid vs enhanced; perhaps not surprising that they show more agreement. But the advanced methods agree more. Best is mesh vs SH, then mesh vs infill. An interesting aspect is that all the curves involving SH head north (bad) going back more than sixty years. I think this is because the SH set allows for relatively high frequencies, and when large datafree sections start to appear, they can engage in large fluctuations there without restraint.

There is a reason why there is somewhat better agreement in the range 25-55 years ago. This is the anomaly base region, where they are forced to agree in mean. But that is a small effect.

Of course, we don't have an absolute measure of what is best. But I think the fact that the mesh method is involved in the best agreements speaks in its favour. The best RMS agreement is less than 0.03°C which I think is pretty good. It gives more confidence in the methods, and, if that were needed, in the very concept of a global average anomaly.




Monday, October 12, 2015

GISS September steady at 0.81°C

The GISS global anomaly is out. The timing is odd - it appeared on a Sunday, but seems to have been prepared on Friday 9 Oct, which is very early. It was 0.81°C, same as August. I have been writing about the somewhat disparate estimates - NCEP/NCAR was up 0.06, TempLS grid was steady, and TempLS mesh down 0.05°C. I thought differing estimates of Antarctica played a role.

This September was not (for once) the highest ever in the GISS record; it came second behind 2014 at 0.90°C. That was one of the warmest months of 2014. Sep 2015 was still well above the record annual average (2014) of 0.75°C, keeping 2015 on track to be hottest ever.

I'll show this time first the GISS polar projections:


Antarctica is indeed cold, though not as uniformly as TempLS had it. But there is also a substantial grey area in the coldest part. That will be assigned the global mean value in averaging. Here I think that is a warm bias; TempLS mesh is probably more accurate there. Below the jump I'll show the usual lat/lon plot and comparison with TempLS.

Update - I see that with earlier comparison plots with TempLS ver 3 (since June) I have been inappropriately subtracting the mean, so the anomaly base was not 1951-80 as stated. Now fixed.



As with the TempLS spherical harmonics map below, it shows cold in Antarctica, W Siberia, N Atlantic, and warmth in US/E Canada, E Europe, E Pacific (ENSO) and Brazil.




Sunday, October 11, 2015

Analysis of the October spike in NCEP/NCAR

I have been plotting a global temperature index based on daily NCEP/NCAR reanalysis. A big spike to unprecedented levels in October has attracted comment. The daily record is characterised by sharp spikes and dips, but this was exceptional. I've long been curious about the local temperature changes that might cause such a spike, so I analysed this one.

I calculated trends for each of the 10512 cells of the lat/lon grid over a period of 10 days, from 27 September to 6 October. In this time, the index rose from 0.271°C to 0.865°C. The trend of the global average was 0.0634°C/day.
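The per-cell trends are just ordinary least-squares slopes over the ten daily values in each cell. A hedged R sketch, with temp assumed to be an array [lon, lat, day] holding those ten days of reanalysis data:

# OLS trend (°C/day) in each grid cell over the 10 days 27 Sep - 6 Oct.
# temp: assumed array [lon, lat, day] with the 10 daily cell values.
days  <- 1:10
slope <- function(y) cov(days, y) / var(days)   # least-squares slope
trend <- apply(temp, c(1, 2), slope)            # one trend per cell
range(trend)   # local extremes; the text quotes values up to about 3°C/day
# the quoted global trend of 0.0634°C/day is the area-weighted average of these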

I plotted them with WebGL, with the usual trackball sphere you can see below the fold. You can also click to see the trend at any point.

As people suspected, and I blogged here, Antarctica had been cold in September, and there was a reversal which explains much of the rise. Local trends are very high - up to 3°C per day, which is 30°C over 10 days. But there was also a broad swathe of also very large rises through China and central Asia, continuing through Iran. Further north, there was a cold patch on each side of the Urals, extending to Scandinavia. The US also cooled.

As a sanity check, I note that it says southern Australia also warmed rapidly. Melbourne is in the centre of that, and it shows various trends, as high as 1.264 °C/day. We did indeed have a rapid run-up. Late September was quite cool, which means a daily average of about 13°C, but 5-6 October were very warm, averaging about 27°C, following other warm days. The implied trend rise of about 12.6°C over the ten days is not unreasonable.

Anyway, the WebGL is below the jump. Remember, you can click any shading for a numeric trend to show.


TempLS September and Antarctica

I wrote last about possible reasons for the drop in temperature in September shown by TempLS mesh, contrasted with the stasis shown by the grid weighting, and the rise shown by the NCEP/NCAR index. I thought that it was due to the large negative contribution assigned by mesh to Antarctica, which has low weighting in grid.

I should have remembered that a while ago I reported that the WebGL shaded map of station temperatures is now updated daily, and is actually a better guide to what is reporting than the map of dots that I show in the daily TempLS report. I upgraded the WebGL to now show, on request, the mesh as well as the stations. That makes clearer what is happening in Antarctica, and what is reporting.

So here is a snapshot of what it reports for September, compared with August:

[Snapshots: August 2015 | September 2015]


You can check the original for color scale. It shows that in September, both the land of Antarctica (mostly) and much of the adjacent sea were cold, remembering that there is a lot of sea ice as well. In August, it varied more between cold and normal. But also, it shows that basically all the Antarctic reports for September are in.



So it shows why a few Antarctic stations in the mesh version carry high weight. The weight is the area of the adjacent triangles. The weighted sum of the land stations brings down the month average by about 0.1°C. But parts of the sea are very cold too, and they can also have large triangles attached. Generally, the ocean has a regular mesh, and approx equal weighting. These weighted cold parts bring down the average relative to the grid version, which doesn't have that heavy weighting.

So, you may ask, is the mesh version wrong? I don't think so. The Antarctic results are uncertain. The grid version infills much of the area with global average values. This is conservative from a noise point of view, but hard to justify as a good estimate. The mesh version gives about as good an estimate as you can get, but is necessarily sensitive to a small base of data.

I think the WebGL mesh plot should really replace the dot picture and also the rectangular shaded plot. It's hard to compare directly with GISS, though, and is a bit data-heavy for loading. I'll try to work out what is best.










Saturday, October 10, 2015

TempLS Mesh down 0.05°C in September

The Moyhu TempLS mesh index was down, at 0.657°C, compared with 0.707°C in August. This was at variance with the reanalysis index, which was up by 0.06, and with TempLS grid, which was steady at a rather higher value of 0.749°C. This is all based on 4168 stations reporting; we can expect 200-300 more reports to come.

The warm areas were Russia west of the Urals, E Canada and the US mid-west, and Brazil. Also E Pacific (but not SE). This is the same pattern as with the NCEP/NCAR reanalysis. The very cold place was Antarctica.

I was curious about the reason for discrepancy, especially with the TempLS versions, which just integrate the same data with different weights. Each month, I publish in the Mesh report a plot of attributions, described here. This shows the breakdown of the contributions to the weighted average. You can see, for example, that the contribution from Antarctica was large negative, nearly 0.1. That means that the global average would have been 0.1°C higher if Antarctica had been average instead of cold.

So I made a similar plot comparing the contributions for both grid and mesh, just in September.



You can see that Antarctica made a very small negative contribution to the grid average. That is because the same data has much smaller weight. Each station there sits in just one cell, and is weighted with that cell's area, which is further reduced by converging longitudes. Most of the area consists of unoccupied cells, which get the default global average, i.e. they don't reflect the measured Antarctic cold. But the mesh weighting, in effect, spreads the whole Antarctic land area over the few reporting stations.
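By contrast, here is a sketch of the grid-style weighting being described (a simplified stand-in, not the TempLS grid code): stations are binned into lat/lon cells, occupied cells are weighted by area (hence the cos(latitude) factor), and empty cells are simply left out, which is equivalent to filling them with the mean of the occupied cells:

```python
import numpy as np

def grid_mean(lat, lon, anomaly, dlat=5.0, dlon=5.0):
    """Area-weighted mean of cell averages on a lat/lon grid.
    Empty cells are omitted, i.e. implicitly set to the global mean."""
    nlat, nlon = int(180 / dlat), int(360 / dlon)
    ilat = np.minimum(((lat + 90) / dlat).astype(int), nlat - 1)
    ilon = np.minimum(((lon % 360) / dlon).astype(int), nlon - 1)
    cell = ilat * nlon + ilon
    sums = np.bincount(cell, weights=anomaly, minlength=nlat * nlon)
    counts = np.bincount(cell, minlength=nlat * nlon)
    lat_centres = -90 + dlat * (np.arange(nlat) + 0.5)
    area = np.repeat(np.cos(np.radians(lat_centres)), nlon)  # converging longitudes
    occ = counts > 0
    return (area[occ] * sums[occ] / counts[occ]).sum() / area[occ].sum()
```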

You can see other effects; particularly that the sea contributes less with mesh. This again reflects Antarctica. Some of the big triangles there terminate in the sea, and those cold points get upweighted too, but the accounting assigns that part to the sea total. These two negatives explain why in September the grid mean was almost 0.1°C higher than the mesh. Working the other way, warm sparse regions like the Arctic, Africa and South America made a greater warm contribution with mesh, which narrows the difference a little.

I'm developing new integration methods, Grid with Infill and Spherical Harmonics based. I'm hoping these will give more consistency. In September, I think the mesh is formally more accurate, but with increased uncertainty, because of the high dependence on a few Antarctic stations. If more Antarctic data comes in, the average could change.


Thursday, October 8, 2015

Rapid rise in NCEP/NCAR index

The local NCEP/NCAR index has risen rapidly in recent days. Not too much can be made of this, because it is a volatile index. But on 5 October, it reached 0.792°C. That is on an anomaly base of 1994-2013. It's about 0.2°C higher than anything in 2014. I've put a CSV file of daily values from start 2014 here.
Update: I have replaced the CSV at that link with a zipfile that contains the 2014/5 csv, a 1994-2013 csv, and a readme.
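If you want to work with those files, something like this (pandas; the file and column names here are guesses at the layout, so check the readme in the zip) will give monthly means of the daily anomalies:

```python
import pandas as pd

# Hypothetical names; the zipfile above contains the actual 2014/15 daily
# values (anomalies on the 1994-2013 base) plus the base-period file.
daily = pd.read_csv("ncep_daily_2014_15.csv", parse_dates=["date"])
monthly = daily.set_index("date")["anomaly"].resample("MS").mean()
print(monthly.tail())   # e.g. to watch the October average firm up
```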

I see the associated WebGL map noticed the heat in Southern Australia. In Melbourne, we had two days at 35°C, which is very high for just two weeks after the equinox. And bad bushfires, also very unusual for early October.

Early results from TempLS mesh for Sept show a fall relative to August; TempLS grid is little changed. I'll post on that when more data is in.

Update. Today 0.865. Extraordinary. I naturally wonder if something is going wrong with my program, but Joe Bastardi is noticing too:



Update. Ned W has made a histogram (see comments) showing how unusual these readings are.







Wednesday, October 7, 2015

On partial derivatives

People have been arguing about partial derivatives (ATTP, Stoat, Lucia). It arises from a series of posts by David Evans. He is someone who has a tendency to find that climate science is all wrong, and he has discovered the right way. See Stoat for the usual unvarnished unbiased account. Anyway, DE has been saying that there is some fundamental issue with partial derivatives. This can resonate, because a lot of people, like DE, do not understand them.

I don't want to spend much time on DE's whole series. The reason is that, as noted by many, he creates hopeless confusion about the actual models he is talking about. He expounds the "basic model" of climate science, with no reference to a location where the reader can find out who advances such a model or what they say about it. It is a straw man. It may well be a reasonable model; that seems to be his defence. But there is no use setting up a model, justifying it as reasonable, and then criticising it for flaws, unless you relate it to what someone else is actually saying. And of course, his sympathetic readers think he's talking about GCMs. When challenged on this, he just says that GCMs inherit the same faulty structure, or some such, with no justification. He actually writes nothing on how a real GCM works, and I don't think he knows.

So I'll focus on the partial derivatives issue, which has attracted discussion. Episode 4 is headlined "Error 1: partial derivatives". His wife says, in the intro:
"The big problem here is that a model built on the misuse of a basic maths technique that cannot be tested, should not ever, as in never, be described as 95% certain. Resting a theory on unverifiable and hypothetical quantities is asking for trouble. "
Sounds bad, and was duly written up in ominous fashion by WUWT and Bishop Hill, and even echoed in the Murdoch press. The main text says:
The partial derivatives of dependent variables are strictly hypothetical and not empirically verifiable
He expands:
When a quantity depends on dependent variables (variables that depend on or affect one another), a partial derivative of the quantity “has no definite meaning” (from Auroux 2010, who gives a worked example), because of ambiguity over which variables are truly held constant and which change because they depend on the variable allowed to change.

So even if a mathematical expression for the net TOA downward flux G as a function of surface temperature and the other climate variables somehow existed, and a technical application of the partial differentiation rules produced something, we would not be sure what that something was — so it would be of little use in a model, let alone for determining something as vital as climate sensitivity.
So I looked up Auroux. The story is here. DE has simply taken an elementary introduction, which pointed out the ambiguity of the initial notation and explained what more is required (a suffix saying what is held constant) to specify it properly, and assumed, apparently because he did not read to the bottom of the page, that it was describing an inadequacy of the partial derivative concept.


Multivariate Calculus

Partial derivatives can seem confusing because they mix the calculus treatment of non-linearity with dependent and independent variables. But there is an essential simplification:
  • The calculus part simply says that locally, non-linear functions can be approximated as linear. The considerations are basically the same with one variable or many.
  • So all the issues of dependence, chain rule etc. are present equally in the approximating linear systems, and you can sort them out there.
That is a great relief, because partial derivatives are messy to write down on a blog, so I won't try.

Dependence

I'll use a simplified version of his radiative balance example, with G as net TOA flux, here taken to depend on T, CO2 and H2O. The gas quantities are short for partial pressure, and T is (at least for DE) surface temperature. So, linearized for small perturbations,

G = a1*T + a2*CO2 + a3*H2O

Now there may be dependencies, but that is a stand-alone equation. It expresses how G depends on those measurable quantities. It is true that the measured H2O may depend on T, but you don't need to know that. In fact, maybe sometimes the two are linked, sometimes not. If you put a pool cover over the oceans, the dependence might change, but the equation which expresses radiative balance would not.

If you do want to add a dependence relation

H2O = a4*T

then this is simply an extra equation in your system, and you can use it to reduce the number of variables:

G = (a1+a3*a4)*T + a2*CO2

And since at equilibrium you may want to say G=0, then

T = -a2*CO2/(a1+a3*a4)

expresses the algebra of feedback. But this is just standard linear systems. It doesn't say anything about the validity or otherwise of partial derivatives.
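As a check on that algebra, here is a small sympy sketch, with symbols named as in the equations above:

```python
import sympy as sp

T, CO2, H2O, a1, a2, a3, a4 = sp.symbols("T CO2 H2O a1 a2 a3 a4")

G = a1 * T + a2 * CO2 + a3 * H2O       # the stand-alone linearised balance
G_dep = G.subs(H2O, a4 * T)            # impose the dependence H2O = a4*T

# Setting G = 0 at equilibrium and solving for T gives the feedback result
# T = -a2*CO2/(a1 + a3*a4), as above.
print(sp.solve(sp.Eq(G_dep, 0), T))
```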


Saturday, October 3, 2015

NCEP/NCAR index up 0.06°C in September

The Moyhu NCEP/NCAR index from the reanalysis data was up from 0.306°C to 0.368°C in September. That makes September warmer by a large margin (0.05°C) than anything in that index in recent years. It looked likely to be even warmer, but cooled off a bit at the end.

A similar rise in GISS would bring it to 0.87°C. Putting the NCEP index on the 1951-1980 base (using GISS) would make it 0.95°C. I'd expect something in between. GISS' hottest month anomaly was Jan 2007 at 0.97°C. Hottest September (GISS) was in 2014, at 0.90°C. It was the hottest month of 2014.

The global map shows something unusual - warmth in the US and eastern Canada. And a huge warm patch in the east Pacific. Mostly cold in Antarctica and Australia, but very warm in eastern Europe up to the Urals, and in the Middle East.






Thursday, October 1, 2015

Optimised gridding for temperature

In a previous post I showed how a grid based on projecting a gridded cube onto a sphere could improve on a lat/lon grid, with a much less extreme singularity, and sensible neighbor relations between cells, which I could use for diffusion infilling. Victor Venema suggested that an icosahedron would be better. That is because when you project a face onto the sphere, element distortion gets worse away from the center, and projected icosahedron faces have 2/5 the area of cube faces.

I have been revising my thinking and coding to have enough generality to make icosahedrons easy. But I also thought of a way to fix most of the distortion in a cube mapping. But first I'll just review why we want that uniformity.

Grid criteria

The main reason why uniformity is good is that the error in integrating is determined by the largest cells. So with size variation, you need more cells in total. This becomes more significant when using a grid for integration of scattered points, because we expect that there is an optimum size. Too big and you have to worry about sample distribution within a cell; too small and there are too many empty cells. Even though I'm not sure where the optimum is, it's clear that you need reasonable uniformity to implement such an optimum.
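Here is a toy illustration of that trade-off, on a unit square rather than the sphere, with deliberately clustered sample points and a known true mean. Nothing here is TempLS; it just shows that the integration error typically falls and then rises again as the cells shrink:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x, y: np.sin(3 * x) * np.cos(2 * y)      # smooth test field

g = np.linspace(0, 1, 1001)
true_mean = f(*np.meshgrid(g, g)).mean()            # reference value

x, y = rng.random(500) ** 2, rng.random(500)        # "stations", clustered in x
v = f(x, y)

for n in (2, 4, 8, 16, 32, 64, 128):                # cells per side
    ix = np.minimum((x * n).astype(int), n - 1)
    iy = np.minimum((y * n).astype(int), n - 1)
    cell = ix * n + iy
    sums = np.bincount(cell, weights=v, minlength=n * n)
    cnts = np.bincount(cell, minlength=n * n)
    occ = cnts > 0
    est = (sums[occ] / cnts[occ]).mean()            # empty cells -> mean of the rest
    print(n, round(abs(est - true_mean), 4))
```

Coarse cells suffer from the uneven sampling within them; very fine cells leave so many empties that the estimate drifts towards a plain station average.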

I wrote a while ago about a tessellation that created equal area cells, but did not have the grid property of each cell exactly adjoining four others. This is not so useful for my diffusion infill, where I need to recognise neighbors. That also creates sensitivity to uniformity, since stepping forward (in diffusion) should spread over equal distances.

Optimised grid

I'll jump ahead at this stage to show the new grid. I'll explain below the fold how it is derived and of course, there will be a WebGL display. Both grids are based on a similarly placed cube. The left is the direct projection; you can see better detail in the previous post. Top row is just the geometry (16x16), the bottom shows the effect of varying data (as before 24x24, April 2015 TempLS). I've kept the coloring convention of a different checkerboard on each face, with drab colors for empty cells, and white lines showing neighbor connections that re-weight for empty cells.


The right is the same with the new mapping. You can see that near the cube corner (SW) in the left pic, the cells get small, and a lot become empty. On the right, the corner cells actually have larger area than at the face centre, and there is a minimum size in between. Area is within ±15% of the centre value. In the old grid, corner cells had only about 20% of the central area. So there are no longer a lot of empty cells near the corner. Instead, there are a few more in the interior (where cell size is at its minimum).

In that previous post, I showed a table of discrepancies in integrating a set of spherical harmonics over the irregularly distributed stations:
L              1           2           3           4           5
Full grid      0           0           0           0           1e-06
Infilled grid  8.8e-05     0.00029     0.001045    0.002015    0.003635
No infill      0.007632    0.027335    0.049327    0.064493    0.075291

In the new grid, the corresponding results are:
L              1           2           3           4           5
Full grid      0           0           0           0           1e-06
Infilled grid  8e-06       0.00019     0.000645    0.001348    0.002529
No infill      0.004758    0.02558     0.047794    0.0601      0.069348

Simply integrating the SH on the grid (top row) works very well in either. Just omitting the empty cells (bottom row), the new grid gives a modest improvement. But for the case of interest, with the infilling scheme, the result is considerably better than with the old grid.

Optimising

I think of two spaces - z, the actual sphere where the grid ends up, and u, the grid on each cube face. But in fact u can be thought of abstractly. We just need 6 rectangular grids for u, and a mapping from u to z.

But the mapping need not be simple projection. You could, for example, re-map the u space and then project. If f is a one-one mapping of the square of u onto itself, then the change in area going to z is:
det(f'(u)) / (1 + |f(u)|²)^(3/2)
where the second factor comes from the projection, and includes the inverse square magnification and a cos term for the different angles of the du element and the sphere. f' is a Jacobian derivative on the 2D space, and the determinant gives the area ratio of that u mapping.

I use the mapping f(u) = (1/r) tan(r*u) on each parameter separately. The actual form of f doesn't matter very much. I choose tan because if tan(1/r)=sqrt(2) it gives the tessellation of great circles separated by equal longitude angle. That is already much better than simple projection (r=0). I can check by calculating what would happen to a square in the centre, mid-side, and corner, for various r:

1/tan(1/r)     0        0.7071   1        1.1312   1.2
central        1        1        1        1        1
midside        0.3536   0.5303   0.7071   0.8059   0.8627
corner         0.1925   0.433    0.7698   1        1.1458
The top row actually shows a slightly modified r, and the bottom two rows show ratios of area to the central area. For small r it is small, so areas vary by about 5:1. For the case of great circle slicing, it is about 2:1, and for larger r it gets better. I've chosen tan(1/r)=5/6. That means the midside areas are down by about 15%, and so most of the area is within that range. At the corners, a small section, it rises to +15%. There is associated distortion, so I don't think it is worth pushing r higher. That would improve average uniformity, but the ends would then be both large and distorted.
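As a check on those last figures, here is a sketch that evaluates the area formula above for the remapped projection. I have written the map in the normalised form f(u) = tan(a*u)/tan(a), so that face corners stay at ±1; on that convention the table's top row appears to correspond to tan(a), and tan(a) = 1.2 reproduces the ±15% figures:

```python
import numpy as np

def area_ratio(u1, u2, a):
    """Area magnification at (u1, u2) on a cube face, relative to the centre,
    after remapping each coordinate by f(u) = tan(a*u)/tan(a) and projecting
    the face z=1 onto the unit sphere: det(f') / (1 + |f|^2)^(3/2)."""
    f = lambda u: np.tan(a * u) / np.tan(a)
    df = lambda u: a / (np.tan(a) * np.cos(a * u) ** 2)
    raw = df(u1) * df(u2) / (1 + f(u1) ** 2 + f(u2) ** 2) ** 1.5
    return raw / df(0.0) ** 2                      # normalise by the centre value

a = np.arctan(1.2)                                 # the chosen setting, tan(a) = 1.2
print(round(area_ratio(1, 0, a), 4))               # mid-side: about 0.86
print(round(area_ratio(1, 1, a), 4))               # corner:   about 1.15
```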

So here finally is the 24x24 WebGL plot equivalent to that from last time. Again it shows empty cells in drab colors, and white lines marking the direction of reallocation of weights for infill. The resulting improved integration is discussed above.