Friday, December 23, 2016

Merry Christmas to all

And a viable New Year (as our CFO used to wish)

As a Christmas gift, I've been sparring at WUWT with folk who insist that raw temperature data is hidden/deleted/not available. I point them, of course, to GHCN Daily. But there was much talk of BoM, and I must admit that the data page isn't as obvious as it was. But it is there. And not only can you get an unadjusted daily file for just about any station they have, back to start, but you can download zipped csv files for max and min (but not together, unfortunately). And there is also extensive metadata.

On to interesting times. Here is Sou on Mike Mann's court victory. And Eli.

Friday, December 16, 2016

GISS rose 0.07°C in November.

GISS is up from 0.88°C in October to 0.95°C in November. That is similar to the NCEP/NCAR rise, but contrasts with a small drop in TempLS mesh and a larger one in TempLS grid. I think a lot of the changes this month will reflect the different treatment of the October freeze in Siberia and warmth in the Arctic.

Of course, the alt-news headline, even tweeted by the US House Science Committee, was that "world average temperatures have plummeted since the middle of the year at a faster and steeper rate than at any time in the recent past". There is no sign of that on closer inspection. Here is my plot comparing 2015/6 with 1997/8. It's from this post, where you can see other datasets similarly plotted:

Another point of interest at this time of year is whether 2016 will be a record. Actually, there is not much suspense; it seems certain in most indices. I've been tracking that at a post here (qv for details); I'll echo it here. The faint lines that extend the data show how the annual average will progress if temperatures continue at the current month's level:
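For what it's worth, the "faint line" extrapolation is easy to reproduce: assume the remaining months of the year stay at the latest month's value. A minimal Python sketch (hypothetical anomaly numbers, not the actual index values):

```python
def projected_annual_mean(monthly, latest=None):
    """Project the annual mean, assuming the remaining months
    of the year stay at the latest reported value."""
    if latest is None:
        latest = monthly[-1]
    remaining = 12 - len(monthly)
    return (sum(monthly) + remaining * latest) / 12

# Hypothetical anomalies for Jan-Nov (deg C); December assumed to repeat November.
months = [1.1, 1.3, 1.3, 1.1, 0.9, 0.8, 0.8, 1.0, 0.9, 0.9, 0.95]
print(round(projected_annual_mean(months), 3))  # 1.0
```

The record question then reduces to whether that projected mean can still fall below the previous record year.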

I'll show the regular GISS plot and TempLS comparison below the fold.

Thursday, December 15, 2016

Current global temps compared with CMIP 5

A plot has been in circulation for some time from John Christy. It is a version of one that he showed at a US Senate hearing, and is discussed here.

I don't know if it was ever accurate, but it ends in 2013, so obviously needs updating. It's also worth showing the other surface datasets, and definitely not showing the troposphere record, which I don't think is honest. The CMIP data is for surface, not troposphere.

So I have made my own version, using CMIP 5 data from KNMI. I have used their averages for the RCP groups, and their collection of 106 model runs (one per model/RCP), which is shown in the background. The complete data and R code for the plot are in a zipfile here.

It's a very different picture. The observations, as expected, are far more volatile than the multi-model means, and the slope is somewhat less, but is far from out of range. And of course, recent warming actually takes observations above the mean. I have set the anomaly base for all curves to 1981-2010, the WMO recommendation.

Sunday, December 11, 2016

Storing data for the winter.

I've been squirreling data away, and posting here, even before the current threats to US Science. But a post by Tamino prompted me to get a wriggle on. In particular, I decided to systematically post the files of monthly data that I use for graphs and tables on the latest temperature page. They are tables looking like this, but going back to 1850. My idea is to update weekly, and store monthly versions, so one can refer back. I have started doing that; there is a CSV file and an R data format file, with a readme, zipped. The URL for this month is
and it will change in the obvious way for future months. I'll probably put a table of links on the portals page, once a few have accumulated.

I'll look for other opportunities to back up. I'm actually more dependent on current data like GHCN, ERSST and AVHRR data, and the reanalysis. I'm more optimistic there, because that data is collected for weather forecasting, which has powerful clients. So while access may be bumpy, I don't think it will be lost.

Friday, December 9, 2016

Hansen's 1988 scenarios and outcome

While arguing at WUWT, eg here, about Hansen's projections, I've been encountering arguments about which scenario should be applied. I did discuss that in last month's post on the projections. But I have since looked up more information on what they were, and how they panned out. In this post, I'll review the files, numbers, compare with current, and post what data we have. I'll also review a discussion by Stephen McIntyre at Climate Audit in 2008, and show some of his graphs. The conclusion is that based on input and outcome, the temperatures should lie between scenarios B and C.

Thursday, December 8, 2016

Global TempLS unchanged in November; sea ice is low.

The November TempLS mesh index was virtually unchanged, at 0.687°C in November from 0.692°C in October. The TempLS grid index declined, from 0.626°C to 0.56°C. The disparity between mesh and grid is unusually large, and is caused by the polar warmth, which LS mesh is more sensitive to. Other indices generally rose in November; NCEP/NCAR index by 0.06°C, and also UAH lower troposphere (0.04°C).

The main map features are a cool band across Siberia, and big warmth in North America. The Antarctic was warm, as was the Arctic.

I mention sea ice again, because it is starting to get more general attention. Arctic ice remains at record low, although it didn't really get worse during the month, despite an actual melting episode mid-month. But Antarctic ice has been exceptionally low since mid October, and is now entering the fast-melt season. Here is a section of the radial plot:

Sunday, December 4, 2016

Where do GHCN monthly numbers come from? A demo.

I've been arguing at WUWT, eg here. More and more I find people saying that all surface measures are totally corrupt. Of course, they give no evidence or rational argument. And when sceptics do mount an effort to actually investigate, eg here, it falls in a heap. BEST was actually one such effort that was followed through, but ended up confirming the main indices. So of course that is corrupt too.

As linked, I do sometimes point out that I have been tracking for six years with an index, TempLS, which uses unadjusted GHCN and gets very similar results to GISS and others. I have posted the code, which is only about 200 lines, and I have posted monthly predictions (ahead of GISS and all) for about six years. But no, they say, GHCN unadjusted is corrupted too. All rigged by Hansen or someone before you see it.

The proper way to deal with this is for some such sceptic to actually follow through the quite transparent recording process, and try to find some error. But I see no inclination there to do that. Just shout louder.

So here I'll track through the process whereby readings in my country, from BoM, go through the WMO collection in CLIMAT forms, and so into the GHCN repository. That's partly to show how it can be done, if a sceptic ever was inclined to stop ranting and start investigating.
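As a preview of the arithmetic involved, here is a minimal Python sketch of how a CLIMAT-style monthly mean is formed from daily max/min readings (hypothetical numbers, not actual BoM data):

```python
def monthly_mean_temp(tmax, tmin):
    """Monthly mean temperature from daily max/min lists,
    using the conventional (Tmax + Tmin)/2 daily mean."""
    assert len(tmax) == len(tmin)
    daily = [(hi + lo) / 2 for hi, lo in zip(tmax, tmin)]
    return sum(daily) / len(daily)

# Hypothetical few days of station readings (deg C)
tmax = [25.0, 26.5, 24.0, 27.0]
tmin = [15.0, 16.5, 14.0, 13.0]
print(round(monthly_mean_temp(tmax, tmin), 2))
```

A sceptic could do exactly this with the downloadable BoM daily CSVs and compare against the CLIMAT and GHCN monthly values.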

Saturday, December 3, 2016

NCEP/NCAR November up 0.06°C - warmest since April.

The Moyhu NCEP/NCAR index rose from 0.419°C in October to 0.48°C in November. Not huge, but it makes November (just) the warmest month since April, which was an ENSO peak month. The change mirrors a rise to November of 0.04°C in UAH V5.6.

The month started with a big peak, then a dip, then a smaller peak, still current. The big feature was a cold band across Siberia, extending into N Pacific. But it was balanced by warmth in Arctic, Antarctic and N America.

Sea ice at both poles was unusually low, with even some days of melting in the Arctic. Arctic is still record low, but will probably become more normal. Antarctic is very low indeed, and seems to be getting more so, heading into peak melting season. Here is the recent part of the radial plot, where black is 2016, and colors are other recent years:

Monday, November 28, 2016

Spectral methods in GCMs - and some thoughts on CFD.

There has been a lot of discussion recently on the maths of GCMs. I have summarised some in a series on chaos and the Lorenz equations (here, here, and here). David Young has commented substantially on this thread, and raised the issue of the use of spectral methods in GCMs. These are used in the dynamical core, which relates pressure and velocity via the Navier-Stokes equations. They are time critical, because they require resolving sound waves, and so the speed performance here fixes that of the code as a whole. Spectral methods are used because they are fast. But some mystery is made of them, which I would like to try to dispel. But I'd like to do this in the context of some simplifying observations about CFD.
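Why spectral methods are attractive can be seen in one dimension: in Fourier space, differentiation is just multiplication by ik, exact to near machine precision for a smooth periodic field. A minimal NumPy sketch (an illustration, not GCM code):

```python
import numpy as np

def spectral_derivative(u):
    """Differentiate a periodic sample u on [0, 2*pi) via FFT:
    multiply each Fourier mode by i*k, then transform back."""
    n = len(u)
    k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(3 * x)
du = spectral_derivative(u)
err = np.max(np.abs(du - 3 * np.cos(3 * x)))
print(err)   # near machine precision for a smooth field
```

A finite-difference stencil of the same cost would be accurate only to a low polynomial order; that accuracy-per-gridpoint is what buys the speed.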

Wednesday, November 23, 2016

Update check on Hansen's 1988 projections

Hansen's famous 1988 paper used runs of an early GISS GCM to forecast temperatures for the next thirty years. These forecasts are now often checked against observations. I wrote about them here. That post had an active plotter which allowed you to superimpose various observation data on Hansen's original model results.

I did an update in 2015 here, and a lot of text from there is repeated here. I think Hansen's projections have stood up well, but they ran ahead of warming during the "pause" of around 2006-13. That pause is now over, so the interest is in whether Hansen's projection is still running warm.

I've updated to Oct 2016, or latest available. Hansen's original plot was matched to GISS Ts (met stations only), and used a baseline of 1951-80. I have used that base where possible, but for the satellite measures UAH and RSS I have matched to GISS Ts (Hansen's original index) over the 1981-2010 mean. But there is also a text window where you can enter your own offset if you have some other idea.
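The offset matching is just a difference of base-period means. A minimal Python sketch (hypothetical series, not the actual indices):

```python
def match_offset(series_a, series_b, base):
    """Offset to add to series_b so that its mean over the base
    period (a list of years) equals that of series_a."""
    mean = lambda s: sum(s[y] for y in base) / len(base)
    return mean(series_a) - mean(series_b)

# Hypothetical annual anomalies indexed by year
a = {1981: 0.30, 1982: 0.10, 1983: 0.35}
b = {1981: 0.10, 1982: -0.10, 1983: 0.15}
off = match_offset(a, b, [1981, 1982, 1983])
print(round(off, 3))  # 0.2
```

Adding `off` to every value of the second series puts both on the same anomaly base, which is all the "matching" amounts to.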

A reminder that Hansen did his calculations subject to three scenarios, A, B, C. GCM models do not predict the future of GHG levels, etc - that must be supplied as input. People like to argue about what these scenarios meant, and which is to be preferred. The only test that matters is what actually occurred. And the test of that is the actual GHG concentrations that he used, relative to what we now measure. The actual numbers are in files here. Scenario A, highest emissions, has 410 ppm in 2015. Scen B has 406, and Scen C has 369.5. The differences between A and B mainly lie elsewhere - B allowed for a volcano (much like Pinatubo), and of course there are other gases, including CFCs, which were still being emitted in 1988, but not much now. Measured CO2 fell a little short of Scenarios A and B, and methane fell quite a lot short, as did CFCs. So overall, the actual scenario that unfolded was between B and C.

Remember, Hansen was not just predicting for the 2010-16 period. In fact, his GISS Ts index tracked Scenario B quite well until 2010, then his model warmed while the Earth didn't. But then the model stabilised while lately the Earth has warmed, so once again the Scenario B projections are close. Since the projections actually cool from now to 2017, surface air observation series for now are warmer than Scen B (GISS). GISS Ts corresponds to the actual air measure that his model provided. Land/ocean indices include SST, which was not the practice in 1988. Hansen himself has expressed the view that the right measure of his projection now lies between Ts and Ts+SST.

So in the graphic below, you can choose with radio buttons which indices to plot. You can enter a prior offset if you wish. It's hard to erase on an HTML canvas, so there is a clear all button to let you start again. The data is annual average; 2016 is average to date. You can check the earlier post for more detail.

Monday, November 21, 2016

Chemistry of sequestration and carbon cycles.

There has been some recent discussion of carbon cycles. ATTP had a good series on ocean CO₂ uptake here. And in the context of sequestration, WUWT reported on some recent experiments with sequestering CO₂ in basic basalt rocks. The discussion showed that there is a lot people don't understand about the basic chemistry driving the carbon cycle, which this chemical sequestration tries to exploit.

Long term cycles and disruption

There is a finite amount of carbon in short term exchange with the atmosphere; it passes through forms where it is reduced by photosynthesis, and re-oxidised quite quickly, as it must be by the ubiquity of oxygen. And there is constant exchange with the upper layers of the ocean. Over millions of years, the amount of such carbon varied, reflected in atmospheric ppm. Carbonate rocks are unstable to heat; CO₂ is emitted by molten rock, most obviously appearing in volcanic eruptions. This would lead to indefinite accumulation, were it not for a process where basic rocks are weathered, exposing surfaces which can convert the CO₂ back to carbonate. This leads to a kind of balance.
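The conversion step can be summarised by the classic silicate weathering (Urey) reaction, in which a basic silicate ties CO₂ up as carbonate; schematically, for the calcium case:

```latex
\mathrm{CaSiO_3} + \mathrm{CO_2} \longrightarrow \mathrm{CaCO_3} + \mathrm{SiO_2}
```

Basalt is attractive for sequestration precisely because it is rich in these calcium and magnesium silicates.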

Humans are disrupting this by digging up and oxidising many gigatons of reduced carbon. The energy that enabled this reduction came from millions of years of photosynthesis and deposition, which prevented re-oxidation. Our burning is running far ahead of the long cycle, so the idea of the basalt absorption is accelerated weathering, probably through fracking etc. I have no strong views on whether this is feasible, but I'd like to talk about the driving chemistry.

Wednesday, November 16, 2016

GISS down just 0.01°C in October.

GISS is down from 0.90°C in September to 0.89°C in October. As with the similar small drop in TempLS mesh, the main news is that for the first time in a year, it didn't set a record for the month (because Oct 2015 was so warm). As many have remarked, it fits with the near certainty of a record hot 2016.

Other indices were down; NCEP/NCAR by 0.056°C. As often recently, polar variations played a big part; both poles were quite warm, and GISS and TempLS mesh respond to this. I expect NOAA and HADCRUT will show larger reductions.

I'll show the map comparisons below the fold. The updated comparison plots with 1998 are here.

Monday, November 14, 2016

Lorenz attractors, fluids, chaos and climate.

I have written two posts (here and here) on chaos, fluid dynamics and the Lorenz attractor. The context is a series of articles (eg here) which continue a commonly encountered belief that there is something basically wrong with climate models because they solve Navier-Stokes equations that are inherently chaotic, because of their non-linearity. Chaos is an inappropriately pejorative word here. Computational Fluid Dynamics (CFD) has been dealing with chaos (turbulence) since its beginning about fifty years ago. It is just part of the scene, and actually fits with what we want to know. Chaos is the inability to make solutions correspond to initial conditions. The initial information is lost, and does not recognisably affect the outcome. But that is a plus, because we usually weren't able to measure an initial state anyway. Often it is hard to even say what it would mean, as in flow past an aircraft.

The key to this is the existence of attractors. Trajectories wander, but not randomly. And it is the attractor we want to know about. Weather varies, but climate is the attractor. It is that which attracts people to Miami, not the weather forecast.

In my second post, in which I showed a WebGL device for generating and examining (in 3D) the famous Lorenz butterfly chaotic solution, I also showed a Wiki visualisation of an attractor. It was one of several there, but I am now convinced that it is wrong. The mathematical papers, from Lorenz on, refer to attractor surfaces. I found this out when trying to calculate attractors myself. In the process, I found out a lot more about the working of the Lorenz butterfly, and how the attraction works, without making everything just converge to one place. I'll describe that in another post. My purpose is to show that nonlinearity and "chaos" is no cause for despair; at least in this simple case, we can figure out everything we need to know.

The 1963 Lorenz paper gave the equations thus:
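In LaTeX form, the 1963 system is:

```latex
\begin{aligned}
\dot X &= \sigma\,(Y - X)\\
\dot Y &= X(\rho - Z) - Y\\
\dot Z &= XY - \beta Z
\end{aligned}
```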

He used σ=10, β=8/3, ρ=28.

Importantly, it is an autonomous (no t on RHS) first-order differential equation. This means that the state (X,Y,Z) at any point entirely determines the following trajectory. Trajectories can't cross, and if they stably follow a surface, then representative values will show it. Regions where trajectories are curved are important, and since the equation is polynomial (quadratic, with well-behaved higher derivatives), fast change of direction is possible only if the first derivatives are small. Zeroes are especially significant, and Lorenz shows two of them - C and C' at the centers of the wings. The third is the origin, which is significant not because it attracts trajectories, but because the z-axis is a trajectory leading to it, which does not stably attract, but is responsible for the transition between the wings.
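For the record, the fixed points can be written down explicitly: besides the origin, C and C' sit at (±√(β(ρ-1)), ±√(β(ρ-1)), ρ-1). A quick numerical check in plain Python:

```python
import math

sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0

def lorenz(state):
    """Right-hand side of the Lorenz system."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

# Fixed points: origin, and C, C' = (+-r, +-r, rho-1) with r = sqrt(beta*(rho-1))
r = math.sqrt(beta * (rho - 1))
for p in [(0.0, 0.0, 0.0), (r, r, rho - 1), (-r, -r, rho - 1)]:
    print(p, lorenz(p))   # derivatives vanish (to rounding) at each
```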

The wings are in fact logarithmic spirals, rather slowly evolving. So my plan for showing the attractor is to originate a set of trajectories across one period of this evolution. Because of the first-order properties, that means that those trajectories then should sweep out the whole spiral, and any trajectories attracted to the wings should be swept up with them. So I did that, starting near C. Since I only want the shape of the trajectories, not the time course, I varied the time steps to keep them advancing as a steady front. The result was:

Now it doesn't look so chaotic. I colored the 16 parallel trajectories near the spiral center (the smaller hole, on the right) with rainbow colors, black marking the red end. You can see that on that wing they evolve with that rainbow band. I have marked the origin with a red dot, and the z-axis with a red line. When the expanding band reached the z-axis, it behaves like a fluid stream meeting a wall. There is a stagnation point, and the lines separate. I chose the original band to ensure that it would not be split here; the trajectories eventually peel off and go into the other half-plane where they are attracted to the other wing. They smoothly merge with it, but at a big spread of points on the evolution of that spiral. So in timing, the trajectories are dramatically separated, but in shape, the surface behaves smoothly. You can see the blue trajectory came closest to the center, and had to wind around many times to emerge. I made the other trajectories wait. Then the whole process was repeated the other way, although no longer in rainbow order. No trajectory is periodic, but the attractor is.
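For readers who want to experiment, here is a minimal Python sketch of generating such a band of trajectories with a plain fixed-step RK4 (it omits the variable time-stepping used above to keep the front steady):

```python
import math

def lorenz(s, sigma=10.0, beta=8.0/3.0, rho=28.0):
    x, y, z = s
    return (sigma*(y-x), x*(rho-z)-y, x*y - beta*z)

def rk4_step(s, dt):
    """One classical fourth-order Runge-Kutta step."""
    def add(a, b, c):  # a + c*b, componentwise
        return tuple(ai + c*bi for ai, bi in zip(a, b))
    k1 = lorenz(s)
    k2 = lorenz(add(s, k1, dt/2))
    k3 = lorenz(add(s, k2, dt/2))
    k4 = lorenz(add(s, k3, dt))
    return tuple(si + dt/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

# 16 starts in a small ring around the wing centre C
r, z0 = math.sqrt(8.0/3.0 * 27), 27.0
band = [(r + 0.5*math.cos(a), r + 0.5*math.sin(a), z0)
        for a in [2*math.pi*i/16 for i in range(16)]]

trajectories = []
for s in band:
    path = [s]
    for _ in range(2000):          # integrate to t = 20
        s = rk4_step(s, 0.01)
        path.append(s)
    trajectories.append(path)

zmax = max(p[2] for path in trajectories for p in path)
print(zmax < 60)   # True: the whole band stays on the bounded attractor
```

Plotting these 16 paths (in R, Javascript or matplotlib) reproduces the sweeping-band picture above, minus the steady-front timing.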

That is the important weather/climate analogy. Weather, on times scales up to ENSO and even "pauses" etc, happens in an unpredictable time sequence. My band of trajectories is like an ensemble of GCM solutions. Looked at individually, they are a tangle, but together, they establish a pattern which is not (here) dependent on time, but does depend on the externally imposed parameter values.

Of course, we have a WebGL interactive version, below. It doesn't generate the solutions, but you can examine them from angles and restrict time subsets. I'll give details of using it below the jump.

Thursday, November 10, 2016

TempLS down 0.04°C in October; sea ice is low.

The October TempLS mesh index dropped a little, from 0.721°C in September to 0.679°C in October. This breaks a long run of records - it is the first month since April 2015 that has not been the hottest of that month in the whole record. October 2015 was the first really warm month of the El Nino. Canada and Mexico were late reporting this month, so I waited for that.

The TempLS grid index showed a greater drop, from 0.734°C to 0.617°C. As usual, this reflects disparity in polar conditions, which remained relatively warm with more cooling elsewhere. Indices like HADCRUT and NOAA may follow this trend. The modest fall in TempLS mesh was very similar to the NCEP/NCAR index, and also UAH lower troposphere.

The main map features are a cool band across Siberia, and warmth in the USA.

Other interesting things are happening. November in the NCEP/NCAR index has been exceptionally warm, up about 0.22°C on October at this early stage. Arctic warmth is reflected in the Arctic sea ice, which I mentioned in the NCEP/NCAR post. That remains below other years, but will eventually freeze. The Antarctic ice has been exceptionally low since mid October, which may be more significant, since the melting season is starting from that low base. Here is a section of the SH radial plot:

Friday, November 4, 2016

Brexit - a sequel

I don't normally post about politics, but I made an exception after the Brexit referendum in June. I wondered then how it would actually be brought to pass. I had optimistically thought that no PM would have the effrontery to try to take Britain out of the EC without parliamentary support - basically invoking Royal privilege. And I had hoped that any PM who tried to do so would lose a vote of confidence and have to resign.

I may have been too optimistic - perhaps the battles that were fought for supremacy of parliament really have been forgotten. However, this may be moot. The High Court has now intervened to rule that parliamentary approval to trigger Article 50 is required.

So if parliament was too spineless to insist on its prerogative, won't it just roll over and agree? I wonder. The thing is, they will have to approve an actual deal, which may be quite harmful to a lot of their constituents. And members will be rightly held responsible for that. Could be interesting. And there's still the Scotland thing.

Thursday, November 3, 2016

NCEP/NCAR October down 0.056°C - NH sea ice very low.

The Moyhu NCEP/NCAR index dropped a little from 0.475°C in September to 0.419°C in October, neatly reversing the surprising rise from August to Sept. The change mirrors a small drop in UAH V5.6. Last month indices went in various directions, so what this means for this month in other indices is unclear.

More spectacular is a big deviation in Arctic sea ice after mid-October, in both JAXA and NSIDC. Here is the recent part of the radial plot, where black is 2016, and colors are other recent years:
JAXA and NSIDC Arctic sea ice, October 2016

It is such a remarkable change that a fault might be suspected, but it is the same from both sources. The plot shows a period of about a month, centred on present. The number values are here. Arctic temperatures have been warm.

Monday, October 31, 2016

Climate and the Lorenz attractor, 3D interactive model.

In my previous post, I described some recent blog discussion of chaos and climate models (GCMs), and gave my views on the relation to Navier-Stokes solution and CFD. People who write about chaos tend to focus on the trajectories, and their touchy relations to starting point, claiming this undermines GCMs. They should instead focus on the attractors, which are independent of start point sensitivity, and are the analogue of climate. And I contend that attractors have a manageable relation (see Appendix for math) to parameters that may vary - forcings for climate, or coefficients in a chaos differential equation.

In this post, I'll focus on the Lorenz DE's

These have, for various parameters values, chaotic solutions with interesting trajectory paths often shown:

A trajectory for the standard Lorenz parameters σ=10, β=8/3, ρ=28, often displayed without mentioning that specific parameters are required.
An attractor (due to Anders Sandberg, Oxford); parameters not specified, but seems close to standard.

This post provides below a Javascript interactive display of the Lorenz system. You can choose parameters and start points. It is built on the WebGL of my standard Earth view, so you can rotate with the mouse as if it were a trackball, and also magnify or reduce by dragging vertically with the right button. There is also provision for viewing separate trajectories, and for running an animation of their evolution.

The general idea is that you can compare the effect of changing start points, with comparison red/blue trajectories, and also see the very great range of different attractors that result when the parameters are changed. However, the changes are continuous. What I'd like to get to eventually (future post) is a possible relation between an average of trajectories and the attractor. That would help understand how GCM runs can be averaged to get a climate evolution.
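The start-point sensitivity is easy to verify numerically: two starts differing by 10⁻⁶ end up an attractor-diameter apart, while both stay on the same bounded set. A sketch with a simple Euler integrator (crude, but adequate for this qualitative point):

```python
import math

def lorenz(s, sigma=10.0, beta=8.0/3.0, rho=28.0):
    x, y, z = s
    return (sigma*(y-x), x*(rho-z)-y, x*y - beta*z)

def euler(s, dt, n):
    """Forward-Euler integration, returning the whole path."""
    out = [s]
    for _ in range(n):
        d = lorenz(s)
        s = tuple(si + dt*di for si, di in zip(s, d))
        out.append(s)
    return out

dt, n = 0.0005, 40000            # integrate to t = 20
a = euler((1.0, 1.0, 20.0), dt, n)
b = euler((1.0, 1.0, 20.0 + 1e-6), dt, n)

sep = [math.dist(p, q) for p, q in zip(a, b)]
print(sep[0], max(sep))  # tiny at the start, attractor-sized later
```

Both red and blue paths trace out the same butterfly; only their timing along it diverges, which is the trajectory/attractor distinction in miniature.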

Sunday, October 30, 2016

Chaos, CFD and GCMs.

There has been a flurry of skeptic blogging (and commentary from me) on chaos and climate models. It's generally along the lines that chaos renders GCMs unworkable because of small changes magnifying or some such, with words like coupled and non-linear. Kip Hansen has a series at WUWT, finishing here. Like many such, it shows the Lorenz trajectories produced by a set of three slightly non-linear equations. I'll develop that with a gadget to explore these curves and their attractor in a future post. Tomas Milanovic has one of an intermittent series of posts (latest "Determinism and predictability") at Climate Etc, of which the general theme is the unsolvability of Navier-Stokes equations due to some effect of non-linearity negating proof of existence and uniqueness, or some such.

My standard response to all this is, look at Computational Fluid Dynamics (CFD, which has been my professional activity for the last thirty years). It is a major established engineering tool based on numerically solving the Navier Stokes equations, and has dealt with the chaos (turbulence) from the beginning. And the climate models are just large scale CFD. There are certainly difficulties with the solution, mainly to do with the necessary sub-grid modelling (in both CFD and GCMs). But they aren't to do with the fact that the solutions don't relate to initial conditions. In fact, that is a benefit, since initial conditions are hardly ever known accurately.

And the theoretical issues of existence and uniqueness etc don't impinge on practice. Algorithms are used which generate solutions on a gridded or meshed space with time stepping. These solutions satisfy on that scale the conservation laws of momentum, mass and energy, which are also expressed by the N-S equations. If you find such a solution, it doesn't matter whether its existence could be proved in advance. As for uniqueness, the solution procedure itself will generally indicate whether different solution pathways are possible. One CFD scientist, David Young, has been objecting that some recent work, in which he has a part, does show non-unique solutions. But as far as I see, this is in situations like near-stall on a wing, where reality itself is far from predictable.

The CE post had an odd answer to this - yes, CFD works, but only on a scale of up to a few metres. This is of course unphysical - there is no such restriction on the physical laws, nor in the discretised algorithms is any physical scale limitation built in. And of course, GCM's are just Numerical Weather Prediction (NWP) programs, run for longer periods. Most sensible people concede that these work quite well, despite the many km scale.

What people who like to show fancy chaos pictures rarely dwell on is the nature of attractors. These are what distinguish chaos from randomness. And they are typically the results that are sought from CFD analysis. In CFD, initial conditions are usually just a nuisance (because you rarely have good data, and when you try and specify them, there is usually something that will generate unintended disturbances). The standard remedy is to run the program for a while to let these settle out. This takes advantage of the fact that initial conditions are swept away in chaos. GCM's do the same. They typically "wind back" to start at some time well before the period of interest. This would be bad if initial conditions mattered, because data back then is less reliable. But it isn't bad, because they don't. Again, it is better to let artefacts settle before the solutions are needed.
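The settling-out tactic can be seen even in the Lorenz toy system: long-run statistics taken after discarding a spin-up period come out nearly the same for very different starting points. A Python sketch:

```python
def lorenz(s, sigma=10.0, beta=8.0/3.0, rho=28.0):
    x, y, z = s
    return (sigma*(y-x), x*(rho-z)-y, x*y - beta*z)

def mean_z(start, dt=0.001, spinup=20000, nsteps=200000):
    """Time-average of Z over the attractor, after discarding
    a spin-up transient (forward-Euler integration)."""
    s = start
    for _ in range(spinup):               # settle onto the attractor
        s = tuple(si + dt*di for si, di in zip(s, lorenz(s)))
    total = 0.0
    for _ in range(nsteps):
        s = tuple(si + dt*di for si, di in zip(s, lorenz(s)))
        total += s[2]
    return total / nsteps

m1 = mean_z((1.0, 2.0, 3.0))
m2 = mean_z((-8.0, 7.0, 35.0))
print(m1, m2)   # close, despite very different initial conditions
```

The attractor statistic (here, mean Z) is the climate-like quantity; the discarded transient is the weather-like part that depended on where you started.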

This lack of concern with initial conditions in a search for attractors relates to the frequent criticism of GCMs as predictors. GCM's find out about climate (the attractor), but don't predict the trajectories that converge to it (weather). That relates to the initial condition issue - models can only generate trajectories that are possible in the circumstances, not ones that will reproducibly happen.

When trying to explain why GCMs do really work, and attractors are the key, I often post this GFDL video of modelled ocean SST over seasons. I say that it shows many transient effects, from various eddies to longer term events like ENSO. None of these are predictions for Earth. The actual eddies won't happen, nor will the ENSO events, at least not at the stated times. But this solution, which just came from specifying bottom topography and various long term forcings (energy input), comes up with familiar patterns like the Gulf Stream and other major ocean currents. The wiggles vary, but the current is there. There is underlying physics which determines the transfer of heat from the Caribbean to the North Atlantic. And GCMs can tell you how that effect of physics will relate to changes in forcing. Anyway, here is the video:

In my next post, I'll develop the notion of an attractor using the simple Lorenz system of differential equations. These show two important things. Trajectories follow a path with a pattern that is, after some convergence from the initial point, similar for all cases. This is the attractor, and in contrast to the hypersensitive dependence on initial conditions, the dependence of that trajectory on the three parameters of the system is gradual, although over the full range allows many very different shapes. To do this, I'll show a Javascript/WebGL gadget that allows you to vary initial conditions and parameters, and visualise the trajectories in 3D.

Tuesday, October 18, 2016

GISS down 0.06°C in September

GISS is down from 0.97°C in August to 0.91°C in September. This compares with a larger fall of 0.12° in TempLS mesh, and contrasts with the small rise in the NCEP/NCAR index. It is still the warmest September in the record (just ahead of 0.90°C in 2014). It really hasn't cooled since May, and a record hot 2016 is ever more likely.

As I mentioned in the TempLS post, the dominant effect on recent changes is Antarctica. TempLS rose strongly in Aug, and dropped in Sep; GISS responded in the same way, but to about half the extent. I expect NOAA and HADCRUT to be less affected again.

I'll show the map comparisons below the fold. The updated comparison plots with 1998 are here.

Friday, October 7, 2016

TempLS Surface temperature down 0.12°C in September

The Moyhu TempLS mesh index fell in September to 0.736°C, down from 0.855°C in August. That still makes it the hottest September in the record. TempLS grid fell 0.04°C, from 0.785°C to 0.744°C. The recent ups and downs mainly relate to Antarctica, which was very cold in July, very warm (relatively) in August, and about normal (on average) in September. TempLS mesh followed this closely, while TempLS grid regards a lot of Antarctica as missing values, and so downweights the changes. This is reflected in the other indices - GISS and BEST rose like TempLS mesh, while NOAA (0.05°C) and HADCRUT changed much less. I would expect GISS to also drop this month, but NOAA and HADCRUT maybe not.

In terms of regions, there isn't much unusual outside Antarctica. Siberia, Europe, E US and Alaska fairly warm, with just Australia on the cold side. I can vouch for that, though we were on the fringe of the cold region shown. With other indices, UAH lower trop was steady, while RSS rose by about 0.1. All seem set for a record warm 2016.

On housekeeping, Google says they are looking into the blogroll issues - still out.

The map is below the jump; report at the data page here.

Monday, October 3, 2016

Reanalysis index up 0.047°C in September

The Moyhu NCEP/NCAR index rose in September to 0.475°C, up from 0.428 in August. This brings it back to about the level of May. There was then a drop to June, followed by a gradual increase to now. This seems to be associated with ENSO-neutral conditions. And as usual recently, it was the hottest month of its kind in the record. Next month will test this trend of records, since Oct 2015 was very warm.

On other matters, I apologise for the absence of blogroll, search etc. Apparently Google Blogger has recalled them for repair. I'm told they should reappear soon.

Saturday, September 24, 2016

Twelve coin problem

Update: New constructive algorithm appended.

Things are a bit quiet in climate blogging - so with a weekend coming I'll honor my ancient promise of diverting to some recreational math. I saw a few weeks ago a mention at Lucia's of the old twelve coin problem. Twelve coins, one of which is fake and of different weight to the others; the fake is to be found with three weighings on a balance (Update: the usual spec, as here, is that you also have to say whether it is heavier or lighter). This first came to prominence in WWII, when it was said to have become a distraction to the war effort in places like Bletchley Park, leading to suspicions that it had been planted by the Germans. A suggested counter was for the RAF to drop it on Germany.

I first encountered it when I started University; it was posed in circumstances where I was expected to be able to solve it, and I was embarrassed. A year later, I went to a lecture on information theory (newish in those days). I was struck by the proposition, helpful in later years, that the information in a result was a function of prior uncertainty. So that is the clue - each weighing should be arranged so each of 3 outcomes was as near equally probable as could be managed, maximising prior uncertainty. Then I could solve it easily, and also versions with more coins.

The basic constraint is that in N weighings there are 3^N possible outcomes, while with n coins, there are 2n situations to resolve, since only one coin is false, and could be heavy or light. So for 12 coins, there are 24 possibilities and 27 outcomes of 3 weighings, so it is tight but possible. An alternative to the equiprobable outcome method is a requirement that each weighing should be arranged so that each outcome is resolvable with the remaining weighings.
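That counting argument is easy to put in code. A sketch (my own, not part of the gadget):

```python
# The counting argument in code: N weighings give 3^N distinguishable
# outcomes; n coins give 2n possibilities (each coin, heavy or light).
def enough_outcomes(n_coins, n_weighings):
    return 2 * n_coins <= 3 ** n_weighings

feasible_12 = enough_outcomes(12, 3)   # 24 <= 27: tight but possible
feasible_14 = enough_outcomes(14, 3)   # 28 > 27: impossible
```

Note that 13 coins also pass the count (26 ≤ 27), though, as the proof section notes, that worst case needs an external known-good coin.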

When I saw it mentioned again, I started thinking of a constructive algorithm that would also prove feasibility for all cases where there were enough results to theoretically resolve. I had an idea of describing the proof, but then my other recent hobby of Javascript programming seemed it might help. So I've made an interactive version (below) with the information needed to solve.

You can choose a number of coins up to 121, and then start (or restart any time). There are boxes for left, right and off-scale. The buttons representing coins have colored bands top and maybe bottom. The top band is what I call the HL score. Initially each coin might be the bad one, heavy or light (HL). But once there has been a weighing that tilted, some possibilities fail. On the side that dropped, the coins could no longer be light, hence H. On the other side, they are L. And any not on the balance must be good (G). So the total number of possibilities remaining is the HL score = 2*HL+H+L.

There is another more subtle score - DU. This applies to the coins on the balance before weighing, and relates to whether you will gain information if the left balance goes down or up. HL coins will always become either H or L, so they are not scored. But if the left pan goes down, then L on left or H on right would then be known to be good, so they are marked as D; if the left pan went up, that would tell nothing new about D coins. And similarly H on left or L on right would be counted as U.
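For concreteness, here is a rough Python sketch (my own notation, not the gadget's Javascript) of how the coin states and the HL score update after one weighing:

```python
# States: "HL" (could be heavy or light), "H" (heavy only),
# "L" (light only), "G" (known good).
def update(states, left, right, result):
    """result is 'down' (left pan sank), 'up', or 'balance'."""
    new = dict(states)
    if result == "balance":
        for c in left + right:        # everything weighed must be good
            new[c] = "G"
    else:
        heavy, light = (left, right) if result == "down" else (right, left)
        for c in heavy:               # the dropped side cannot hold a light fake
            new[c] = "H" if states[c] in ("HL", "H") else "G"
        for c in light:               # the rising side cannot hold a heavy fake
            new[c] = "L" if states[c] in ("HL", "L") else "G"
        for c in states:              # coins off a tilted balance must be good
            if c not in left and c not in right:
                new[c] = "G"
    return new

def hl_score(states):
    # possibilities remaining: 2 per HL coin, 1 per H or L coin
    return sum({"HL": 2, "H": 1, "L": 1, "G": 0}[s] for s in states.values())
```

With 12 HL coins and a first weighing of 4 against 4 that tilts, the score drops from 24 to 8, as in the image below.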

You can move coins to the right (cyclic) by mouse click, or left by click with shift key pressed. If they are H or L, you will see the lower DU bar change as they move. When a weighing has been set up, click the weigh button.

The color scheme for top bars is HL black; H orange; L blue and G yellow. For the bottom, only H and L coins on the balance have color, and it's red for D, green for U. Here is an image of the gadget immediately following a first weighing of 12 coins. Note the balance tilt. The four coins on the left are now H because they are on the heavy side, and U because they are H and left. On the right, they are L similarly, and also U. The off balance coins are all G, yellow.

You'll see a table with the HL counts for on and off, and the D and U counts. Three are in large fonts, and those are the ones needed for solution. The strategy is that for every weighing, the D and U should be as near equal as possible, and the off HL score should be about half the on. More critically, each should be ≤ 3^M, where M is the number of weighings to follow after the one being set up. So in the image, for the next weighing, these numbers should all be ≤ 3. So take 3 off, then swap the others until the D and U scores are each ≤ 3. You'll see that as you rearrange, the system pads with known good coins (if available) to retain balance, and removes surplus. You can try to minimise usage of dummies; you should be able to reduce the need to at most two. If known good coins aren't available (first weighing), don't worry; the system will imagine them present.

So here is the gadget. Just press start, then move coins for weighing with click and shift-click, bearing in mind the above strategy. As long as the three big numbers are ≤ 3^M, you can solve in minimum weighings. It is solved when there is just one coin with H or L color showing.

I have to say, it is now a bit mechanical. You don't need to know the coin numbers, or even what the colors mean, as long as you keep your eye on the scores. But the Javascript isn't solving it - it's just adding up what you could count yourself, but displayed in a helpful way.

Update I have changed the DU score to count 1 each for HL (black) coins. This is more consistent when there are such coins. It means that all 3 large font numbers should now be made as nearly as possible equal and ≤ 3^M.

Update - proof

I originally planned this post as a constructive proof - show a method that is sure to work. I think the scoring goes a long way there. But a method can be spelt out. Suppose we have a set of coins with HL score g ≤ 3^(N+1), to resolve in N+1 weighings. It's enough to show that a weighing can be made to reduce this to an assured score h ≤ 3^N. And the point of the big font numbers in the gadget is that they are the respective HL scores after each possible outcome.

Let g = 2*p + q, where 3^N ≥ p ≥ q ≥ 0. Normally p = 3^N is OK, but if that leaves q < 0, you can reduce p until q is non-negative. Then take a set of coins, HL score q (including all known good ones), to be kept off the balance. If all coins have HL score 2 (as at the start), that's still possible, because g and hence q must be even.

Then put the remaining coins one by one on the balance. Depending on what side, the D and U scores will increment by 1 or zero. Put all the H coins on first, alternating sides so the gap between D and U is never more than 1. Then the L, also so as to increase whichever of D,U is less. That will also alternate, so there will be a max imbalance of coin numbers of 2. Then finally, if there are HL coins, put them on whatever side improves coin number balance; each will increase both D and U by 1.
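As a sketch (in Python rather than the gadget's Javascript, with my own naming), the loading rule above - H coins first, then L, then HL, always topping up whichever score is behind - might look like:

```python
# D counts coins resolved if the left pan drops (L on left, H on right);
# U counts the reverse (H on left, L on right). HL coins raise both.
def load_balance(h_coins, l_coins, hl_coins):
    left, right, D, U = [], [], 0, 0
    for c in h_coins:                 # H: left raises U, right raises D
        if U <= D: left.append(c); U += 1
        else:      right.append(c); D += 1
    for c in l_coins:                 # L: left raises D, right raises U
        if D <= U: left.append(c); D += 1
        else:      right.append(c); U += 1
    for c in hl_coins:                # HL: put on the emptier pan
        (left if len(left) <= len(right) else right).append(c)
        D += 1; U += 1
    return left, right, D, U
```

This greedy version guarantees |D − U| ≤ 1 and a pan-size gap of at most two, matching the bounds claimed in the proof.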

This ensures D+1 ≥ U ≥ D. And since D+U is the HL score of the loaded coins, when loaded, D = U = p ≤ 3^N. Since D, U and q are the respective HL scores after each possible balance result, and each ≤ 3^N, that completes the proof.

Numbers of coins on each pan may differ by up to two. After the first weighing, this can be balanced by adding known good coins. On the first weighing, if g < 3^(N+1) - 1, you can always choose p even, so the coins can split equally. In the worst case where g = 3^(N+1) - 1 (eg 13 coins in 3 weighings) an external known good coin is needed.

Update: New constructive algorithm.

I thought of a simple constructive algorithm, with notation. Whenever a coin is on a tipped balance, you have half-knowledge about it. If it is on the down side, you know it can't be light, so I call it an H coin. On the other side, L. A known good coin I'll call G; one not yet half-known, U.

I'll call an odd trio one that is HHL or HLL. An odd trio is the most that can be resolved in one weighing. You weigh an HL pair against a GG pair. If the HL side tips up, the L is the culprit; if down, the H. If balanced, you know the coin taken off is the bad one, and you know whether it is H or L.
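Here is a sketch of the odd-trio resolution with a simulated balance (names and structure are my own, not the gadget's):

```python
# trio: dict coin -> "H" or "L" (an HHL or HLL trio).
# weigh(left, right) returns "down", "up" or "balance" for the left pan.
def resolve_odd_trio(trio, weigh):
    hs = [c for c, s in trio.items() if s == "H"]
    ls = [c for c, s in trio.items() if s == "L"]
    on = [hs[0], ls[0]]                     # one H and one L go on the pan
    off = hs[1] if len(hs) == 2 else ls[1]  # the third coin stays off
    r = weigh(on, ["good1", "good2"])       # against two known-good coins
    if r == "down":
        return hs[0], "heavy"               # only the H coin can sink the pan
    if r == "up":
        return ls[0], "light"               # only the L coin can lift it
    return off, "heavy" if trio[off] == "H" else "light"

def make_weigh(fake, kind):
    """A balance that knows which coin is fake, and whether heavy or light."""
    def weigh(left, right):
        if fake in left:
            return "down" if kind == "heavy" else "up"
        if fake in right:
            return "up" if kind == "heavy" else "down"
        return "balance"
    return weigh
```

Whichever of the three coins is fake, the single weighing identifies it and its weight.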

Any half-known duo can be resolved by just balancing one coin against a G. So at the end of the second weighing, we must have nothing harder than an odd trio.

So start with 8 on the scale, 4 off. This makes the 3 outcomes equally likely. If =, the 8 are G. Put 3 of the 4 on the scales, with one G. If = again, there is just one coin left, and it can be weighed against a G. Otherwise, we have an odd trio.

If the first weighing tipped, remove an HHL trio, leave an HLL in place, and interchange the remaining HL. Add a G to even up coin numbers for the next weighing. If =, the odd trio HHL taken off has to be resolved. If it tips the same way, the HLL has it. If it tips the other way, the HL needs to be resolved.

In this table, each cell contains 3 groups. The first is the coins set aside, the other two are those placed on the scales. It stops after the third weighing, because all outcomes can then be resolved as an odd trio or easier.


Tuesday, September 20, 2016

CRUTEM (HADCRUT) versions are documented and accessible

I have encountered at WUWT ongoing complaints about HADCRUT 4 updates. It is currently in a thread here, but goes back to an earlier post here. The complaints typically say that the new versions always raise current anomalies, and suggest that they are poorly documented. In fact, the changes are extensively noted; see directory here.

In the earlier thread Tim Osborn commented here, to say mainly that the changes were due to changes (mainly additions) to station data, and listed the particular additions to HADCRUT 4.3. He also later made the important point that there is a good reason why the trends rise with new data. HADCRUT is a land/ocean set, but the empty cells are mainly on land, and the new data allows some of them to be filled. HADCRUT is an average by grid (area-weighted), in which cells without data are simply not included. That has the effect of assigning to them the global average, which is dominated by sea temp. If new stations assign to empty cells genuine land values, that will increase the trend, because land is warming more rapidly. HADCRUT had artificially low trends because of this missing value policy, as was remedied by Cowtan and Way (2013) - discussed here in a series of posts, with links here.
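A toy numerical illustration of that missing-cell effect (numbers invented purely for illustration):

```python
# Averaging only the cells with data implicitly assigns the global
# (sea-dominated) mean to empty cells; filling empty land cells with real,
# faster-warming land values raises the average.
sea = [0.4] * 7                       # anomalies in 7 ocean cells
land = [0.9] * 3                      # anomalies in 3 land cells

sparse = sum(sea + land[:1]) / 8      # only one land cell has data
full = sum(sea + land) / 10           # new stations fill the other two
```

Here the average rises from 0.4625 to 0.55 once the formerly empty land cells get real data - no adjustment to any station, just better coverage.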

But another feature of HADCRUT transparency is insufficiently appreciated. For Ver 4, at least, they give a complete listing of station data for each version, with each station file documented. Here is a typical version file; it is for 4.4, but just change that URL to 4.2 or whatever you want. Each links to a zip file of the station data for that version (except for Poland). I'm spelling out the URL rather than linking directly, because if you click on it, it will immediately download about 18Mb. But again, you can edit it for other versions.

I couldn't find, though, inventory files, except for 4.5. But it's easy enough to make them from the file headers. So I've done that, and placed the zipfile here (612 Kb). It has a csv file for each of 4.2-4.5, and the columns have 3 letter abbreviations meaning:
  • num - a unique HAD station number
  • nam - name of station
  • cou - country name
  • lat - latitude in deg
  • lon - longitude in deg
  • alt - altitude in m
  • sta - start year of data
  • end - end year
  • sou - source id code
UEA has an explanations file here, which is the best source I have found for the source id codes, but unfortunately it dates from 2012, and there is a new one pretty much with each new dataset that has come in. I'd be glad to hear of something more recent. It isn't really a problem, because the later numbers are in order of addition, so are easy to work out. Note that the files are in number order, but countries are not necessarily consecutive blocks.
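For anyone wanting to use the inventory files, here is a minimal Python sketch of reading them (the sample row is invented; real files follow the columns listed above):

```python
import csv, io

# Columns follow the three-letter abbreviations in the post:
# num, nam, cou, lat, lon, alt, sta, end, sou.
sample = """num,nam,cou,lat,lon,alt,sta,end,sou
10001,EXAMPLE STATION,AUSTRALIA,-36.5,146.0,170,1910,2016,1
"""
stations = list(csv.DictReader(io.StringIO(sample)))
# e.g. pick out stations with at least a century of data
long_records = [s["nam"] for s in stations
                if int(s["end"]) - int(s["sta"]) >= 100]
```

For the real files, replace the string buffer with `open("HadCRUT42_inventory.csv")` or similar (filename hypothetical).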

So I thought I would just post this information, so that people who really want to know what HADCRUT is up to can look it up. I may in future produce a Google map.

Friday, September 16, 2016

Arctic ice freezing, Antarctic melting

As many have now noted, Arctic sea ice stopped melting rather abruptly after 6 September, and has lately been freezing quite rapidly. The pattern is quite similar to last year, but a few days earlier. In both JAXA and NSIDC the minimum was lower than 2007, but higher than 2012, so in second place (but in NSIDC, only just).

Meanwhile, more surprisingly, Antarctic ice has been melting strongly for six days, and now stands below the other years of this century at least, for this time. From the radial plot here, I'll show the NSIDC plots for recent days. The plot spans most of September.


The red curve is 2002. Orientation same as NH.

Thursday, September 15, 2016

Thermodynamics of climate feedback

I have been describing and responding to blog arguments about climate feedback and circuit analogies here and here. The arguments have continued, and they do provoke ideas. I'm going to write some down in this post.

The usual circuit analogy has surface temperature as voltage, and TOA flux as current. I showed in the first post that the feedback, including Planck, could be regarded as conductances. It's interesting to probe what this might mean. The units are watts/m²/K, which are actually the units of entropy/s/m². Does entropy make sense?

I wrote about entropy and atmospheric fluxes here and here. Sunlight (Q=240 W/m² global average after albedo) arrives, does things, loses the capacity to do work, and eventually leaves as thermal IR. It has accumulated entropy, or if you prefer, lost free energy. You might think that with a heat sink at 3K (space) the heat could go on doing work. But in fact you need a minimum temperature to radiate that flux to space, which for Earth is about 255K (note 1). That constitutes a resistance. To get Q=240 W/m² to flow to space, you need 255K (voltage).

In our system, that resistance, inverted, is the Planck conductance, or feedback. It represents the entropy flux to space. It's really the maximum or optimal entropy flux for 240 W/m². In fact, emission to space comes from a rather large component at about 225K, from GHGs, and some from the surface at average 288K (atmospheric window). We know uniform blackbody emission exports most entropy for a given flux, because any variation means that more entropy could be generated by transporting heat from the hotter parts to cooler.

This lies behind a supposed failing proclaimed in a WUWT post by Lord Monckton. The Planck feedback calculated for the Earth at 255K, the temperature for uniform BB emission of 240 W/m², is 3.75 W/m²/K, and Lord M thinks they erred by not using it. But its inadequacy has been long known, and I wrote in the previous post how Soden and Held (among others) did a thorough study with GCMs to get a value of about 3.2 W/m²/K. The difference is usually attributed to absorption in the atmosphere, but thermo gives an alternative viewpoint, which I find more useful. It is the entropy export reduced by the non-uniformity (sub-optimality) of apparent emission temperature.

Tuesday, September 13, 2016

GISS up 0.13°C in August

GISS is up from 0.85°C in July to 0.98°C in August. This compares with a larger rise of 0.21° in TempLS (but GISS rose in July when TempLS dropped slightly, so over two months, about the same), and is much more than the small rise in the NCEP/NCAR index. It is also the warmest August in the record (next was 0.79°C in 2011).

I'll show the map comparisons below the fold. The updated comparison plots with 1998 are here.

Sunday, September 11, 2016

Unicode gadget

I'm planning a new page which has gadgets that I use for blog writing etc. The first entry is likely to be a Unicode writer; here is a draft you might like to try. Unicode is the massive collection of special characters which you can probably access using your system character generator - on Windows it is a Windows Accessory called Character Map.

Unicode chars come in groupings; a good listing is here. Each char has a number and you can render them in HTML using the scheme &#2022; for character 2022 (in decimal). But they are widely rendered and usable as characters, in browsers and editors. The first 128 chars are just ASCII. A particular aspect of usability is that you can use them in blog comments where most fancy HTML is not allowed.

I find them very useful for maths, chemical formulæ etc. You can use them where latex is unavailable, and there is much less overhead than latex. Here are some examples:
∫₀¹uⁱ⁻¹(1-u)ⁿ⁻ⁱ⁻¹du = β(i,n-i) = Γ(i)Γ(n-i)/Γ(n)
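As a side note, the decimal entity scheme is easy to script. Here is a small Python sketch (my own helper, not the gadget) for round-tripping between characters and &#NNNN; entities:

```python
import html

# Convert non-ASCII characters to decimal &#NNNN; entities, and back.
def to_entities(s):
    return "".join("&#%d;" % ord(c) if ord(c) > 127 else c for c in s)

ent = to_entities("Γ(i)")     # Greek capital gamma is code point 915
back = html.unescape(ent)     # back to the original string
```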

You can cut and paste from a char map as in Windows, or write the long form HTML as above. But that is tedious, as they come in lists of thousands, many for all the various language scripts. So I've collected a manageable table which I think has most that I'll ever need, and added an editable phrase generator, below the jump.

Thursday, September 8, 2016

Big rise (0.21°C) in surface temperature in August

Surprising, but TempLS mesh is so far an outlier here. The TempLS mesh global anomaly rose from 0.65°C in July to 0.86°C in August (base 1961-90). This is a change after a period of slow decline, and is almost back to the level of last January. TempLS grid showed a much smaller rise of 0.043°C. These results are consistent with the NCEP/NCAR index (up 0.02°C). The satellite measures varied; UAH6 LT was up 0.05°C, but RSS down just 0.01°C. The reason for the discrepancy seems to be the big variation in Antarctica, which is variably seen and weighted.

The spherical harmonics map is here:

The regional temperature variations are similar to those in the NCEP/NCAR report, though the warmth in Russia is more pronounced. In the breakdown plot, you can see the big contribution from the change in Antarctica. SST also rose significantly.

On this basis, I would expect a rise of at least 0.1°C in GISS, with maybe smaller rises in NOAA and HADCRUT (less affected by Antarctica). In other news JAXA sea ice extent had a late melt rush and dropped below 2007 to be second only to 2012. It won't catch 2012, and the main marker remaining is whether it will drop below 4 million sq km. It is 4.023 now. NSIDC is also second place.

Tuesday, September 6, 2016

More on climate feedback

Recently I posted on Climate feedbacks and circuits. It was in the context of a series of articles by Lord Monckton at WUWT. The articles have continued, with more promised. Willis has joined in, though they are not always on the same page. And I've been commenting there.

My last post was in response to much talk of positive feedbacks and instability. I showed an active circuit (borrowed from Bernie Hutchins) which was stable and would emulate the usual equation of feedback in the context of climate sensitivity:
ΔTeq = ΔF/(1/λ0 − Σci)
My main observation is that this is just like Ohm's Law, with the feedback coefficients c as conductances in parallel. The need to resort to active feedback comes from the fact that some c's, corresponding to positive feedback, may be negative. But the basic simplifying idea of adding conductances still applies.
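The equation translates directly into code. A sketch, with λ0 = 0.31 K/(W/m²) as the Planck value and purely illustrative forcing and feedback numbers:

```python
# Equilibrium temperature change: dT = dF / (1/lambda0 - sum(ci)).
# lambda0 is the Planck sensitivity; positive feedbacks ci shrink the
# denominator (conductances in parallel) and so raise the response.
def delta_T_eq(dF, lambda0=0.31, feedbacks=()):
    return dF / (1.0 / lambda0 - sum(feedbacks))

planck_only = delta_T_eq(3.7)                        # ~1.1 K for 3.7 W/m²
with_feedbacks = delta_T_eq(3.7, feedbacks=[1.8])    # feedbacks raise it
```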

In a much-cited review paper, Roe 2009, Roe argues that feedbacks should be seen as Taylor series in disguise. I think he could have said just chain rule, since he uses only first order. But I want to develop this, because it is the basis for the method used in an also much-cited (including AR4) paper by Soden and Held, 2006. Lord M's latest post is based on the claim that S&H have made an error, but he really has no idea of what they did. So I'll try to say something about that here.

Saturday, September 3, 2016

NCEP/NCAR global up slightly

For the second month running, the Moyhu NCEP/NCAR index rose slightly, from 0.414°C to 0.428°C. Temperatures were warm mid-month, but dipped at start and finish. As usual, the reanalysis shows local variation in Antarctica, where data is patchy, but overall it seems warmth predominates. Otherwise few major features - East Europe was warm. There seems to be a "pause" in cooling, which increases the likelihood of record heat in 2016.

In other news, there has been a late spurt in JAXA sea ice melt, which seems likely to take the 2016 minimum past 2007, exceeded then only by 2012. NSIDC says similar.

And OT, but the Blogger platform has been a bit shaky lately. I think the only thing affecting users at the moment is that on Chrome the blogroll is updated but not always  kept in time order. Firefox seems OK. I'm reluctant to intervene, since I don't have a good javascript diagnostic tool on Chrome.

Tuesday, August 30, 2016

Climate feedbacks and circuits

I've been arguing again at WUWT. Lord Monckton has been writing a series of articles (see also here, here and here), with more promised, on feedbacks and what he calls the "official equation". Mentioning feedbacks brings out all the engineers talking about feedback and instability. In the course of arguing, I think I see some general confusions arising from inadequate specifications of what circuitry is envisaged, and other general unsoundness. So I thought I could try to clear that up here, and in the process show an actual circuit which would implement the "official equation".

The first thing I try to emphasise here is that climate is not a circuit, and feedbacks are not used in GCMs. They are not the basis of climate science; in fact climate scientists talk about them far less than people imagine. Feedbacks are diagnostic tools - inferred from model output (or climate data) to help understanding. And you are free to imagine any kind of circuit or other apparatus that you think helps. That is a starting point - people are not always imagining the same thing, but they use the same vocabulary.

The basic concept here is climate sensitivity (CS). If you add a warming heat flux, usually from greenhouse effect, how much will the temperature rise? To make this more definite, it is often expressed as equilibrium CS (ECS). If you add a flux and then keep it constant, how much will temperature have risen by the time it has settled to steady state?

A starting point is what can be called Planck sensitivity. We know from the Stefan-Boltzmann law that a warmer planet will radiate more heat, and that will give a relation between flux and temperature change. For a black body, flux F = σT⁴ (S-B, with σ as the S-B constant). I propose to make F analogous to current and T to voltage, so this gives CS = dT/dF = 1/(4σT³) K/(W/m²). This gives it the units of resistance (analogy). With T = 255 K, the effective radiating temperature of Earth, that would be 0.26. Earth is not a simple body - the atmosphere has an effect, and GCMs say that the right figure is about 0.31 (Soden and Held, 2006).
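The arithmetic can be checked directly (a sketch; σ is the standard S-B constant):

```python
# Planck sensitivity: dT/dF = 1/(4 σ T³) at the effective radiating temperature.
SIGMA = 5.670374419e-8        # Stefan-Boltzmann constant, W/m²/K⁴
T = 255.0                     # effective radiating temperature of Earth, K
planck_sens = 1.0 / (4 * SIGMA * T ** 3)   # K/(W/m²); about 0.266
```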

But ECS is generally reckoned to be a lot higher, because of the effects of positive feedbacks, especially water vapor feedback. Discussions of these are in S&H just cited, or Roe 2009. So then come the claims that positive feedback is necessarily unstable. It isn't, because it in effect adds to the negative Planck feedback. But if it outweighed that (and any other negative feedbacks) then it would be. That is the basis for talk of tipping points and thermal runaway.

Arctic Sea Ice - JAXA is back, with more melting

JAXA is an index of Sea Ice extent which is preferred by many, because the observing platform has good resolution and has recently been mostly more reliable than, say, NSIDC. It's the one I present first here. But for about a week it was not reporting, and this was at a late stage of melt and during an Arctic cyclone.

When it suspended, melting had been sluggish, and 2016 was falling behind 2012, 2007 and 2015. However, during the break melting was higher, and 2016 is now in clear second place, behind 2012. It won't catch 2012, and will struggle to stay with 2007, where melting lasted well into September. However, it is ahead, and is quite likely to stay ahead of 2015, finishing at least third in recent years. Neven's blog (and forum) is the place to stay in touch.

Sunday, August 21, 2016

Progress toward a record hot 2016

Most global temperature indices are now out for July. The latest is NOAA, which fell slightly from 0.902°C to 0.872°C. This was in contrast to most other indices which rose a little, or at least stayed steady. However, NOAA had risen in the previous month, when many indices went down.

Even so, NOAA, again like most surface indices, was still the hottest July in the record, and indeed, at least 12 of the most recent months were the hottest anomaly to date. So the chances of a record 2016 are high, and as I have previously done, I want to show graphically how the rest of the year must fare to make that happen. In the past, I have sought to present the progress to the end year average as a race. But I think more information is conveyed by the type of graph that Sou makes for GISS. This shows the progress during the year of the average to date, compared with other warm years. It has a characteristic that the initial warmth may tail off toward the end, which makes 2016 a little hard to predict. I've supplemented it with extrapolation (faint) assuming that the rest of the year continues as warm as the most recent month. I have made plots for the same set as in the regularly updated plots of comparison of 2016 with 1998. So here it is; you can use the arrows at the top to cycle between plots:

For the meaning of the headings, see the glossary here. All the surface measures, land/ocean, land and SST, are clearly projecting a record if the most recent month temperatures are maintained, with a good deal in reserve. The two lower troposphere indices are projecting a record (ahead of 1998), by a very small margin. There was an end year dip in 1998; if this happens in 2016, it may fall just short of 1998.

Housekeeping - where to note data glitches.

On another subject, while I was away recently, some of the regular data streams failed (for which Walter Dnes helped out, thanks). This may sometimes happen at other times too, and so there is a question of where is best to comment on this. I have reinstated the comment facility on each of the data pages, and I think this is the most natural place; the comments will appear in the "latest comments" list. I have also amended the title of the top listed page to "Notes, and an index...". The idea is that if you want to note something that is not part of a thread or existing page, this could be the place.

A downside here is that comments make pages slower to load. A few short comments won't matter, but it is conceivable that after a long time, I may have to prune the list. If so, I'll try to remove only old comments (on pages) that were relevant to a specific time.

Postscript on warm July

I see again a regrettable tendency to confuse by saying that July was the warmest month ever (eg here, here). In anomaly terms it wasn't anywhere near the records of early 2016. The basis of this silly statement is that the global absolute temperature has a seasonal cycle. But that has little meaning, unlike anomaly. A warm global anomaly means that it is quite likely (but not certain) that wherever you are, you experienced a warmer July than usual. But whether it was a "hottest month" for you depends entirely on your local seasonal cycle. Where I am, it is mid winter. In the tropics, goodness knows. Even in the NH, in many places July is not seasonally the hottest. There is a good reason for focussing on anomalies, and it should not be muddied.
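A toy example of the distinction (all numbers invented for illustration):

```python
# Global absolute temperature peaks in NH summer, so July can set an
# "absolute" record while its anomaly is unremarkable.
climatology = {"Feb": 12.1, "Jul": 16.4}   # seasonal cycle of global mean, °C
anomaly = {"Feb": 1.2, "Jul": 0.8}         # departures from climatology
absolute = {m: climatology[m] + anomaly[m] for m in climatology}
# July is the warmest month in absolute terms, but February in anomaly terms
```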


Tuesday, August 16, 2016

GISS up 0.05°C in July

GISS is up from 0.79°C in June to 0.84°C in July. This compares with a small fall of 0.02° in TempLS, and is very close to the posted 0.045°C rise in the NCEP/NCAR index. It is also the warmest July in the record (next was 0.74°C in 2011). The increase matches the 0.05°C rise in the UAH V6 lower troposphere, while RSS was unchanged.

I'll show the map comparisons below the fold. The updated comparison plots with 1998 are here. Since all months so far in 2016 have been individual records, many by a large margin,  a record 2016 is looking very likely.

Sunday, August 14, 2016

Surface TempLS down 0.02°C in July

The July report is late because I have been away. But the automated reporting continued, and I'll just note some of that here. TempLS mesh was down from 0.688°C in June to 0.667°C in July (base 1961-90). This continues the moderation of the post-El Nino decline. TempLS grid actually rose slightly. These results are consistent with the NCEP/NCAR index (up 0.04°C). The satellite measures varied; UAH6 LT was up 0.05°C, but RSS virtually unchanged.

The spherical harmonics map is here:

The regional temperature variations are similar to those in the NCEP/NCAR report. Nothing spectacular, but warm in N Russia, Brazil. Antarctica was cold, but sea surfaces overall warmer than June.

As mentioned, I'm back now, and fixing the issues that stopped various data being posted. The basic problem was that UAH changed its URLs this month. Normally that would just have returned an error, which the system could handle, but here UAH returned a small HTML file pointing to the new address. Unfortunately, my system returned that as data, which caused flow-on problems, even stopping my NCEP/NCAR program. My thanks to Walter Dnes, who supplied his own NCEP/NCAR numbers in comments, which filled the gap. I think everything should be fixed now.

Wednesday, August 3, 2016

NCEP/NCAR up 0.045°C in July

The NCEP/NCAR index increased last month, from 0.369°C in June to 0.414°C in July (anomaly base 1994-2013). Temperatures had been dropping rapidly after El Nino, but the drop in June was not so great, so this increase may signal the end of that. July 2016 is still well above July 2015, and indeed well above mid-2015 levels.

There weren't major hot/cold features in July. A warm area in N Russia; Antarctica had a mix of hot/cold (maybe more cold). The Pacific has a cool ENSO plume region, but warm on each side.

In other news, UAH V6 rose similarly, from 0.34°C to 0.39°C. Arctic Sea Ice has been melting consistently, but not spectacularly. It is about to pass 2011, but 2007 and 2012 may be more elusive.

I'll be travelling for a few days, and probably won't post on the TempLS results. However, I hope the automated reports will continue, and TempLS appears here. I should be back in time for GISS.

Wednesday, July 20, 2016

GISS down 0.14°C in June; NOAA up slightly

GISS was late this month. NOAA is also out - numbers here. GISS is down from 0.93°C in May to 0.79°C in June. This is more than the fall of 0.06° in TempLS, and a little more than the posted 0.1°C fall in the NCEP/NCAR index. As Sou has noted, it is still (just) the hottest June in the GISS record.

NOAA however rose slightly, from 0.877°C to 0.899°C. TempLS grid also rose, from 0.704°C to 0.750°C. This is a pattern often observed in the past, where GISS follows TempLS mesh, and TempLS grid tracks NOAA. It is expected from the different ways they are constructed. I'll show the map comparisons below the fold. The updated comparison plots with 1998 are here.

Friday, July 8, 2016

Surface TempLS down 0.064°C in June

TempLS mesh, reported here (as of 8 July, 4306 stations), was down from 0.746°C in May to 0.682°C in June (base 1961-90). This shows some easing of the post-El Nino decline also seen in the NCEP/NCAR index (down 0.1). In fact, TempLS grid rose slightly, from 0.70°C to 0.74°C. The SST component of TempLS also rose. The satellite measures varied; UAH6 LT was down 0.21°C, but RSS only 0.06°C.

The spherical harmonics map is here:

The one notable cool spot was in S America near Paraguay, but the high Arctic and much of Antarctica were also cool, as were parts of the US and a spot in the N Pacific. Warm in the W US, around Egypt, Alaska and part of Siberia. The breakdown shows only Antarctica (cool) as unusual. The different coverage of Antarctica is likely to lead to discrepancies between TempLS mesh and grid.

In other news, 2016 JAXA Ice briefly lost, then recovered its lead. It is likely to soon fall behind 2012.

Sunday, July 3, 2016

NCEP/NCAR down 0.1°C in June

The NCEP/NCAR index dropped again in June, from 0.471°C in May to 0.369°C in June (anomaly base 1994-2013). The drop is smaller than in previous months, and may be a sign of levelling. The average is now back to about Sept 2015 levels - the time of the first small rise of the El Nino.

Cold in E Europe to W Siberia, most of the US except the E Coast, and a band of cold from Labrador/Greenland into the N Atlantic. Warm in the Arctic and Canada, and the reanalysis still has the ENSO region fairly warm. Globally, the temperature rose somewhat at the end of the month.

In other news, UAH V6 also dropped considerably, from 0.55°C to 0.34°C. Arctic Sea Ice recovered somewhat (relatively), and 2016 is now not quite in the lead. There have been big drops in the last few days but, looking back through the record, this seems to be a feature of the end of our financial year.

Sunday, June 26, 2016

Brexit - who will make it happen?

I try mostly to stick to climate at Moyhu, maybe sometimes straying into maths. But I see our contemporaries all have something to say on it - Sou, Stoat, ATTP, Eli, and even Lord M.

One reason why I tend to avoid politics in blogging is that the information content tends to be low. Plenty of other people would say what I would say, and it all gets predictable. But for the record - yes, I think Britain should have stayed in the EU. I'm old enough to remember when Britain originally shied away from EEC membership, and then, through the hard work of Ted Heath and others, was able to belatedly join, suffering some disadvantage from the delay. I was actually in Britain in June 1975 when the first ever national referendum was held, which approved the (by then negotiated) EEC membership with 66% support.

I think Britain's sparing use of referenda is justified. In Australia we have them fairly frequently. There is provision for them in the constitution, which requires that, to succeed, a referendum must win not only a majority of voters but also a majority of states (ie 4 out of 6). Consequently, it takes a substantial majority to succeed.

In this post I don't want to dwell on the rights and wrongs of the actual vote, but just to raise a question that puzzles me. Who will actually implement it? That issue seems to be a tangle built into adding referenda on to parliamentary government. It hasn't affected us much, because referenda only succeed with bipartisan parliamentary support, with the extra burden of 4 states approving. And they tend to be issues which would not anyway affect the fate of governments. We have one coming up on gay marriage. Probably the most noted in recent times was the referendum on becoming a republic. But even if that had passed, it's unlikely that PM Howard, who opposed it, would have felt required to resign.

Anyway, what I'm writing about here is the mechanics. Exiting the EU in a reasonable way will be a very hard task. It will require acts of parliament, some of which may be unpopular. How will it be done?

Tuesday, June 14, 2016

GISS down 0.16°C in May; still hottest May in record

GISS is down from 1.09°C in April to 0.93°C in May. This is in line with the fall of 0.186° in TempLS, or as noted earlier, 0.164°C in the NCEP/NCAR index. Also similar falls in the troposphere indices. Still, it was 0.07° warmer than the next warmest May, in 2014. I'll show the map comparisons below the fold. But first, here is the comparison plot with 1998:

Thursday, June 9, 2016

HTTPS now default on Moyhu

Hoping it works. Please let me know of any problems in comments. You'll see the URL come up automatically as https:. I've taken the opportunity to reorganise the blog resources (images, scripts), and there are many ways this could have gone astray. But I'd especially like to hear of HTTPS security warnings. I think everything, including past posts, should be HTTPS OK, but lapses are certainly possible.

As I've mentioned in previous posts here and here, I think HTTPS for blogs is nothing but a nuisance, but likely inevitable. So I thought I should retrieve something from the exercise by using the search needed for the HTTPS change to fix other things as well - eg images served from now unreliable sources (copied to better places). Old posts may work better now.

Also, of course, I can still at this stage turn the switch back.

Wednesday, June 8, 2016

Surface TempLS down 0.19°C in May

TempLS mesh, reported here (as of 8 June, 4221 stations), was down from 0.934°C in April to 0.744°C in May (base 1961-90). This continues the post El Nino decline noted in the NCEP/NCAR index (down 0.164°) and in the satellite measures (RSS down 0.23°, UAH down 0.16°). But SST is only slightly down.

The spherical harmonics map is here:

The main cool spot was in Siberia, which was very warm during El Nino. Also US, S America around Paraguay, and a spot in the N Pacific. Warm in other boreal regions, Europe, and (unusual recently) Antarctica. The breakdown shows most regions not very cool, but only moderately warm.

In other news, JAXA Ice melting has slowed in recent days, but ice is still well down on past years.

Saturday, June 4, 2016

May SST down 0.03°C

I have been describing a new Moyhu index made by integrating the NOAA gridded sea surface temperature ERSST V4. I have added it to the data on the latest data page. You can find it on the monthly table, where it currently sits with the secondary sets; the new value is right down in the bottom right corner after scrolling. This table is getting unwieldy, and I'm going to reorganise. You can also find it in the updated active graph. The big virtue of ERSST is that it is released early in the following month, and it is now out for May. There will be a revised version later in the month.

I'll show plots below the jump. SST has been settling gradually following the El Nino, and in May, the ERSST index dropped from 0.411 to 0.383°C, on an anomaly base of 1981-2010.
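The calculation behind such an index is straightforward in outline: form anomalies against a 1981-2010 monthly climatology per cell, then take an area-weighted mean each month. Here is a minimal sketch of that pipeline with made-up data on a toy grid - not the actual ERSST files or the Moyhu code, and real use would also need a land mask:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy monthly SST record, years x months x lat x lon (made-up data,
# standing in for the ERSST V4 grids; a real run needs a land mask).
nyears, nlat, nlon = 40, 89, 180
years = np.arange(1977, 1977 + nyears)
lat = np.linspace(-88, 88, nlat)
sst = 15.0 + rng.normal(0.0, 0.5, (nyears, 12, nlat, nlon))

# 1. Per-cell monthly climatology over the 1981-2010 base period.
base = (years >= 1981) & (years <= 2010)
clim = sst[base].mean(axis=0)              # shape (12, nlat, nlon)

# 2. Anomalies: subtract the matching month's climatology in each cell.
anom = sst - clim[None]

# 3. Area-weighted (cos latitude) global mean for each month.
w = np.cos(np.radians(lat))
index = np.average(anom.mean(axis=3), axis=2, weights=w)
print(index.shape)                         # (40, 12): one value per month
```

By construction, the index averages to zero over the base period, which is a useful sanity check on any such integration.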

Friday, June 3, 2016

NCEP/NCAR down 0.164°C in May

The NCEP/NCAR index continued its steady descent in May, from 0.635°C in April to 0.471°C in May (anomaly base 1994-2013). That is down about 0.3°C since March. However, it is still the warmest May in that record. Breaking the pattern of recent months, it was rather cold in Siberia and N Asia. Warm in Australia and Central Europe. Cool in S USA, but warm in W Canada. The ENSO plume is now rather cool.

I should mention that at WUWT, Walter Dnes is doing a somewhat similar analysis, with regression-based links to the major indices. His NCEP/NCAR integration gives similar results.

In other news, UAH V6 also dropped considerably, from 0.71°C to 0.55°C. But Arctic Sea Ice is still well down on previous years.

Thursday, June 2, 2016

Integrating on a spherical grid

This post arises from my integration of the NCEP/NCAR reanalysis, described here and here. I have been a little discontented with my method of integrating on a regular lat/lon grid on a sphere. I tried what is usually a very good method - direct trapezoidal integration, with cos weighting for latitude. It worked fairly well, but there was an oddity, in that it gave zero weight to the pole values. That didn't seem right, so I used another method in which cells were weighted by the midpoint values of cos latitude, but with the temperature average used for the cells. That gave finite weighting to the poles, but I still wasn't sure if it was right.

I should emphasise that this is a very minor problem. There are 10226 separate values to integrate, so if two are wrongly weighted, that probably doesn't make a difference at three significant figures. However, I was motivated to review it by some recent posts by Walter Dnes at WUWT. He is doing something similar, and apparently carefully and well. He has cross-checked with my calculations, and reports good agreement with small discrepancies in the third significant figure. Since I found good agreement with NOAA PSD too, that is all reassuring. However, I would like to resolve discrepancies if possible.

There are many possible sources. Calculating the anomaly base is one. I mentioned in this post some leap year issues; that is a very likely source. But I thought I should in any case review my treatment near the poles.
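For illustration, here is a toy comparison (not the actual TempLS code) of the two kinds of weighting: cos-weighted trapezoidal integration, which gives the pole rows essentially zero weight, versus an exact band-area weighting (differences of sin latitude at cell edges), which gives them a small finite weight. The test field cos(lat) has a known area mean of pi/4:

```python
import numpy as np

# Regular lat/lon grid with rows at both poles, like NCEP/NCAR's 2.5 degrees.
nlat, nlon = 73, 144
lat = np.linspace(-90, 90, nlat)
T = np.cos(np.radians(lat))[:, None] * np.ones((1, nlon))  # test field

zonal = T.mean(axis=1)                    # zonal means, one per latitude

# Method 1: trapezoidal rule with cos(latitude) node weights.
# cos(+-90 deg) is zero (to rounding), so the pole rows get no weight.
w1 = np.cos(np.radians(lat))
w1[0] *= 0.5
w1[-1] *= 0.5                             # trapezoidal half-weights at ends
mean1 = np.average(zonal, weights=w1)

# Method 2: weight each latitude band by its exact sphere area, which is
# proportional to the difference of sin(latitude) at the band edges.
# The pole rows now own small caps, so they get a finite weight.
edges = np.concatenate(([-90], 0.5 * (lat[:-1] + lat[1:]), [90]))
w2 = np.sin(np.radians(edges[1:])) - np.sin(np.radians(edges[:-1]))
mean2 = np.average(zonal, weights=w2)

print(mean1, mean2)                       # both close to pi/4 = 0.785...
```

Both methods land within a few parts in ten thousand of the exact value here, which is consistent with the post's point that the choice of pole treatment only matters in the third significant figure or beyond.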

Tuesday, May 31, 2016

BoM metadata assistant

I have come across a very useful resource at Australia's Bureau of Meteorology. If you go looking at station data (like this), there is a link to "Additional site information", and then to a "basic site summary". And this has all sorts of metadata, including a history of measurements taken, detailed maps of the site (including a skyline diagram), and a history of instrument changes. It is very useful in the context of the arguments about places like Amberley and Rutherglen.

But it is a bit awkward to access through this chain of links. So I have added it to the portals page. This has a row of buttons at the top that you can use for different data sets, and I have added one for "BoM Metadata". It brings up a page arranged by states; it is the same set and format as the ASN part of the GHCN Daily page. When you ask for a state (WA shows initially), it shows a list of place names and links. The link leads to the metadata for that place, in a separate tab.

Many of the stations don't have temperature data at all. Generally those that do will be marked by dates for the duration of the record, so look out for these. I don't guarantee that you will find useful information for any link, or even any information at all. But the main stations have good information. Try it! I'll place a copy below the jump.