Tuesday, June 19, 2018

GISS May global down 0.03°C from April.

The GISS land/ocean temperature anomaly fell 0.03°C last month. The May anomaly average was 0.82°C, down slightly from 0.85°C in April. The GISS report notes that it is the fourth warmest May in the record. The decline is very similar to the 0.038°C fall in TempLS mesh, although the NCEP/NCAR index declined rather more.

The overall pattern was similar to that in TempLS. Warm in most of N America, and equally warm in Europe, especially around the Baltic. Warm in East Asia, especially Siberia. Antarctica mostly warm. Still a pattern of warm patches along about 40°S.

As usual here, I will compare the GISS and previous TempLS plots below the jump.

Sunday, June 10, 2018

May global surface TempLS down 0.038 °C from April.

The TempLS mesh anomaly (1961-90 base) fell a little, from 0.716°C in April to 0.678°C in May. This is less than the 0.09°C fall in the NCEP/NCAR index, while the satellite TLT indices fell by a similar amount (UAH 0.03°C).

It was very warm in much of N America, except NE Canada (cold), and very warm in Europe. Cold in E Siberia, but warm in East Asia generally. Again a pattern of warm blobs around 40-50 °S, though less marked than in recent months. Quite warm in Antarctica (relatively).

Here is the temperature map. As always, there is a more detailed active sphere map here.



Data from Canada delayed this report by a couple of days. Following my recent post on the timing of data arrival, I kept a note of how the TempLS estimates changed day by day as May data came in. The TempLS report is now first posted when the SST results are available, but I wait until all large countries are in before writing a post about it. Here is the table (Melbourne time):
Date       Number of stations (incl SST)   Temperature
June 05    4516                            0.676
June 06    4829                            0.723
June 07    5294                            0.709
June 08    5372                            0.708
June 09    5381                            0.709
June 10    5474                            0.678

Canada (late) did have a cooling effect.

Read More

Sunday, June 3, 2018

May NCEP/NCAR global surface anomaly down by 0.09°C from April

In the Moyhu NCEP/NCAR index, the monthly reanalysis anomaly average fell from 0.377°C in April to 0.287°C in May, 2018. This cancels out the last two months of increase, and matches the February average.

It was, for once, warm both in North America (except the far north) and in Europe, especially Scandinavia. Russia was cold in the west, warm in the east. Nothing special at either pole. Probably the main contributor to the drop was a chill in the N Atlantic region, including Greenland. Active map here.

I had thought that the gradual warming might be associated with the decline of La Niña. But the changes are small, so shouldn't be over-interpreted. The BoM still says that ENSO is neutral, and likely to stay so for a few months.


Thursday, May 31, 2018

To see the month's GHCN coverage, patience is needed.

I often see on contrarian sites graphs, usually from NOAA, which are supposed to show how sparse is GHCN-M's coverage of land sites, as used by the major US temperature indices. The NOAA monthly reports usually show interpolated plots, but if you go to some legacy sites, you can get a plot like this:





It is a 5x5° grid, but it does look as if there are a lot of empty cells, particularly in Africa. But if you look at the fine print, it says that the map was made April 13. That is still fairly early in the month, but NOAA doesn't update. There is a lot of data still to come. Station coverage isn't ideal, but it isn't that bad.

I took issue with a similar graph from SPPI back in 2010. That was quite a high visibility usage (GISS this time). Fortunately GISS was providing updates, so I could show how using an early plot exaggerated the effect.

The issue of spread-out arrival of data affects my posting of monthly TempLS results. I calculate a new monthly average temperature each night, for the current month. I post as soon as I can be reasonably confident, which generally means when the big countries have reported (China, Canada etc). I did comment around January that the temperatures were drifting by up to about 0.04°C after posting. I think that was a run of bad luck, but I have been a little more conservative since, with more stable results. Anyway, I thought I should be more scientific about it, so I have been logging the arrival date of station data in GHCN-M.
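For anyone who wants to do the same, here is a minimal Python sketch of the kind of logging involved. The file name and the v3 column layout handling are assumptions for illustration, not my actual scripts: each day, read the posted GHCN-M file, see which stations now report a value for the target month, and log the newcomers with the date.

# A minimal sketch (assumed file names and workflow) of logging when station
# data first appears in the posted GHCN-M v3 file.
import datetime

def stations_reporting(dat_path, year, month):
    """Return the set of station IDs with a non-missing value for year/month."""
    ids = set()
    with open(dat_path) as f:
        for line in f:
            if line[11:15] != str(year):                 # v3 layout: ID cols 1-11, year 12-15
                continue
            val = line[19 + (month - 1) * 8: 24 + (month - 1) * 8]
            if val.strip() != "-9999":                   # -9999 marks a missing month
                ids.add(line[:11])
    return ids

today = stations_reporting("ghcnm.tavg.latest.qcu.dat", 2018, 3)   # assumed file name
try:
    seen = set(open("seen_ids.txt").read().split())
except FileNotFoundError:
    seen = set()
with open("arrival_log.csv", "a") as log:
    for sid in sorted(today - seen):
        log.write("%s,%s\n" % (sid, datetime.date.today()))
with open("seen_ids.txt", "w") as f:
    f.write("\n".join(sorted(today)))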

So I'll show here an animation of the arrival of March 2018 data. The dates are when the station data first appears on the posted GHCN-M file. Click the bottom buttons to step through.


The colors go from red when new to a faded blue. The date is shown lower left.

The behaviour of the US is odd, and I'll look into it. About 500 stations post numbers in the last week of February. I presume these are interim numbers, but my logging didn't record changing values. Then another group of stations report mid April.

Otherwise much as expected. The big countries did mainly report by the 8th. A few medium ones, like South Africa, Mongolia, Iran and Sudan, were quite a lot later. But there is substantial improvement in overall coverage in the six weeks or so after April 1. Some of it is extra stations that arrive after a country's initial submission.

There certainly are parts of the world where more coverage would be useful, but it doesn't help to exaggerate the matter by showing incomplete sets. The good news from the TempLS experience is that, even with an early set, the average does not usually change much as the remaining data arrives. This supports the analysis here, for example, which suggests that far fewer stations, if reasonably distributed, can give a good estimate of the global integral.

Tuesday, May 29, 2018

Updating the blog index.

I wrote late last year about improving the blog topic index, which is at the top of the page list, to the right. I've now tinkered a bit more. The main aim was to automate updates. This should now work, so the index should always be up to date.

The other, minor improvement was to add a topic called "Complete listing". This does indeed give a listing of all posts, with links, back to the beginning of the blog in 2009. It includes pages, too (at the bottom), so there are currently 751 in the list, organised by date.


Friday, May 25, 2018

New interactive updated temperature plotting.

As part of the Moyhu latest data page, I have maintained a daily updated interactive plotter. I explained briefly the idea of it here. There is a related and more elaborate annual plotter kept as a page here, although I haven't kept that updated.

I think interactive plotting is a powerful Javascript capability. You can move the curves around as you wish - expanding or contracting the scales. You can choose which of a large set of data offerings to show. You can smooth and form regression lines.

But the old version, shown with that old post, looks a bit raw. I found I was using it more to make display graphs (capturing the screen with PrintScreen and pasting the result into Paint), so I have cleaned up the presentation. I have also simplified the controls. I had been using draggable popup windows, which are elegant, but not so straightforward, and don't make it easy to expand facilities. So I have reverted to an old-fashioned control panel, in which I can now include options such as writing your own headings and y-axis label. There is now also the option of changing the anomaly base, and you can choose any smoothing interval. Here is how it looks, in a working version:


You can choose data by clicking checkboxes on the left. Dragging in the main plot area translates the plots; dragging the pointer under the x-axis changes the time scale, and dragging vertically left of the y-axis changes the y-scale. At bottom left (below the checkboxes), there is a legend, only partly visible. This reflects the colors and choice of data, and you can drag it anywhere. The idea is that you can place it on the plot when you want to capture the screen for later presentation.

The control panel has main rows for choosing the regression, smoothing and anomaly base. When you want to make a choice, first tick the relevant checkbox, and then enter data in the textboxes. Then, to make it take effect, click the top right run button. The change you make will apply either to all the curves, or just to one nominated on the top row, depending on the radio buttons top left. The nominated curve is by default the last one chosen, but you can vary this with the arrow buttons just left of the run button. However, the anomaly base can only be altered for all, and the color selection only for one.

Choosing regression over a period displays the line, and also the trend, in the legend box, in °C/century units. You can only have one trend line per dataset, but possibly with different periods. If you want to make a trend go away, just enter a date outside the data range (0 will do). You could also deselect and reselect the data.

Smoothing is just moving average, and you enter the period in months. Enter 1 for no smoothing (also the default).
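For anyone who wants the arithmetic spelled out, here is a minimal Python sketch (illustrative only, not the plotter's Javascript) of the two operations: a moving-average smooth over a chosen number of months, and an OLS regression trend reported in °C/century.

# A minimal sketch of the smoothing and trend arithmetic; the data is made up.
import numpy as np

monthly = 0.5 + 0.1 * np.sin(np.arange(240) / 6.0)    # assumed anomaly series, °C
months = 12                                            # smoothing interval; 1 means no smoothing

smoothed = np.convolve(monthly, np.ones(months) / months, mode="valid")

t_years = np.arange(monthly.size) / 12.0
slope_per_year = np.polyfit(t_years, monthly, 1)[0]    # OLS slope in °C/year
print("trend = %.2f °C/century" % (slope_per_year * 100))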

There are two rows where you can enter your own text for the title and y-axis label. Click run to make it take effect. The title can include any HTML, eg bold, text-size etc. You can use heading tags, but that takes up room.

Color lets you choose from the colored squares. A choice takes effect immediately, for the nominated data only.

Generally keep the checkboxes in the control panel unchecked unless you are making a change.

For the anomaly base, you can also enter an out-of-range year to get no anomaly modification at all. The plots are then each shown with the supplier's base. I don't really recommend this, and it tends to get confused if you have already varied base choices.

There are two more buttons, on the right of the control panel. One is Trendback. This switches (toggles) to a style which was in the old version, and is described here, for example. It shows the trend from the time on the x-axis to the present (last data) in °C/century. In that mode, it won't respond to the regression, smooth, or anomaly base properties. The other button is "Show data". This will make a new window with the numbers graphed on the screen. This can be quite handy for the trendback plots, for example. You can save the window to a file.

Here is how the plot might look if you drag the legend into place:









Thursday, May 17, 2018

GISS April global down 0.02°C from March.

The GISS land/ocean temperature anomaly fell 0.02°C last month. The April anomaly average was 0.86°C, down slightly from 0.88°C in March. The GISS report notes that it is still the third warmest April in the record. The fall is very similar to the 0.016°C fall in TempLS mesh, although the NCEP/NCAR index showed a slight rise.

The overall pattern was similar to that in TempLS. Cold in most of N America, and contrasting warmth in Europe. Warm in East Asia, especially arctic Siberia. Polar regions variable. Warm in S America and Australia, and for at least the third month, a curious pattern of warm patches along about 40°S.

As usual here, I will compare the GISS and previous TempLS plots below the jump.

Tuesday, May 15, 2018

Electronic circuit climate analogues - amplifiers and nonlinearity


This post is a follow-up to my previous post on feedback. The main message in that post was that, although talking of electronic analogues of climate feedback is popular in some quarters, it doesn't add anything mathematically. Feedback talk is just a roundabout way of thinking about linear equations.

Despite that, in this post I do want to talk more about electronic analogues. But it isn't much about feedback. It is about the other vital part of a feedback circuit - the amplifier, and what that could mean in a climate context. It is of some importance, since it is a basic part of the greenhouse effect.

The simplest feedback diagram (see Wiki) has three elements:



They are the amplifier, with gain AOL, a feedback link, with feedback fraction β, and an adder, shown here with a minus sign. The adder is actually a non-trivial element, because you have to add the feedback to the input without one overriding the other. In the electronic system, this generally means adding currents. Adding voltages is harder to think of directly. However, the block diagram seems to express gain of just one quantity, often thought of as temperature.
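For reference, a tiny numeric illustration (my own, not from the diagram's source) of what the loop does: with the adder's minus sign, output = AOL×(input − β×output), so the closed loop gain is AOL/(1 + β·AOL).

# A small numeric illustration of the closed-loop gain; numbers are illustrative.
AOL, beta = 1000.0, 0.1            # assumed open-loop gain and feedback fraction
closed_loop_gain = AOL / (1 + beta * AOL)
print(closed_loop_gain)            # ~9.9, close to 1/beta when AOL*beta >> 1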

In the climate analogue, temperature is usually related to voltage, and flux to current. So there is the same issue, that fluxes naturally add, but temperature is the variable that people want to talk about. As mentioned last post, I often find myself arguing with electrical engineers who have trouble with the notion of an input current turning into an output voltage (it's called a transimpedance amplifier).

If you want to use electronic devices as an analogue of climate, I think a fuller picture of an amplifier is needed. People now tend to show circuits using op amps. These are elaborately manufactured devices, with much internal feedback to achieve high linearity. They are differential, so the operating point (see below) can be zero. I think it is much more instructive to look at the more primitive devices - valves, junction transistors, FETs etc. But importantly, we need a fuller model which considers both variables, voltage and current. The right framework here is the two port network.

I've reached an awkward stage in the text where I would like to talk simultaneously about the network framework, junction transistors, and valves. I'll have to do it sequentially, but to follow you may need to refer back and forth. A bit like a feedback loop, where each depends on the other. I'll go into some detail on transistors, because the role of the operating point, fluctuations and linearity, and setting the operating point are well documented, and a good illustration of the two port treatment. Then I'll talk about thermionic valves as a closer analogue of climate.

Two Port Network

Wiki gives this diagram:


As often, engineering descriptions greatly complicate some simple maths. Many devices can be cast as a TPN, but all it means is that you have four variables, and the device enforces two relations between them. If these are smooth and can be linearised, you can write the relation for the small increments as

y₁ = A₁₁y₃ + A₁₂y₄
y₂ = A₂₁y₃ + A₂₂y₄

where y₁...y₄ are the four increments (two chosen for the left side, two for the right) and A is the 2×2 matrix of coefficients.
Wiki, like most engineering sources, lists many ways you could choose the variables for left and right. For many devices, some coefficients are small, so you will want to be sure that A is not close to singular. I'll show how this works out for junction transistors.

This rather general formulation doesn't treat the input and output variables separately. You can have any combination you like (subject to invertible A). For linearity, the variables will generally denote small fluctuations; the importance of this will appear in the next section.

The external circuitry will contribute extra linear equations. For example, a load resistor R across the output will add an Ohm's Law, V₂ = I₂R. Other arrangements could provide a feedback equation. With one extra relation, there is then just one free variable. Fix one, say an input, and everything else is determined.

Junction transistors

I'm showing the use of a junction transistor as amplifier because it is a well documented example of:
  • a non-linear device which has a design point about which fluctuations are fairly linear
  • a degree of degeneracy, in that it is dominated by a strong association between I₁ and I₂, with less dependence on V₂ and little variation in V₁. IOW, it is like a current amplifier, with amplification factor β.
  • there is simple circuitry that can stably establish the operating point.
Here, from Wiki, is a diagram of the design curves, which are a representation of the two-port relation. It takes advantage of the fact that there is a second relation, basically between I₁ and V₁, with V₁ restricted to a narrow range (about 0.6V for silicon).



The top corner shows the transistor with variables labelled; the three pins are emitter E, base B and collector C. In TPN terms, I₁ is the base current IB; I₂ is the current from collector to emitter IC, and V₂ is the collector to emitter voltage VCE. The curves relate V₂ and I₂ for various levels of I₁. Because they level off, the dependence is mainly between IC and IB. The load line in heavy black shows the effect of connecting the collector via a load resistor. This constrains V₂ and I₂ to lie on that line, and so both vary fairly linearly with I₁.

The following diagrams have real numbers and come from my GE transistor manual, 1964 edition, for a 2N1613 NPN transistor. The left is a version of the design curves diagrammed above, but with real numbers. It shows as wavy lines a signal of varying amplitude as it might be presented as base current (top right) and appear as a collector voltage (below). The load resistor line also lets you place it on the y-axis, where you can see the effect of current amplification, by a factor of about 100. The principal purpose of these curves is to show how non-linearity is expressed as signal clipping.





I have included the circuit on the right, a bias circuit, to show how the design operating point is achieved. The top rail is the power supply, and since the base voltage is nearly fixed at about 0.6V, the resistor RB determines the base current, and hence which curve applies. The load RL determines the load line, so where these intersect is the operating point.

So let's see how this works out in the two-port formulation. We have to solve for two variables; the choice is the hybrid or h-parameters:

V₁ = h₁₁I₁ + h₁₂V₂
I₂ = h₂₁I₁ + h₂₂V₂

Hybrid suggests the odd combination; input voltage V₁ and output current I₂ are solved in terms of input current I₁ and output voltage V₂. The reason is that the coefficients are small, except for h₂₁ (also called β). There is some degeneracy; there isn't much dependence at all on V₂, and V₁ is in any case not going to vary much. So these belong on the sides where they are placed. I₂ and I₁ could be switched; that is called inverse hybrid (g-). I've used the transistor here partly as a clear example of degeneracy (we'll see more).
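To make the bookkeeping concrete, here is a small numerical sketch (illustrative numbers only, loosely transistor-like, not from any datasheet) of the earlier point: the two h-parameter equations plus the load line give three linear equations, so fixing the input current I₁ determines everything else.

# A minimal numeric sketch: two h-parameter relations plus the load equation,
# solved as one small linear system.  All numbers are illustrative assumptions.
import numpy as np

h11, h12, h21, h22 = 1.0e3, 1.0e-4, 100.0, 1.0e-5    # assumed h-parameters
R = 5.0e3                                             # load resistor, ohms
I1 = 10.0e-6                                          # chosen input: 10 uA of base current

# Unknowns x = (V1, V2, I2); equations written as A x = b
A = np.array([[1.0, -h12, 0.0],      # V1 - h12*V2 = h11*I1
              [0.0, -h22, 1.0],      # I2 - h22*V2 = h21*I1
              [0.0,  1.0,  R ]])     # V2 + R*I2   = 0   (load, with I2 flowing into the port)
b = np.array([h11 * I1, h21 * I1, 0.0])
V1, V2, I2 = np.linalg.solve(A, b)
print(V1, V2, I2)                    # the small input current fixes all remaining variables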

Thermionic valve and climate analogue

From Wiki comes a diagram of a triode



The elements are a heated cathode k in a vacuum tube, which can emit electrons, and an anode a, at positive voltage, to which they will move, depending on voltage. This current can be modulated by varying the voltage applied to the control grid g, which sits fairly close to the cathode.

I propose the triode here because it seems to me to be a closer analogue of GHGs in the atmosphere. EE's sometimes say that the circuit analogue of climate fails because they can't see a power supply. That is because they are used to fixed voltage supplies. But a current supply works too, and that can be seen with the triode. A current flows and the grid modulates it, appearing to vary the resistance. A FET is a more modern analogue, in the same way. And that is what happens in the atmosphere. There is a large solar flux, averaging about 240 W/m² passing through from surface to TOA, much of it as IR. GHGs modulate that flux.

A different two-port form is appropriate here. I₁ is negligible, so should not be on the right side. Inverse hybrid could be used, or admittance. It doesn't really matter which, since the outputs are likely to be related via a load resistor.

Climate amplifier

So let's think more about the amplifier in the climate analogue, first as a two-port network. Appropriate variables would be V₁, I₁ as temperature and heat flux at TOA, and V₂, I₂ as temperature and upward heat flux at the surface. V₂ is regarded as an output, and so should be on the LHS, and I₁ as an input, on the right. One consideration is that I₂ is constrained to be the fairly constant solar flux at the surface, so it should be on the RHS. That puts V₁ on the left, and pretty much leads to an impedance-parameter formulation - a two-variable form of Ohm's Law.

The one number we have here is the Planck parameter, which gives the sensitivity, before feedback, of V₂ to I₁ (or vice versa). People often think that this is determined by the Stefan-Boltzmann relation, and that does give a reasonably close number. But in fact it has to be worked out by modelling, as Soden and Held explain. Their number comes to about 3.2 Wm⁻²/K. This is a diagonal element in the two-port impedance matrix, and is treated as the open loop gain of the amplifier. But the role of possible variation of the surface flux coefficient should also be considered.

As my earlier post contended, mathematically at least, feedback is much less complicated than people think. The message of this post is that if you want to use circuit analogues of climate, a more interesting question is, how does the amplifier work?







Friday, May 11, 2018

TempLS monthly updates of global land and sea temperature

TempLS is a program I use to provide a monthly global land/ocean anomaly index, using unadjusted GHCNM V3 data for land, and ERSST V5 for SST. There is a summary article here. It is essentially a spatial integration, which reduces to an area-weighted average of the anomalies. My preferred method is to use an irregular triangular mesh to get the weights. It is then possible to separately sum with weights the stations of various regions. I have been doing this (as described here) for about three years as part of the monthly reporting. A typical plot for April is here.

It shows the arithmetic contribution that each region makes to the published global average. It isn't itself the temperature of anything; if you add all the continent colored bars shown, you get the land global amount, in red (that is new). And if you add land and SST you get the global, in black. Each bar is the weighted sum of locals divided by the global sum of weights. To get the regional average, the denominator would instead be the sum of weights for that region.
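As a check on that arithmetic, here is a minimal sketch with made-up numbers showing why the regional bars add exactly to the global mean, while a true regional average divides by the region's own weight sum.

# A minimal sketch of contribution-to-global vs regional average; numbers are illustrative.
import numpy as np

weights = np.array([1.0, 2.0, 1.5, 3.0, 2.5])          # mesh area weights (made up)
anoms   = np.array([0.9, 0.4, 1.2, 0.3, 0.5])          # station anomalies, °C (made up)
region  = np.array(["land", "land", "land", "sst", "sst"])

W = weights.sum()
global_mean = (weights * anoms).sum() / W

contrib = {r: (weights[region == r] * anoms[region == r]).sum() / W for r in ("land", "sst")}
reg_avg = {r: (weights[region == r] * anoms[region == r]).sum() / weights[region == r].sum()
           for r in ("land", "sst")}

print(global_mean, sum(contrib.values()))   # the contributions add exactly to the global mean
print(contrib, reg_avg)                     # the regional averages are larger, since the denominator is smaller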

I plan now to more systematically post the land and SST averages, and also plots of regional averages. The SST will be particularly useful, because ERSST posts within a couple of days of the start of the month, so TempLS can produce a result much earlier than the alternatives. NOAA publishes a revision late in the month, but changes are usually small.

I have added TempLS_SST and TempLS_La to the sets normally displayed. You can find the numbers (anomaly base 1961-1990) under Land/SST in the maintained table of monthly data. There are trend plots in the Trend viewer. And the plots are available on the interactive plotter. Here is an example of recent data, compared with HADSST3 and NOAA SST:





I'll probably report the SST for each month in my first post for each month, along with the reanalysis average.

I'll show now the other possibilities in the monthly bar plot style. Showing the regional averages gives this:



The regions are far more variable than the globals, which obscures the picture somewhat. Note the huge Arctic peaks. So I'll also show the progression of just the land, SST and global averages. It is now practical to show more months. Here is the plot:



It emphasises the variability of land relative to SST. This may be seen in better proportion by reverting to the first style, showing the contributions to the global average:



Again, red and blue (land and SST) add to the black total. It shows how monthly variations are dominated by the fluctuations on land. I'll find a way to include these extra graphs in the monthly reporting.



Thursday, May 10, 2018

April global surface TempLS down 0.016 °C from March.

The TempLS mesh anomaly (1961-90 base) fell a little, from 0.721°C in March to 0.705°C in April. This contrasts with a small 0.046°C rise in the NCEP/NCAR index, while the satellite TLT indices fell by a similar amount (UAH 0.04°C).

It was very cold in much of N America, except west, but very warm in Europe and E Siberia, and warm in East Asia generally. Also warm in Australia, Argentina, and once again a curious pattern of warm blobs around 40 °S. The Arctic and Antarctic were mixed.

Here is the temperature map. As always, there is a more detailed active sphere map here.



Friday, May 4, 2018

Feedback, climate, algebra and circuitry.

I've been arguing again at WUWT (more here). It is the fourth of a series by Lord Monckton, claiming to have found a grave error in climate science, so that it is now game over. My summary after three posts is here.

The claim is, of course, nonsense, and based on bad interpretation of notions of feedback. But I want to deal here with the general use of feedback theory in climate, and the mystery that electrical engineers who comment on this stuff like to make of it. The maths of feedback is trivial; just simple linear equations. And it is best to keep it that way.

A point I often make in commentary is that climate science really doesn't make much use of feedback theory at all. Critics invoke it a lot more. I continually encounter people who think that feedback is the basis of GCMs. I have to explain that, no, they do not form any part of the structure of GCMs, and cannot. A GCM is a solver for partial differential equations. That means it creates, for each step, a huge array of linear equations relating variables from neighboring cells. That isn't always obvious in the explicit methods they tend to use, but there is still an underlying matrix of coefficients. And because each row relates just a few neighboring values, the matrix is sparse. This is an essential feature, because of the number of cells. But global averages, such as would come from a feedback expression, are not sparse. They connect everything. So they cannot fit within the discretised pde framework.

Linear equations and feedback

Problems described as feedback are really just linear equations, or systems of a few linear equations; usually one fewer equation than unknowns, so on elimination, one variable is expressed as a multiple of another. I described here how a feedback circuit could be analysed simply by writing linear current balance (Kirchhoff rule) equations at a few nodes. In climate, the same is done by balancing global and time average heat fluxes, usually at TOA.

The paper of Roe 2009 is often cited as the most completely feedback oriented analysis. I'll show its presentation table here:

It gives the appearance that ΔR is both input and output, because it is a flux that is conserved. But the more conventional feedback view is that ΔT is the output. If we take the multi-feedback version of (c)
ΔT = λ₀(ΔR + ΣcᵢΔT)
which I can rewrite, setting c₀ = -1/λ₀, as just
ΔR + c₀ΔT + ΣcᵢΔT = 0

This is just the equilibrium heat flux balance at TOA, since each of the cᵢΔT is a temperature-responsive flux. I have given c₀ΔT special status, because it is the Planck term, representing the radiation response guaranteed by the Stefan-Boltzmann law (linearising S-B gives c₀ = -4σT³).

Feedback reasoning and linear equations

Just resolving a linear equation is not a mathematical difficulty. So what is all the feedback talk about? Mainly, it is trying to see the equation as built up in parts. There is no math reason to do that, but people seem to want to do it. The process can be described thus:
  • Select (as in Roe above) a subset to refer to as the reference system. A logical set is the forcing and the necessary Planck response.
    ΔR + c₀ΔT + ΣcₖΔT = 0
    This is like a finite gain amplifier (c₀)
  • Express the other terms as feedbacks relative to c₀:
    ΔR + c₀ΔT*(1 - Σfₖ) = 0,   fₖ = -cₖ/c₀
    The f's are then called the feedback coefficients. For stability (see next) they should sum to less than 1. Negative values make this more likely, and so are stabilising. As the coefficient of ΔT diminishes in magnitude, the amount by which ΔT has to change to restore balance increases. That is said to increase the gain, and the situation becomes singular (infinite gain) as the coefficient approaches zero. A small numerical sketch follows this list.
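Here is the promised sketch. The Planck value of 1/3.2 K per W/m² and the forcing of 3.7 W/m² are illustrative assumptions, not results from this post; the point is just how the response grows as Σfₖ approaches 1.

# A minimal numeric sketch of the algebra above; the numbers are illustrative assumptions.
lam0 = 1 / 3.2                      # K per W/m^2, no-feedback (Planck-only) sensitivity
dR = 3.7                            # assumed forcing, W/m^2

for f_sum in (0.0, 0.3, 0.6, 0.9, 0.99):
    dT = lam0 * dR / (1 - f_sum)    # solving dR + c0*dT*(1 - sum f) = 0, with c0 = -1/lam0
    print(f_sum, round(dT, 2))      # the response blows up as the sum of f's approaches 1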

Stability

If the singularity is passed, and the coefficient of ΔT becomes positive, the system is unstable. The reason involves an extra bit of physics. Suppose total flux is out of balance. Then the region into which it flows will cool or heat. The coefficient here is, for a uniform material, called the heat capacity H, and is positive. For a complex region like the Earth surface, that is hard to quantify, but will still be positive. That is, heat added will make it warmer, not cooler. So the equation for temperature change following imbalance is
ΔR + cΔT = H*dΔT/dt
If c is positive, this has exponentially growing solutions, and so is unstable. For c negative, the solutions decay, and lead toward equilibrium.
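A quick numerical illustration of that statement (arbitrary units, not a climate calculation): integrating H*dΔT/dt = ΔR + cΔT by Euler steps for a negative and a positive c.

# A minimal sketch of the stability behaviour; all numbers are illustrative.
H, dR, dt = 8.0, 3.7, 0.01           # assumed heat capacity, forcing, time step

for c in (-3.2, +0.5):
    T = 0.0
    for _ in range(5000):
        T += dt * (dR + c * T) / H
    print(c, T)                       # c<0 settles near -dR/c; c>0 keeps growing exponentially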

It's often said that positive feedback is impossible, because it would mean instability. But in the above algebra, that is not true; the requirement is just that Σfₖ < 1. It is true if you choose a different reference system - just the forcing. That can only work in conjunction with a c₀ΔT term where c₀ is negative. Electrically, the reference system is then like an operational amplifier.

Summary so far

Systems often described using feedback terminology are really just linear equations (or systems). Feedback arguments do not yield anything beyond what elementary linear solving can do, including a stability criterion. But with linear algebra, you can identify the various steps of feedback reasoning if you want to.

Systems are not exactly linear

Roe points out that linear feedback is just the use of a first order Taylor series expansion of a nonlinear relation. This is seen very directly as a linear system. If the forcing R is to be balanced by a flux F which is a function of T and of variables u, v which depend on T, then to first order

dR = (∂F/∂T) dT + (∂F/∂u du/dT) dT + (∂F/∂v dv/dT) dT

each partial holding the other variables (from T,u,v) fixed. This gives the required linear relation with the bracketed terms becoming the c coefficients (but negative).

More advanced

There is a lot of approximation here. Not only linearity (usually OK) but also in the use of global averaging. But that doesn't mean linear analysis has to be discarded if you want to take account of these things. You can extend using an inexact Newton's method. Suppose we have the base system

R = F(u,v,T)

where again u and v are variables (like humidity) that depend on T. Suppose we have an initial state subscripted 0, and a perturbed state subscripted 1, of which R₁ is known. Then to first order

F(u₁,v₁,T₁) - R₁ = F(u₀,v₀,T₀) - R₁ + (∂F/∂T)₁ dT + (∂F/∂u du/dT)₁ dT + (∂F/∂v dv/dT)₁ dT = 0

This can be solved as before as a linear equation in dT. Then updating

T = T + dT, u = u + (du/dT)₁ dT etc, we can solve again

F(u,v,T) - R₁ + (∂F/∂T)₁ dT + (∂F/∂u du/dT)₁ dT + (∂F/∂v dv/dT)₁ dT = 0

and iterating until F(u,v,T) - R₁ is acceptably small. Note that I have not updated the partial derivatives, which are the feedback coefficients. That is what makes it an inexact Newton; convergence is a bit slower, but we probably don't have the information to do that update.
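Here is a minimal sketch of that inexact Newton iteration for a single balance R₁ = F(T). The grey-body F and the numbers are purely illustrative, not from the post; the frozen derivative plays the role of the unchanging feedback coefficients.

# A minimal sketch of the inexact Newton iteration for one balance R1 = F(T).
# F, the emissivity and the 3.7 W/m^2 forcing are illustrative assumptions.

SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
EPS = 0.61         # assumed effective emissivity

def F(T):
    return EPS * SIGMA * T**4

T = 288.0                      # starting state T0
c = 4 * EPS * SIGMA * T**3     # derivative dF/dT frozen at T0 (~3.3 W/m^2/K)
R1 = F(T) + 3.7                # target: old balance plus an assumed forcing

for i in range(6):
    resid = F(T) - R1          # how far we are from balance
    dT = -resid / c            # linear solve with the frozen coefficient
    T += dT
    print(i, round(T, 4), round(resid, 5))
# T converges to the root of F(T) = R1 even though c is never updated;
# convergence is merely a little slower than a full Newton step.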

So non-linearity is not a show-stopper; it just takes a little longer. This also allows you to work out a more complicated version of F, with, say, latitude variation. You can still use the simpler global feedback coefficients, so the extra trouble is only in the evaluation of F. The penalty will again be slower convergence, and it may even fail. But it gives a way to progress.



Thursday, May 3, 2018

April NCEP/NCAR global surface anomaly up by 0.046°C from March

In the Moyhu NCEP/NCAR index, the monthly reanalysis anomaly average rose from 0.331°C in March to 0.377°C in April, 2018, mainly due to a spike at the end of the month. It's the same rise and pattern as last month. The rises are not huge, but have been consistent since the low point in January, so that now April is the warmest month since May last year. This seems consistent with the fading of a marginal La Niña.

The big feature was cold in North America, except for the Pacific coast and Rockies. Much of Europe was warm, as was Australia. There was a lot of (relative) warmth in Antarctica, but the Arctic was patchy. Interactive map here.

The BoM says that ENSO is neutral, and likely to stay so for a few months.


Wednesday, April 18, 2018

GISS March global up 0.1°C from February.

GISS rose 0.1°C. The March anomaly average was 0.89°C, up from 0.79°C in February (GISS report here). That is a greater rise than TempLS mesh, which rose by 0.04°C, as did the NCEP/NCAR index. But GISS did not rise the previous month, so the change over two months is about the same. March 2018 is about the same as March 2015, but below 2016 and 2017.

The overall pattern was similar to that in TempLS. A cold band across N Eurasia, and a warm band below across mid-latitudes. Warm in N Canada and Alaska, but cool around the Great Lakes. As with last month, both show an interesting pattern of mostly warm patches in the roaring Forties.

As usual here, I will compare the GISS and previous TempLS plots below the jump.

Saturday, April 7, 2018

March global surface TempLS up 0.021 °C from February.

The TempLS mesh anomaly (1961-90 base) rose a little, from 0.683°C in February to 0.704°C in March. This is similar to the 0.046°C rise in the NCEP/NCAR index, while the satellite TLT indices rose by a similar amount (UAH 0.04°C).

There were two major bands of weather, one cold, one warm. The warm belt spread from N China to the Sahara, being very warm from Mongolia to Egypt. The cold band went from N Siberia to Britain, being very cold in NW Russia. Both poles were moderately warm. For the Arctic, this is a big reduction since last month, so indices like NOAA and HADCRUT might rise more than GISS. TempLS grid, which also undercounts poles, rose 0.07°C.

Another noticeable pattern, similar to last month, was a band of SST warmth extending right around the 35-45°S latitudes.

Here is the temperature map. As always, there is a more detailed active sphere map here.



Tuesday, April 3, 2018


March NCEP/NCAR global surface anomaly up by 0.046°C from February

In the Moyhu NCEP/NCAR index, the monthly reanalysis anomaly average rose from 0.285°C in February to 0.331°C in March, 2018, mainly due to a spike at the end of the month.

Unusually, the Arctic was mostly cool. Cold in N Russia, extending through Europe to Spain. To the south of that cold, a warm band from China to the Sahara, which was probably responsible for the net warmth. US was patchy, but more cool than warm. Interactive map here.

On prospects, the BoM says that ENSO is neutral, with neutral prospects. Currently SOI looks Nina-ish, but BoM says that is due to cyclones and will pass.


Friday, March 16, 2018

GISS February global temperature unchanged from January.

GISS was unchanged; the February anomaly average was 0.78°C, the same as January (GISS report here). That differs from TempLS mesh, which rose by 0.06°C, as did the NCEP/NCAR index.

The overall pattern was similar to that in TempLS. Warm in the Arctic (very) and Siberia, Eastern USA, and also a band from Nigeria through to W India, being warmest around the E Mediterranean. There was a band of cold in Canada below the Arctic, extending into the US upper mid-west, and in Europe. Both show an interesting pattern of mostly warm patches in the roaring Forties.

As usual here, I will compare the GISS and previous TempLS plots below the jump.

Tuesday, March 13, 2018

Buffers and ocean acidification.

This is an ongoing topic of blog discussion - recently here. I have written a few posts about it (eg here), and there is an interactive gadget for seawater buffering here. One of my themes is to reduce the emphasis on pH. The argument is that H⁺ is present in very small quantity; the buffering inhibits change, and so it is not a significant reagent. Because of its sparsity, it was until recently hard to measure. So both for measurement and conceptually, it is better to concentrate on the main reagents. These are CO₂, bicarbonate HCO₃⁻ and carbonate CO₃⁻². Carbonate is also involved in a solubility equilibrium with CaCO₃.

There is resistance to defocussing on H⁺, based on older notions that it is the basis of acidity. But for 95 years we have had the concept of Lewis acidity, in which a proton is just one of many entities that can accept an electron pair. CO₂ is another such Lewis acid. And it makes possible the description of the overall reaction of CO₂ absorption

CO₃⁻² + CO₂ + H₂O ⇌ 2HCO₃⁻

as a Lewis acid/base reaction, in which carbonate donates an electron pair to CO₂. I propound that, but meet resistance from people who think Lewis acidity is an exotic modern concept.

I've realised now that that isn't necessary, because the notion of buffering isn't tied to any notion of acidity. So I can set up buffer equations just involving those three major reagents.

pH Buffer

A buffer is normally described as an equilibrium

A + H ⇌ HA

where HA is a weak acid, A the conjugate base and H a proton. I have dropped the charge markings. The system operates as a buffer as long as substantial concentrations of both A and HA are present. I'll denote concentrations as h, a and ha.

The maths of the buffer system comes from the equations

h*a/ha = K    (M)
ha + h = cH    (H)
ha + a = cA    (A)
Eq (M) is the Law of Mass Action. Eq (H) is mass balance of H, and (A) of A. As the reactions of the equilibrium proceed, the numbers on the right are invariant within the reactions. The equilibrium will shift if one of them changes from outside effect.

The equations are a mixture of multiplicative and linear, and in the buffer calculator I used a Newton-Raphson method to solve the coupled systems. But for one buffer there is a simple way which illustrates the buffering principle. The iteration, for given K, cH and cA, starting with h=0, goes:
1. solve (H) and (A) for a and ha
2. solve (M) for h, and repeat (if really necessary)

Under buffer conditions K is small, and so is h, and so changing h will not make much relative change to ha and a, which in turn will not much affect the next iterate of h that emerges from solving (M). The process converges quickly and ensures h stays small. The buffer is perturbed by changing cH or cA, either by adding reagents or by perturbing other equilibria in which the reagents are involved. Eq (M) ensures that h not only remains small, but is fixed by the slowly changing ratio of the major species.

Here is a worked example. A litre of 1M A, 1M HA, pKa=8 (so pH=8).
Add 0.1 mol of HCl (enough to take pure water to pH 1).
Then, ignoring volume change:
ha = 1.1 (from eq (H), total H, with h≈0) and ha + a = 2 (eq (A)), so a = 0.9.
Then from (M), h = 1e-8*1.1/0.9 = 1.222e-8.
On iteration, the corrections to (H) and (A) would be negligible. The pH has gone from 8 to 7.91.
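A quick check of that example by the iteration above (a Python sketch, for illustration):

# A minimal check of the worked example (1 M A, 1 M HA, pKa = 8, then 0.1 mol HCl per litre).
import math

K, cH, cA = 1e-8, 1.1, 2.0    # mass-action constant and the two conserved totals
h = 0.0
for _ in range(3):
    ha = cH - h               # step 1: solve (H) and (A) with the current h
    a = cA - ha
    h = K * ha / a            # step 2: solve (M) for h, and repeat
print(h, -math.log10(h))      # h ~ 1.22e-8, i.e. pH ~ 7.91, as in the example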

Now there is no mention of any kind of acidity in this maths. The only requirement is that there is a ternary equilibrium in which one component, H, is in much lower concentration than the others. H could be anything. So I didn't need to talk about Lewis acids (it helps understanding, but not the buffer maths).

Bjerrum plot

The classic way of graphing buffer relations is with a Bjerrum plot. This takes advantage of the fact that you can divide a and ha by cA, and eq (M) is not changed. Eq (H) would be, but you can let it go if h is specified as the x-axis variable. Then (M) is solved to show a/cA and ha/cA (which add to 1) as functions of h, or usually, -log10(h). Actually, Bjerrum plots are really only interesting for coupled equilibria. Here is a Wiki example:

Generalised buffer - sea water

Sea water buffering is complicated, in normal description, by the interaction of two pH buffers (numbers from Zeebe):
HCO₃⁻ + H⁺ ⇌ CO₂ + H₂O    K1: pKa = 5.94
CO₃⁻² + H⁺ ⇌ HCO₃⁻    K2: pKa = 9.13

The pKa for a H,A,HA buffer is the pH at which ha=a. K1, K2 are the equilibrium constants, as in Eq (M). So it is a complicated calculation. But the two can be combined, eliminating the sparse component H⁺, as before

CO₃⁻² + CO₂ + H₂O ⇌ 2HCO₃⁻

Now we still have an essentially ternary equilibrium, since the concentration of water does not change. And [CO₂] is still small. It is essentially a buffer equation, but buffering [CO₂]. The equations now are, with A=CO₃⁻², HA=HCO₃⁻ and H=CO₂:

h*a/ha² = K    (M')
ha + a + h = cC    (C)
ha + 2*a = cE    (E)


The additive equations are different; I've renamed them as (C) (total carbon, or dissolved inorganic carbon, cC = DIC) and (E) (cE = total charge, or total alkalinity TA). K can be derived from the component buffers above, K = K2/K1, so -log10(K) = 9.13 - 5.94 = 3.19.
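For illustration, the combined system can be solved by the same sparse-species iteration as before. This sketch (not the buffer calculator itself) takes assumed DIC and TA values close to the standard ones and recovers the three concentrations.

# A minimal sketch of solving (M'), (C), (E) for the carbonate species, given DIC and TA.
# Concentrations in micromol/kg; cC and cE are assumed, roughly the standard values.
K = 6.46e-4                  # = K2/K1, from the pKa values above
cC, cE = 2040.0, 2290.0      # assumed DIC and total alkalinity

h = 0.0                      # [CO2], the sparse species
for _ in range(10):
    ha = 2 * (cC - h) - cE   # from (C) and (E) with h held fixed: [HCO3-]
    a = cE - cC + h          #                                      [CO3--]
    h = K * ha * ha / a      # from (M')
print(round(h, 2), round(ha, 1), round(a, 1))   # about 7.9, 1774, 258 - close to a=260, ha=1770 used below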


Summarising for the moment:
  • I have replaced two coupled acid/base buffers with a single equilibrium with buffering properties, eliminating H⁺.
  • The components are the main carbonate reagents, which we can solve for directly.
  • The additive equations conserve the measurement parameters DIC and TA.
  • CO₂ replaces H⁺ as the sparse variable, and also the measure of (Lewis) acidity.

Bjerrum plot for generalised buffer

This uses the same idea of choosing an x-axis variable, and using the equation that results from eliminating it from the additive equations as the constraint, scaling with respect to its rhs. The x-axis here uses the scarce species H = CO₂, and eq (E) for total alkalinity is suitable for normalising, since cE = TA does not change as h varies. So the new plot variables are
  • x = -log10(h/cE)
  • y = 2*a/cE
  • y'=1-y=ha/cE
Here is the plot, using standard concentrations a=260, ha=1770 μM, K=K2/K1=6.46e-04, so TA=2290 μM



For the simple pH buffer, the curves would be tanh functions; here they are similar but not symmetric. More acid solutions are to the left; the green line represents equilibrium h for those conditions; adding CO₂ does not change the normalising TA and moves the green line to the left.


Perturbing an equilibrium by forcing concentration

Again a natural iteration can be used for the equations, based on the small component. However, in the real OA problem, we don't add a finite amount of reagent, but force a change in the buffered quantity [CO₂] by exchange with the air. Then the buffering effect works in reverse; a small change forces big changes elsewhere.

Suppose we have a pond of seawater, with standard concentrations a=260, ha=1770, h=[CO₂]=10 μM. Suppose the pCO₂ in air rises by fraction f. We don't actually need to know what pCO₂ is, just use Henry's law to say [CO₂] will increase in the same ratio. We can't use eq (C), because the change in cC is unknown, but eq (E) says Δha = -2Δa. Letting x be the fractional change in a, and m = 2*a/ha = 0.294, the fractional change in ha is -m*x, so from (M') we have
(1+f)*(1+x)/(1-m*x)² = 1    (ratio change)
or, collecting terms to first order on the left,
f + x*(1+f+2*m) = (m*x)²

We could solve this as a quadratic, but it is instructive to iterate, solving
x ← ((m*x)² - f)/(1 + f + 2*m), starting with x = 0.
With f=0.1 (10%) the iterates are -0.0592, -0.0591, -0.0591.

The first term is good enough. The key result is that a 10% change in atmospheric CO₂ makes, at equilibrium, about a 6% change in [CO₃⁻²], even though its concentration remains very small. Note that there is no reference to pH in this calculation. pH can be recovered from eq (M).
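As a cross-check (illustrative only), the quadratic can be solved exactly and compared with the first iterate:

# A quick check of the perturbation result with the same numbers.
import math

m, f = 0.294, 0.1                         # m = 2*[CO3--]/[HCO3-], f = fractional pCO2 rise
# quadratic: (m*x)^2 - (1+f+2*m)*x - f = 0; take the small negative root
b = 1 + f + 2 * m
x_exact = (b - math.sqrt(b * b + 4 * m * m * f)) / (2 * m * m)
x_first = -f / b
print(round(x_exact, 4), round(x_first, 4))   # both about -0.059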

Repeating that main result: if m = 2[CO₃⁻²]/[HCO₃⁻] and the fractional change in gas phase pCO₂ is f, then the fractional change x in [CO₃⁻²] is given to very good approximation by
x = -f/(1+f+2*m)
Estimates of the ratio [CO₃⁻²]/[HCO₃⁻] vary, but are usually around 0.1 to 0.15, so m is around 0.2 to 0.3. So the fractional reduction in [CO₃⁻²] is comparable to the fractional increase in pCO₂.

Of course that is an equilibrium calculation, and the mixing time of the whole ocean is very long, so it could only apply to surface layers. It also omits the key question of CaCO₃ dissolution, which could restore [CO₃⁻²]. That dissolution is seen as the penalty, and this quantifies it.

Summarising again the virtues of this approach:
  • It eliminates H⁺ and deals with the reagents directly
  • The natural measures are the Dissolved Inorganic Carbon (DIC) and Total Alkalinity, both easily lab-measured and with data available
  • It gives a useful approximation to the natural forcing condition, which is change in pCO₂ in the air
  • The concept is that [CO₂] is buffered rather than pH. That leads directly to the consequence that trying to force a change in [CO₂] passes directly to a change in carbonate instead.

Wednesday, March 7, 2018

February global surface TempLS up 0.05 °C from January.

The TempLS mesh global anomaly (1961-90 base) rose from 0.603°C in January to 0.655°C in February. This is almost the same as the 0.054°C rise in the NCEP/NCAR index, while the satellite TLT indices fell by a similar amount.

The main warmth was in the Arctic and Siberia, Eastern USA, and also a band from Nigeria through to W India, being warmest around the E Mediterranean. There was a band of cold in Canada below the Arctic, extending into the US upper mid-west, and in Europe.

Because of the warmth in the Arctic, it is likely that indices will diverge according to the extent to which they represent it. TempLS grid, which is weaker there, showed a drop of about 0.04°C. As often, I would expect GISS to show a rise like TempLS mesh, while NOAA and HADCRUT may fall.

Here is the temperature map. As always, there is a more detailed active sphere map here.


Saturday, March 3, 2018

February NCEP/NCAR global anomaly up by 0.054°C from January

In the Moyhu NCEP/NCAR index, the monthly reanalysis anomaly average rose from 0.231°C in January to 0.285°C in February, 2018. The big dip in January continued in to the earlier part of February, then a considerable peak, and a modest dip and recovery at the end.

It was cold in the northern prairies and N Canada, but surprisingly, warm on average in the E USA. Cold in Europe, extending through Morocco to Senegal. As has been much noted, very warm in the Arctic, and warm in a band from the E Mediterranean to N India. Mixed in Antarctica, probably on average cool. Interactive map here.

I expect that Arctic-sensitive indices like GISS, Cowtan and Way, and TempLS mesh will show more warming than the others. Here is a temperature map looking down on the N Pole


Friday, February 16, 2018

GISS January global down 0.11°C from December.

GISS cooled, going from 0.89°C in December to 0.78°C in January (GISS report here). That is a smaller drop than TempLS mesh; I originally reported a 0.2°C fall, but later data changed that to a 0.16°C fall. GISS says that January 2018 was the fifth warmest in the record, and was cooler due to La Niña.
Update. I wrote this post based on the GISS report, as the data file was not posted for some time. It is now there, and I see that the December average has been adjusted up from 0.89°C to 0.91°C. That means the drop Dec-Jan is now 0.13°C. This brings it close to the TempLS change of 0.16°C. TempLS also increased in December due to later data, so in both months GISS and TempLS now track well.

The overall pattern was similar to that in TempLS. Cool in east N America and cold in far East Siberia. Very warm in the west of Russia, and in the Arctic. A cool La Niña-ish plume, but warm in the Tasman Sea and nearby land. The W US was warm, more so than TempLS showed. Also the W Russia hotspot extended well into central Europe.

As usual here, I will compare the GISS and previous TempLS plots below the jump.

Wednesday, February 7, 2018

January global surface tempLS down 0.2 °C from December.

TempLS mesh anomaly (1961-90 base) fell from 0.762°C in December to 0.565°C in January. This compares with the fall of 0.097°C in the NCEP/NCAR index, and a similar fall (0.16°C) in the RSS LT satellite index.
According to the reanalysis, the cause was a deep dip late in the month, which seems to be extending into February.

The main cool areas were central Asia into Mongolia, and eastern N America. Also central Africa. Warm areas were NW Russia, extending into Europe, and N Canada/Alaska. Also a band right across the temperate SH, but still a very warm Tasman Sea and surrounds.

Here is the temperature map:


Tuesday, February 6, 2018

Weirdness from Armstrong/Green and conservative media.

This was new to me when it popped up at WUWT. But apparently it has already been around for a week or so, at Fox News and The Australian. Economists Scott Armstrong and Kesten Green, who regard themselves (pompously) as authorities on forecasting, made a splash ten years ago when they challenged Al Gore to a bet that their forecast of global temperature, based on no change, would be better over the next ten years than his. They even set up a website, to track his response, and presumably track the bet. And they got a good run in the conservative media at the time.

Gore never showed any interest - he just said that he doesn't bet. So it was empty noise. However, for some strange reason, there is now a volley of fantasy articles, attributing a bet to Gore (in which he had no say) and declaring him the loser. The terms of the bet are exceedingly arcane. Since they have to make up some warming forecast, they picked an "IPCC" forecast of 0.3°C warming for the decade.

The first thing to say is that the IPCC made no such forecast. They did, in the AR4 SPM that came out at about that time, say this:
For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios.
And it was for surface temperature, not the troposphere measure that Armstrong/Green decided on.

But that itself turns out oddly. They nominate UAH as a measure, which actually has a slightly higher trend over the period than surface measures. Here is the plot

And they are betting with zero trend (blue). The actual was more than double the "IPCC trend". They lose by a mile. But in the WUWT article, at least, they say the OLS trend for the period is 1.53°C/Century. But they prefer a measure they make up called Least Absolute Deviation (LAD).

Well, they would, wouldn't they, because they say that comes out to 1.14°C/Century, and on that basis they declare themselves the winner (since they have pinned Gore with the "IPCC" 3°C/century). Of course it is 2, not 3, so even on that basis they lose. But since they seem to have miscalculated the OLS trend by a factor of 3, I have no faith in their LAD calculation.

On the plot, I've marked a line with slope 1.14°C/century in green, and the Armstrong/Green zero forecast in blue. They all, as fitted lines should, pass through the mean of x and y. See if you think the "LAD" line (1.14) is a better fit. Or whether the no trend forecast is the winner.
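For anyone wanting to try this themselves, here is a minimal sketch (with synthetic data, not the actual UAH series) of fitting both an OLS and a least-absolute-deviation trend to monthly anomalies, reported in °C/century:

# A minimal sketch of OLS vs LAD trend fitting; the data here is synthetic and illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.arange(120) / 12.0                              # ten years of monthly time, in years
y = 0.0153 * t + 0.1 * rng.standard_normal(t.size)    # assumed trend ~1.53 °C/century plus noise

slope_ols = np.polyfit(t, y, 1)[0]                     # OLS: ordinary least squares slope

def sad(p):                                            # LAD: sum of absolute deviations
    a, b = p
    return np.abs(y - (a + b * t)).sum()
a_lad, slope_lad = minimize(sad, x0=[0.0, slope_ols], method="Nelder-Mead").x

print("OLS trend: %.2f °C/century" % (slope_ols * 100))
print("LAD trend: %.2f °C/century" % (slope_lad * 100))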

Scott Armstrong is a professor of marketing. Kesten Green is a lecturer in Commerce. Neither seems to know much about what the IPCC actually says. And they seem pretty weak in statistics.









Saturday, February 3, 2018

January NCEP/NCAR global anomaly down by 0.097°C from December

In the Moyhu NCEP/NCAR index, the monthly reanalysis anomaly average dropped from 0.328°C in December to 0.231°C in January, 2018. There was a big dip late in the month; there are some signs that it is ending. The month was close to November and June, 2017, but a little lower than both. You have to go back to July 2015 (0.164°C) to find a colder month.

As we have heard quite a lot, it was cold in eastern N America (but warm in the west). Also central Asia and the Sahara/Sahel. Warm in the Arctic and a lot of Europe, and still warm sea around New Zealand. Cool in the tropical E Pacific (ENSO).


Housekeeping post.

A couple of housekeeping issues. One is an apparent problem with Blogger. I have posted the January results for NCEP/NCAR, but they don't show on the actual blog page, or in the archive. The post does have a place on the web here, but you can't get to it in the normal way, and it hasn't generated the usual RSS notifications. It may be that Blogger is working on this, because for some time I could see and edit the post on the dashboard where all posts are listed, but now it has disappeared.

I'm posting this partly as an experiment, to see if it suffers the same fate.
Update - it worked. I'll try reposting the earlier post.


The other happening, which I have been spending a bit of time on recently, is the HiRes SST page, and its consequent movie page. This stopped updating at the end of 2017. End-of-year glitches in my automated system are not uncommon, but this turned out to be at the source. NOAA has reorganised its system, using NetCDF 4 and different directories. They also have a new data set which I'll look at. But anyway, the program is over five years old now, and a bit slow, so I tried to improve it. That is never smooth, but I think it is OK now.

Friday, January 26, 2018

A new gallery of interactive graphics at Moyhu.

There was an old page, linked to right, on interactive graphics at Moyhu. I have replaced it with a new gallery, which is much more comprehensive. Maybe too comprehensive - I have tried to include every instance to end 2017. It is a tableau of images, and I have placed at the top what I think is a representative selection. These are marked with red borders. Each cell has a passive image, a date, a link which will take you to the original post or page containing the graphic, and a button imploring you to Try it!. This takes you to a rearranged version of the original post with the graphic at the head; it is an active version of what you see in the image.

The radio buttons on the left allow you to choose categories, generally based on the technology used. They are explained below the tableau, and I will expand on them here.

In this post, I want to review the overall progress of interactive graphics here. Interactivity requires at least the use of JavaScript, so that pressing buttons, dragging with the mouse etc will modify what you see. This can be augmented with two main technologies - the drawing canvas, which was formally introduced with HTML 5 in 2014, but usable earlier, and WebGL, which is based on the old Silicon Graphics GL of about thirty years ago, which became OpenGL and is now built into browsers as WebGL.

It is important sometimes to remember that Javascript activity is embedded in the HTML scripts that a browser downloads, and is entirely implemented on the user's machine. There is no Moyhu server support. This means all the data is downloaded too. There are some security restrictions on this privilege, which adds to the interest of Javascript programming. An underlying technology is the programming language R. I typically sort out the data for plotting or whatever at this level, and the output is a Javascript file, often defining masses of data. Javascript itself does not have i/o; you have to express input as code.

Much of my experimenting with graphics has been motivated by a wish to present data plotted on a sphere, to avoid projection distortion. JS makes it possible to do this and allow the user to view the sphere from various directions.

I have recently revamped the Moyhu topic index, and some of the topics give a more comprehensive list of links.

Javascript and interactivity

My first active graphic was part of a discussion on how to make spaghetti graphs more readable. My first idea was an animated GIF, which overlaid black outlines of each strand in sequence. But a reader, TheFordPrefect, recommended Javascript and sent me an example made with Dreamweaver. So I learnt some JS, and made a plot of proxy reconstructions here. You could roll the cursor over a name in the legend, and a black overlay of that strand would appear. This is the basic idea that I have set as a separate category - active viewers - described further below.

A common use of JS and buttons was simply to compress information. A lot of images could be accessed at one location in the page, with a choosing mechanism. Alternatively a huge number of links can be sorted into manageable pages with button clicks.

JS Globe

That set me up for the first presentation of a globe plot, as here. In R I made 2D projections from a number of viewpoints of a shaded plot of some variable, usually temperature. The viewpoints were usually from either the 8 corners of a cube containing the sphere, or the 6 face centres. There was a panel of squares you could click to switch views. A lot of my JS graphics involves locating a click point and responding in some way. Dragging is a variant of this.

Google Earth and KML/KMZ

This was rather a dead end, but I did quite a lot of things. KML is a control language for Google Earth. Here is a typical application. I haven't shown the graphics here because, well, they aren't really mine. GE is good for detailed location, which is relevant to individual stations, but not to temperature plots etc. I found that the control capabilities, based on folders, were rather limited, so I switched to Google Maps, which offers JS control.

Google Maps

The general idea and working environment is described here. GM provides an app which allows you to embed a GM in your page, but gives ample facilities to control it with JS. Again the main use is for showing land stations, where the lat/lon are fairly precisely known. I typically select subsets of the stations colored according to some criterion. Clicking on them brings up information including the name and often links and history. The selection table is on the right, and can allow quite complex logic. Recent cases show the total in each color. There is, for example, a maintained page which is described here.

Active viewers

This is just a subset of JS-active graphics, in which a spaghetti plot is shown with a large range of strands, usually proxies. More use of JS is made in that when a strand is marked, a table of information is shown, and a marker shows where it is on a map. A typical example is here.

Trend viewers

This is another JS subset. A colorful triangle (prepared in R) is shown, in which each dot represents the trend over some period of months. There is a coupled time series graph on the right, with two markers representing the beginning and end of the trend period. You can choose a period either by clicking on the triangle or by moving the markers directly. For each chosen period, numerical data is displayed. Buttons allow you to choose the overall display period, the dataset (from one of many), and possibly a different kind of display which says something about the significance or confidence intervals. There is a maintained page here.
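The number behind each dot is just an ordinary least-squares trend over the chosen period. A minimal sketch of that calculation for monthly data (the function and variable names are illustrative):

// Least-squares trend of monthly anomalies between two chosen month indices.
function trendPerDecade(anom, start, end) {      // anom: array of monthly anomaly values
  var n = end - start + 1, sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (var i = 0; i < n; i++) {
    var x = i, y = anom[start + i];
    sx += x; sy += y; sxx += x * x; sxy += x * y;
  }
  var slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // trend per month
  return slope * 120;                                     // per decade (120 months)
}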

XMLHTTPrequest

Normally with JS you have to load the data in advance, which is a nuisance if you want to accommodate a range of user wishes. XMLHTTPrequest is a workaround that lets you download further data files when the user asks for them. There are security restrictions, but it vastly increases the amount of data that can be supplied. It isn't itself a graphics technology, but it enables some of my larger apps.
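A minimal sketch of fetching an extra block of data on demand (the URL, the JSON payload and the drawMap callback are all illustrative; the same pattern works whatever the file format):

// Fetch another block of data only when the user asks for it.
function loadData(url, onReady) {
  var req = new XMLHttpRequest();
  req.open("GET", url);
  req.onload = function() {
    if (req.status === 200) onReady(JSON.parse(req.responseText));
  };
  req.send();
}
// e.g. loadData("sst_2015.json", drawMap);   // drawMap is a hypothetical plotting function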

HTML 5

HTML 5, when it came out, included a lot of new elements, but the one particularly useful to me was the canvas element. All the graphics described so far had to be pre-drawn using R and supplied as images. With the canvas, we can draw from numerical data, in response to user input. Further, there is a clunky but not bad capability for shading triangles in response to vertex values.
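A minimal sketch of drawing a time series straight from data on a canvas (the element id and vertical scale are made up; a real plot also needs axes and labels):

// Draw a polyline of monthly anomalies on a <canvas id="plot">.
var cvs = document.getElementById("plot");
var ctx = cvs.getContext("2d");
function drawSeries(anom) {                    // anom: array of monthly anomaly values
  ctx.clearRect(0, 0, cvs.width, cvs.height);
  ctx.beginPath();
  for (var i = 0; i < anom.length; i++) {
    var x = i * cvs.width / (anom.length - 1);
    var y = cvs.height / 2 - anom[i] * 100;    // crude scale: 1°C = 100 pixels
    if (i === 0) ctx.moveTo(x, y); else ctx.lineTo(x, y);
  }
  ctx.stroke();
}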

One liberation is that graphs no longer have to have a fixed range of x or y. The user can zoom or extend, if the numerical information is there. My first big use of this was in the climate plotter. This is still kept as a page, although the information update has been spotty. You can choose from a large number of annual sets of climate data. Combinations can be displayed and regressions (including polynomial) performed. Various kinds of arithmetic can be done. But most importantly, the axes are under user control. You can translate curves independently, and stretch in the x or y directions (with the axes adapting).

A similar application is superimposing data on an image. I rather frequently review the progress of Hansen's 1988 predictions, eg here in 2016. This draws an image from his paper onto the canvas, and the user can choose various datasets to superimpose, and even vary offsets if desired.

Another liberation was in viewing Earth plots. The views no longer need to be pre-calculated; the canvas can show shading in response to arbitrary control (there is maths involved). An early version is here. The use of shading improved over time.
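The maths is mostly orthographic projection: convert each lat/lon to a point on the unit sphere, rotate it to the chosen view, and draw only the front-facing points. A minimal sketch, for a viewpoint on the equator at longitude lon0 (a general view needs a second rotation for latitude):

// Project a (lat, lon) point, in degrees, onto the screen for a viewer above (0, lon0).
function project(latDeg, lonDeg, lon0Deg) {
  var d = Math.PI / 180;
  var lat = latDeg * d, lon = (lonDeg - lon0Deg) * d;
  var x = Math.cos(lat) * Math.sin(lon);   // screen x
  var y = Math.sin(lat);                   // screen y
  var z = Math.cos(lat) * Math.cos(lon);   // component towards the viewer
  return z > 0 ? { x: x, y: y } : null;    // null: on the far side, not drawn
}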

Drag plots

This is an extension, using HTML 5, of plotting with variable axes. You can translate just by dragging, or, by dragging just behind each axis, you can shrink or expand it. And there can be the usual selection facilities etc. I maintain such a graph for surface indices in the latest data page.
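The pan part just tracks the mouse between mousedown and mouseup and shifts the plotted ranges accordingly. A minimal sketch (the canvas id, axis ranges and redraw function are placeholders, not the actual page's code):

// Drag anywhere on the canvas to translate the axes.
var cvs = document.getElementById("plot");                 // hypothetical canvas
var xmin = 1980, xmax = 2018, ymin = -0.5, ymax = 1.0;     // current axis ranges (example values)
var xPerPixel = (xmax - xmin) / cvs.width;
var yPerPixel = (ymax - ymin) / cvs.height;
var dragging = false, lastX = 0, lastY = 0;
cvs.onmousedown = function(e) { dragging = true; lastX = e.clientX; lastY = e.clientY; };
cvs.onmouseup   = function()  { dragging = false; };
cvs.onmousemove = function(e) {
  if (!dragging) return;
  xmin -= (e.clientX - lastX) * xPerPixel;    // shift the x range by the mouse movement
  xmax -= (e.clientX - lastX) * xPerPixel;
  ymin += (e.clientY - lastY) * yPerPixel;    // screen y runs downwards, so the sign flips
  ymax += (e.clientY - lastY) * yPerPixel;
  lastX = e.clientX; lastY = e.clientY;
  redraw();                                   // assumed function that replots with the new ranges
};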

WebGL

Most of my recent graphics has been done with WebGL, the origins of which are described above. WebGL is a staple of gaming, so ample resources are provided. For fast-moving graphics on screen, it provides access to the GPU, for highly parallel operation. It is fully 3D, so it keeps track of what is obscuring what. The shading is excellent. It has elaborate capabilities for lighting and perspective, but I don't use those much. And of course it is under full JS and mouse control. In my applications there is a fixed centre point about which everything can be revolved, and a fixed viewpoint at infinity.

A great thing about WebGL is that it deals in objects. In HTML5, you can't unravel a 2D canvas, and it is hard to selectively erase. But in WebGL you can just ask for an object to disappear, or move it, and you see what is underneath.

My first effort is here. But it got better. One that is still one of my favourites is the maintained high resolution SST page. At the maximum resolution of 1/4°, that is a lot of triangles (2,073,600). But WebGL handles it fairly well, and it really can tell you more if you zoom. The app draws together a lot of technology: interactive downloads are essential, and since there are about 25 years of data, much of it daily, just organising it is a stretch. Like all maintained pages, it downloads and processes data (in R) every night.

But the recent outpouring of WebGL graphics which dominates the gallery is due to the Moyhu WebGL facility. This hides all the parallel programming etc and just takes numeric data as input - usually a vector of nodes, linkages (for triangles etc), and values which will become shading. Mass production. A non-spherical example is the Lorenz butterfly.
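I haven't reproduced the facility's actual calling conventions here, but the kind of numeric input it works from looks roughly like this (names and layout are purely illustrative):

// Illustrative only: the sort of input a mesh-shading facility needs.
var nodes = [                        // lon/lat (or x,y,z) of each mesh vertex
  [0, 0], [1, 0], [0, 1], [1, 1]
];
var links = [                        // triangles, as triples of node indices
  [0, 1, 2], [1, 3, 2]
];
var values = [0.2, 0.5, -0.1, 0.8];  // one value per node, mapped to a color for shading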

Animation

This isn't a single technology; in fact I started out with animated GIFs. I now tend to use it where video compression is available, as with MPEG. I typically use FFmpeg with R to string together sequences of PNG or JPEG images. The classic example is a maintained page which is an offshoot of the HiRes SST page. It shows daily or 4-day sequences for regions like the ENSO Pacific plumes or the poles (where it is good for tracking sea ice). Another interesting experiment was the 2012 hurricane season, where I show the hurricanes moving against an SST background. In many cases it clearly shows the tracks of cooling.

An aspiration is to provide a 3D movie, so you can rotate the globe as it goes through a temperature (or other) sequence. The problem is that you lose video compression, so it is hard to maintain speed. Here I went back to the JS globe idea above, using stored projections of 6-hour relative humidity plots. It's interesting, but hard work.

Another kind of animation is just zipping through WebGL plots. It's fast enough. Here is a movie display of the various spherical harmonics which I use a lot for Fourier-like representation on the sphere, with various controls. Fun version here.







Wednesday, January 24, 2018

Satellite temperatures are adjusted much more than surface.

I continually come across claims that surface temperatures should be ignored in favour of satellite troposphere temperatures, because the surface temperatures are adjusted. It's an odd argument to conduct, because while at least there is a recognised surface temperature reading that can be adjusted, satellite temperatures are the product of a long and complex calculation sequence, in the course of which many judgement calls are made. Here, for example, is Roy Spencer's (+Christy + Braswell) explanation of the changes that were made in going to UAH version 6. He describes the need for it thus:
One might ask, Why do the satellite data have to be adjusted at all? If we had satellite instruments that (1) had rock-stable calibration, (2) lasted for many decades without any channel failures, and (3) were carried on satellites whose orbits did not change over time, then the satellite data could be processed without adjustment. But none of these things are true.
...
After 25 years of producing the UAH datasets, the reasons for reprocessing are many. For example, years ago we could use certain AMSU-carrying satellites which minimized the effect of diurnal drift, which we did not explicitly correct for. That is no longer possible, and an explicit correction for diurnal drift is now necessary. The correction for diurnal drift is difficult to do well, and we have been committed to it being empirically–based, partly to provide an alternative to the RSS satellite dataset which uses a climate model for the diurnal drift adjustment.
...
So instead of continually making small adjustments, as in the surface dataset, they produce new versions in which these decisions are revisited and often radically revised. The changes are much larger in overall effect than the changes to individual surface station averages.

Two years ago, I wrote a post about the changes that happened when Version 5.6 of the UAH index went to version 6. This decreased trends a lot, and so was popular with contrarians. I was prompted to write by Roy Spencer's claim:
"Of course, everyone has their opinions regarding how good the thermometer temperature trends are, with periodic adjustments that almost always make the present warmer or the past colder."
So I compared the change in TLT (lower troposphere) in going from V5.6 to V6.0 with the cumulative effect of changes in GISS, using archived time series from 2005 and 2011 against the then-current 2015 GISS. GISS was far more stable than UAH, even though its changes spanned a much longer period.

Meanwhile, RSS also updated their troposphere data, going from V3.3 to V4. RSS had been a favourite of contrarians, because it had a much lower trend than UAH. Roy Spencer noted this, saying:
"But, until the discrepancy [in trend with UAH higher]] is resolved to everyone’s satisfaction, those of you who REALLY REALLY need the global temperature record to show as little warming as possible might want to consider jumping ship, and switch from the UAH to RSS dataset."
They needed little persuasion. Lord Monckton wrote a monthly series at WUWT about the length of the "Pause", which he defined as the maximal period of zero gradient of RSS TLT, starting about 1997. He scorned UAH then, as it was similar to the surface data. But RSS V4 turned that around too, showing much greater trends historically, and severely damaging the "Pause". I commented on some of this here, before any of the new versions.

Lord Monckton did not like it. His tamper tantrum is here. Any change which increases the trend is "tampering". Why going from V3.3 to V4 is tampering, but going from 3.2 to 3.3 or the earlier steps is not, was never explained.

Anyway, I thought it would be worth updating my graphs of Dec 2015 to include the changes to RSS. In fact, the two indices neatly changed places, so that RSS V4 is close to UAH V5.6, and UAH V6 is close to RSS V3.3. So in both cases the change is large.

An amusing sideshow of the (to contrarians) more satisfactory UAH V6 is that surface datasets were being accused of fraud for differing from it - eg NOAA’s Fake SST’s Not Supported By Atmospheric Data. But the reviled discrepancies were not there with V5.6, which was far closer to the surface data than to V6. So was V5.6 also "fake"?

Anyway, here are the plots. I'm using the same old versions of GISS as in the previous post, sourced as described there; they can be retrieved from the Wayback Machine. I convert everything to the same anomaly base, which this time is 1979-2008. I chose that because there isn't quite a 30-year span common to GISS2005 and the satellite sets, but this choice reduces the gap to three years. So I set the other sets to a zero average on this span; then I align GISS2005 to the rebased current GISS over the years that GISS2005 covers.
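A minimal sketch of the rebasing step, for annual data indexed by year (purely illustrative; the real calculation is done in R):

// Shift a series so that its mean over the base period (here 1979-2008) is zero.
function rebase(series, years, baseStart, baseEnd) {    // series and years are parallel arrays
  var sum = 0, n = 0;
  for (var i = 0; i < years.length; i++) {
    if (years[i] >= baseStart && years[i] <= baseEnd) { sum += series[i]; n++; }
  }
  var mean = sum / n;
  return series.map(function(v) { return v - mean; });  // anomalies relative to the base period
}
// e.g. var uahRebased = rebase(uah, uahYears, 1979, 2008);   // uah, uahYears are hypothetical arrays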

First, as before, I just plot the time series. I use reddish colors for RSS versions, bluish for UAH, and greenish for GISS. Because the curves are tangled, there are four different color views of the same plot, which you can access with the buttons below. The text and content are the same for each, but transparency is used so that only one group stands out. Here is the plot:



This plot is good for a general appreciation of the deviations. The GISS variants bunch together, and the upper satellite variants, UAH V5.6 and RSS V4.0, tend to follow them. The other pairing, RSS V3.3 and UAH V6, is the outlier, deviating rather markedly below from about 2008 onwards.

The values relative to each other are easier to see if they are expressed as differences from a common reference, and for this I chose current GISS. In principle any reference would do, but the satellites respond to El Niño with big spikes, and using a satellite set as the reference would invert those into negative spikes for GISS, which would be confusing. So I'm using the same colors, and choice of variants - GISS shows as the zero line:



Next I plot the difference from one version to the next - ie the "adjustment". In each case, it is new minus old. Again you can use the buttons to cycle through different colors.



This shows most clearly what happened in the recent changes. The trend of UAH went way down, and the trend of RSS went way up. These changes dwarf the minor and fairly trend-free changes to GISS. Interestingly, especially for RSS, most of the change happens post-2000.

Of course, GISS has more changes going further back. But satellites do not have an advantage there. Before 1979 they have no data at all.








Tuesday, January 23, 2018

Prospects for 2018.

Each year now when results for the previous year are in, I post some graphs (2017 here), and invite discussion about the coming year, which can be maintained on that thread. Reader Uli follows the prospects for GISS; readers JCH and WHUT remind us about ocean periodicities, and there are many other interests. I don't have strong prognoses myself; I just expect it to get warmer, with variations.

I think 2017 predictions worked out fairly well, mainly because the chief unexpected event, the strong warmth in Feb/March, came early. It created some possibility that 2017 might exceed 2016, but by August that was starting to look unlikely. That left the question of whether it would beat 2015, and that was close in most indices, with those that gave proper weighting to the polar temperatures putting 2017 ahead.

On the prospects for ENSO, Australia's BoM expects a weak La Niña in the short term. NOAA has been silenced by the government shutdown, but the IRI forecasts tend to see this going away fairly soon, with a fair chance of El Niño conditions later in the year.







For the moment, January has already been through two peaks and two dips, but overall, not much different to the last nine months.

So I'll leave this thread open for comments through the year. Thanks, Uli and all.

Friday, January 19, 2018

Graphs of 2017 global temperatures among record years for major indices

A few days ago, I posted some graphs in an updated style which shows 2017, as seen by TempLS mesh, in its place among record years (2017 came second) in a progressive record style. I also showed a detailed graph of the sequence of months from 2014 to 2017, showing how the warming introduced by the El Niño seems to be lasting. I said I would do a similar set of plots for the major indices when they appeared, as I have done in previous years. By now NOAA, HADCRUT and GISS have reported, as well as the satellite indices. So here is the set of progressive record plots. We have so far:
  • GISSlo - Gistemp land/ocean
  • HADCRUT 4 land/ocean
  • NOAAlo - NOAA land/ocean
  • UAH V6.0 - lower troposphere (TLT satellite)
  • RSS V4.0 - lower troposphere (TLT satellite)
  • CRUTEM 4 - land only
  • TempLS mesh
I'll add more as they arrive. You can find more information about the indices, with source links, here. The Glossary may help too. You can flick through the 7 images using the buttons below the plot.

GISS and TempLS had 2017 in second place, as did RSS V4.0. HADCRUT, CRUTEM and NOAAlo put it behind 2015, in third place. UAH V6 had things in a very different order, with 1998 in second place, and 2017 a distant third. The grouping of the surface indices is commonly observed. TempLS mesh and GISS interpolate, giving more (and IMO due) weight to polar regions. NOAA and HADCRUT do so much less. So insofar as the warmth of 2017 was accentuated at the poles, the less interpolated indices tend to miss that.

Here is the set of monthly averages for each of those indices, with as before the annual averages shown as horizontal lines in the appropriate color. Almost all months of 2017 were well above the 2014 average, even though 2014 was a record year in its time.






GISS December global up 0.02°C; 2017 was second warmest.

GISS warmed slightly, going from 0.87°C in November to 0.89°C in December (GISS report here). That is very similar to TempLS mesh; I originally reported no change, but later data pushed that up to a 0.04°C rise. For GISS, that makes 2017 the second warmest year in their record, behind 2016 but ahead of 2015. Their report, with the annual summary too, is here. I showed some aspects of the 2017 annual results in context here, and I'll do that for GISS and other indices in an upcoming post.

The overall pattern was similar to that in TempLS. Cold in eastern N America, the Mediterranean, and far East Siberia. Very warm in most of Russia, and in the Arctic. A cool La Niña-ish plume, but warm in the Tasman Sea.

As usual here, I will compare the GISS and previous TempLS plots below the jump.

Wednesday, January 10, 2018

Review of 2017, heat records, and recent warm years.

Yesterday I posted the December global anomaly (base 1961-90) results for TempLS mesh, and noted that it made 2017 the second warmest year, after 2016. I'd like to put that in a bit more context. For the last three years (eg here) I have posted a progressive plot showing in steps the advance of the hottest year to date. Since 2014, 2015 and 2016 were each the hottest years to date, there was something new to show each year, and the plot showed the rapidity of those rises. This year, with 2017 in second place, it doesn't add new information to that style of plot. So I tried a way of adding information: I superimposed on the steps plot a column plot of each year's temperature. This means that you can follow the max outline, or focus on the columns, which also show how far the years following a record were cooler. It emphasises the warmth of 2017 relative to earlier years. Here is the plot:



The legend shows the color codes for the record years. I'll probably make an active plot of all the indices when they become available. But I was also curious about how 2017 came to be warmer than the near-El Niño year of 2015. So I drew a column plot by month of the last four years, shown by color:
I've also marked each year's average in the appropriate colour. 2017 is almost a mirror image of 2015, and the main contribution to its warmth came from the first three months, a somewhat separate peak from the El Niño. But what is clear is that the apparent level of later 2016 and 2017 is a good deal higher than 2014, a record year in its day. Even the coolest month of 2016/7 (June 2017) was at about the 2014 average.

In my previous post, I reported December 2017 as virtually unchanged from November. Further data has made it a little warmer. In other news, the Australian BoM 2017 climate statement is out; for Australia, 2017 was the third warmest year, after 2013 and 2005.


Tuesday, January 9, 2018

December global surface temperature unchanged; 2017 was second warmest year.

The TempLS mesh anomaly (1961-90 base) was virtually unchanged, going from 0.716°C in November to 0.721°C in December. This compares with the rise of 0.075°C in the NCEP/NCAR index, and a similar rise (0.05°C) in the UAH LT satellite index.

The TempLS average for 2017 was thus 0.757°C, which puts it behind 2016 (0.836°C) and ahead of 2015 (0.729°C), so 2017 was the second warmest year in the record. I expect this will be a common finding, although 2015 is close. I'll post a graph showing the history of records.

The breakdown is interesting. The main cooling effect came from SST, well down on November. The balancing rises came from the Arctic and Siberia. Since TempLS, like GISS, is sensitive to Arctic temperature, indices like NOAA/HADCRUT may well show a fall. Otherwise the map shows those effects, along with much-discussed cold around the Great Lakes region and also W Sahara.

Here is the temperature map: