Thursday, April 14, 2011


McIntyre, Mann, and the Gaspe cedars



Thirteen years later, people are still arguing about Mann's 1998 Nature paper (MBH98) on multiproxy reconstructions (using 112 proxy series). McIntyre and McKitrick (M&M) wrote a paper in Energy and Environment in 2005, and I find myself still arguing about that.



The 2005 paper was a major resource for Congressman Barton's 2006 inquisition of Mann. One of its featured criticisms concerned the Gaspe dataset, number 53, and in particular the fact that Mann had padded the missing data from 1400 to 1403 with the 1404 value. Barton directed Mann and his co-authors to explain this.



One thing not often mentioned now is that, in that paper, M&M actually did what many other critics should have done: they repeated the calculation with the criticism made good, to see what effect it had. This was in their Figure 1. They showed the effect of marking those four Gaspe years as missing, and then the effect of using a centered mean rather than Mann's famous calibration-period mean. They got a surprisingly large difference, which has been much cited in recent days. This post reports on my investigation of that surprise.



Barton got statistician Wegman to report, and his report was heavily based on the M&M 2005 papers. Wegman said of this:

"In MBH98, the first four years of both of these series were extrapolated. The extrapolation has the effect of depressing early 15th century results, and was not disclosed by Mann et al. until a later paper, "

This depressing effect was based on the M&M calculation reported in their Fig 1.



I'll show that the effect of padding those four years was limited to just those four years, and is minor. The effect Wegman describes was instead due to a coding rule by which M&M in effect removed Gaspe not just for the four years in question, but for all years from 1400-1450.



I have previously criticized the use Wegman made of the M&M 2005 GRL code for showing the effect of noise. There, there was a clearly deliberate selection of the top 100, ordered by "hockey stick index", from which the displayed samples were chosen. In the case described here, there is no indication that the amplification of the effect was deliberate. In fact, M&M say they are following a policy of Mann's. However, the effect as presented is misleading, in that the change shown is due to the application of a rule rather than the actual effect of the padding.





M&M Fig 1, seen in perspective



Here is the original M&M05 Fig 1



The top panel shows their MBH emulation, which they say appears to be completely accurate. The second shows the supposed effect of replacing the 1400-1403 years of proxy 53 (Gaspe) with NA, the R marker for missing values. There is a moderate deviation in the years 1400-1450.



The bottom panel shows the effect of changing the offset mean to a centered mean. I'll say more about that in a future post.



I'm rerunning the M&M code, so I'll show that my version does indeed reproduce that plot. I've used a bit of color:





Now I'll plot the same data but superimposed. This emphasises that the Gaspe deviation affects the years 1400-1450.



I've made the panel 3 plot faint, since it isn't the current subject.



For the remaining plots, I'll focus on the years 1400-1450. Here again is the data from panels 1 and 2.



Padding with numbers



Mann used the 1404 value (0.723) to pad, and as Wegman suggests, this was a low point. The range from 1404-1450 was from 0.618 to 1.351. So I tried using the maximum value, and got this:



This compares Mann's padding with the effect of padding with that max. It isn't very different at all, and if you focus on the annual non-smoothed values, only the 1400-1403 values are changed. Much less effect than with NA.
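To make the comparison concrete, here is a minimal sketch of the three padding variants in R. It assumes (as in the M&M emulation code discussed below) that proxy is a year-by-series matrix with rows named by year and Gaspe as column 53; the values 0.723 and 1.351 are the 1404 value and the 1404-1450 maximum quoted above.

# Sketch only: "proxy" is assumed to have year rownames and Gaspe in column 53
gaspe <- 53
pad.years <- as.character(1400:1403)

proxy.mann <- proxy                          # Mann's padding: repeat the 1404 value
proxy.mann[pad.years, gaspe] <- proxy["1404", gaspe]                          # = 0.723

proxy.max <- proxy                           # alternative: pad with the 1404-1450 maximum
proxy.max[pad.years, gaspe] <- max(proxy[as.character(1404:1450), gaspe])     # = 1.351

proxy.na <- proxy                            # the M&M variant: mark the four years missing
proxy.na[pad.years, gaspe] <- NA

With either numeric pad, only the 1400-1403 values of the series differ; the NA variant is what triggers the roster rule described in the next section.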



What the M&M code does





The recon work is done in a function NHbeta() in the file ee2005functions.r. They loop over the 11 periods, of which the last is 1400-1450. For each period they set up a "roster" of proxies that will be used. The statement defining it is:

roster<-!is.na(proxy[period[k+1],])



Here period[k+1] points to the first data point in the period, i.e. year 1400. So by setting the 1400 value to NA, the program removes Gaspe entirely for that period. They might as well have set all the values from 1400-1449 to NA. In fact, I checked that doing that has the same effect.
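Here is a small sketch of why that matters, reusing the names from the M&M statement above (it assumes, as stated, that period[k+1] indexes the row for 1400 and that Gaspe is column 53):

roster <- !is.na(proxy[period[k+1], ])       # TRUE for series with data in year 1400

proxy.na <- proxy
proxy.na["1400", 53] <- NA                   # only the 1400 value is touched...
(!is.na(proxy.na[period[k+1], ]))[53]        # ...but this is FALSE: Gaspe drops off the
                                             # roster for the whole 1400-1450 step

So one NA at 1400 excludes the series from the entire step, which is why the response in Fig 1 is so much larger than the padding itself could produce.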



Conclusion



Padding as Mann did had a very small, local effect. The M&M effort to replace the padding with missing values triggered a large response. But that response was an artefact.










Saturday, April 9, 2011

TempLS Ver 2.1 release

This is the first release for a while. The structure is still much like ver 2, and the math basis is the same. The main differences are:
  • The code has been internally reorganized. Most variables have been gathered into R-style lists. And more code has been sequestered into functions, to clarify the underlying structure of the program.
  • The new work on area-based weighting has been incorporated.
  • Other work has been done with weighting. In particular, subsets of stations can be weighted differently relative to each other.
  • Hooks have been added so that the adaptations for the Antarctic studies can be included.


The preprocessor, which merges and converts data files available on the web to a standard TempLS format, is unchanged. Data and inventory files from Ver 2 will still work.

A reminder of how TempLS is used: in an R environment, a jobname is defined, and a .r file with that name is created which defines the variables relevant to the problem at hand. It may define several consecutive runs. The variables are variants of supplied defaults.

Then source("TempLSv2.1.r") runs the analysis and writes data and jpg files to a directory named after the jobname.
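As a minimal sketch of that workflow (the job name and the exact way it is declared are illustrative assumptions; only the file names TempLSv2.1.r and TempLSuserv2.1.r and the list u are from TempLS itself):

# myjob.r - a hypothetical user file; it overrides defaults from TempLSuserv2.1.r,
# e.g.  u$name <- "trial1"   (a string attached to output and file names)

jobname <- "myjob"        # assumed mechanism: the jobname names the .r file to use
source("TempLSv2.1.r")    # solves, then writes data and jpg files to directory "myjob"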

The files you need are in a zip file, TempLSv2.1.zip. This 16Mb file contains preprocessed data sets, so you won't need to use the preprocessor. I've had to separate the GSOD data; GSOD.zip is another 7 Mb.

If you just want to look at the code and information (everything readable), there is a 170 Kb TempLS2.1_small.zip. A ReadMe.txt file is included. There is also a partly updated manual from V2.0, most of which is still applicable. More details below, and past files on TempLS can be found on the Moyhu index. I'll describe below the things that are new.



TempLS internal lists

Being R code, you can of course modify it as you wish. But it's generally better to work through the jobname.r file. To understand the code, you'll want to know about these global lists (a brief sketch of setting some of the u variables follows the list):
  • u - the one most encountered. It is defined in TempLSuserv2.1.r, and includes the variables that I think you're most likely to want to change. Some important ones:
    • u$inputstem - for selecting data; e.g. if ="GHCN", the files GHCN.data and GHCN.inv are used
    • u$name - a string that gets attached to output and filenames - can be varied between runs
    • u$num_runs - your file can have several expressions {} and a run could be done with each - num_runs says how many are used
    • u$station_mask - select from the inventory which stations to include in the analysis. A logical expression based on the names of columns
    • u$model - what kind of analysis: 1 for time series, 2 for spatial trend, 3 for spatial period (currently a year)
    • u$yr0, u$yr1 - analysis endpoints
    • u$postp() - an output function where you can add requests
  • v - list from the data read - loaddata()
    • v$d - the full data set from the .data file, v$nd = number rows
    • v$i - the inventory table from the .inv file, v$ni = number rows
  • s - things created following station selection by u$station_mask
    • s$o - logical vec (length v$ni) of the station selection
    • s$ns, s$nm, s$ny - numbers of stations selected, months and years
    • s$i - index vec (length s$ns) of the station selection
    • s$ib - reverse index vec (length v$ni) of the station selection
    • s$dm - various dimensions that can be assigned to the s$ns*s$nm data array
  • out - list for accumulating runs for compiling run report
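For instance, the station-selection and model settings in a job file might look like this (the inventory column name used in the mask is illustrative, and the exact form the mask takes follows the defaults in TempLSuserv2.1.r):

u$inputstem    <- "GHCN"             # read GHCN.data and GHCN.inv
u$station_mask <- "startyr < 1900"   # illustrative test on an inventory column
u$model        <- 2                  # spatial trend
u$yr0 <- 1900; u$yr1 <- 2010         # analysis endpoints

# After loaddata(), v$d and v$i hold the data and inventory; once the mask is
# applied, s$o marks the chosen stations and s$ns, s$nm, s$ny give the counts.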

    TempLS V2.1 output

    You'll generally get a map of the region with stations marked, and a plot of the number of stations reporting in each year. The map is a projection of the sphere, and you can specify the lat/lon of its centre point (u$mapfocus). If the region is global (u$isglobe=T) you get a Mercator-like map as well.

    Model 1 will produce a time series of temperature anomaly, with a smooth and trends. You can ask for several different periods to be covered.

    Models 2 and 3 will produce shaded plots. If you have asked for a direct solver (oCG=F) you'll get a graph of eigenvalues. If the verif() function is called, you'll get R2 for the fit, and you can request plots of residuals at a list of stations (u$residual=c(,,,)).

    If you are requesting area weighting, you can ask for a map of the weights and mesh for a specified month, numbered from the start (u$showmeshmonth=..).
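    The output-related settings mentioned above might be set like this in a job file (the values shown are illustrative; the variable names are those documented in this section):

    u$mapfocus      <- c(0, 135)          # lat/lon of the map centre point
    u$isglobe       <- TRUE               # global region: also draw a Mercator-like map
    u$residual      <- c(12, 345, 1066)   # stations for residual plots (illustrative indices)
    u$showmeshmonth <- 600                # show weights/mesh for this month, counted from the start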

    Weighting (new)

    Before V2.1, the only weighting was by estimated density based on number of stations in lat/lon cells. Now 4 kinds of weighting are allowed:
    1. Uniform weighting
    2. Cell weighting (as before)
    3. Mesh-based (ie by area)
    4. Voronoi (also area)
    In V2.1 Voronoi is not fully implemented, and mesh-based is preferred. You specify the type with u$wt_type = 1, 2 or 3.

    You can have two classes of stations that you weight separately. This was created for AVHRR vs land, but would also be useful for SST vs land. You need a mask u$wt_mask to select one subgroup. Its value is a logical expression based on columns of the inventory. Then you need u$wt_type to be a vector, eg c(3,1). That gives the wt_type of the subgroup you've chosen, then of the rest. They cannot both be >2.

    Then you need to express how one class is weighted relative to the other. This is the weight factor u$wtf, which ranges from 0 to 1. The weights are scaled so that (sum of weights of class 1)/(total sum of weights) = u$wtf.
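    In other words (a sketch of the arithmetic only; w1 and w2 are assumed names for the raw weights of the two classes, not TempLS variables):

    s1 <- sum(w1); s2 <- sum(w2)              # raw weight totals for the two classes
    a  <- u$wtf * s2 / ((1 - u$wtf) * s1)     # scale factor applied to class 1
    w1 <- a * w1                              # now sum(w1)/(sum(w1) + sum(w2)) equals u$wtf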

    You can also control smoothing of the weights. The parameter is the characteristic length of smoothing, in km, say u$wt_len=100. The default is 0, which means no smoothing. Smoothing gives some benefit, but takes a little time.
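    Put together, a weighting setup might look like this in the job file (the mask expression is illustrative; the variable names are those documented above):

    u$wt_type <- c(3, 1)               # mesh (area) weights for the masked set, uniform for the rest
    u$wt_mask <- "class == 'AVHRR'"    # illustrative logical test on an inventory column
    u$wtf     <- 0.5                   # the masked set gets half of the total weight
    u$wt_len  <- 100                   # smooth weights with a 100 km characteristic length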

    Zip file contents

    • ReadMe.txt - summary of the contents
    • DataInv.zip - contains the preprocessed data files (eg GHCN.data) and inventory files. With this you won't have to use the preprocessor until you want an update. This file is 16Mb; because complete data exceeds the 20Mb limit here, I've had to separate the GSOD data and inv, which is in GSOD.zip.
    • TempLSv2.1.r - the master routine
    • TempLSfnsv2.1.r - the collection of functions for the master routine
    • TempLSuserv2.1.r - defaults for the user file, defining list u
    • TempLSprepv2.1.r - preprocessor file
    • TempLSprepinputv2.1.r - some names etc for the preprocessor file
    • userfiles.zip - a zip file of user routines for example problems. Most were used in compiling recent Moyhu posts.
    • TempLSv2a.pdf - a user's guide for v2 which has been updated with new variable names etc. for v2.1. It does not yet have a description of the new features.
    • testall.r - a master file for testing - uses all the supplied example files, creates their directories of results, and makes a file testall.htm in which you can see all the code and output. Handy to check if you have everything. Good to run with R CMD BATCH.
    • testall.mht - an output from testall.r that I made recently, stored as IE archive.
    • miscR.zip - files like ryan.r (some of Ryan O's graphics routines, called by the user file for Antarctic.r). Also update.r, used by testall.r.







Monday, April 4, 2011

Blogger's spam filter

In about September last year, Blogger, the Google outfit that hosts this site, introduced a new spam filter. Until that time I had been very happy with them, and I still am in most respects. But the spam filter is making dialogue on the site impossible. It is supposed to learn from the comments I liberate, but it seems to be just getting worse. I have not yet been blest with a single item of real spam, but about one in three genuine comments goes to the bin for no obvious reason at all.

The problem is compounded because, since I'm on Australian time, they can sit in the spam bin for hours before I can fix it.

I've done what I can to raise the problem with Blogger. They don't offer direct communication, but delegate feedback to a forum. The response is that, no, you don't get a choice here; your sacrifice is for the general good. We seem to be conscripted into an experiment whereby Google uses our rescue activity to improve its spam database.

So I'm looking at Wordpress. Blogger has just announced a revamp, but from their aggressive attitude about the filter, I don't see much hope. I'll wait a few days longer, and listen to advice on whether Wordpress is likely to be better, but it's looking inevitable.