Monday, June 29, 2015

NCEP/NCAR monthly average maps.

A short time ago, while writing about the discrepancy between the NCEP/NCAR index and TempLS mesh, I made a temporary modification to the daily WebGL plot to show monthly averages. I've now formalised this. There is a black bar at the top of the date selector, and if you mouseover that, it will offer monthly averages. They are all there now, including the current month (to date). I expect this will be kept up to date.

Friday, June 26, 2015

TempLS V3 - part 2

In my previous post, I began a three part description of the new version of TempLS. That part was devoted to the collection of land and SST data into a temperature data array.

The next part is about forming the other basic array for the calculation - the weighting array, with one entry for each temperature measurement. I've described in various posts (best summarized here) the principles involved. A spatial average (for a period of time, usually a month) requires numerical spatial integration. The surface is divided into areas where the temperature anomaly can be estimated from data. If the subdivisions do not cover the whole area, the rest will be assumed (by the arithmetic) to have the average anomaly of the set that does have data. The integral is formed by adding the products of the estimates and their areas. That makes a weighted sum of the data.

Traditionally, this was done with a lat/lon grid, often 5°×5°. An average is formed for each cell with data, in each month, and the averages are summed weighted by the cell area, which is proportional to the cosine of latitude. That is not usually presented as an overall weighted average, but it is. HADCRUT and NOAA use that approach. TempLS originally did that, so that the weight for each station was the cell area divided by the number of stations reporting that month in that cell. Effectively, the stations are allocated a share of the area in the cell.
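To make that equivalence concrete, here is a minimal sketch with made-up station data (none of these numbers come from the program), using latitude bands rather than a full lat/lon grid for brevity:

set.seed(1)
lat  = runif(100, -85, 85)              # hypothetical station latitudes
anom = rnorm(100)                       # hypothetical anomalies
cell = floor(lat/5)                     # 5-degree band index per station
area = cos((cell*5 + 2.5)*pi/180)       # band area factor, per station
n    = table(cell)[as.character(cell)]  # stations in each band, per station
w    = area/as.numeric(n)               # per-station weight
cm   = tapply(anom, cell, mean)         # cell means
ca   = tapply(area, cell, mean)         # cell area factors
c(sum(ca*cm)/sum(ca), sum(w*anom)/sum(w))  # the same number, twice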

Of course, this leaves some cells with no data. Cowtan and Way explored methods of interpolating these areas (discussed here). It makes a difference, because, as said, if nothing is done, those areas will be assigned the global average behaviour, from which they may in fact deviate.

Recently, I have been using in preference an irregular triangular mesh (as shown right), of which the stations are the nodes. The weighting is made by assigning to each station a third of the area of each triangle that it is part of (see here). This has complete area coverage. I have continued to publish results from both the grid and mesh weighting.

The code below shows how to form both kinds of weights. Note that the weighting does not use the actual measurements, only their existence. Stations with no result in a month get zero weight. There is no need for the weights to add to 1, because any weighted sum is always divided by a normalising sum.
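In code, that last point is just the observation that the normalising sum cancels any scaling of the weights; a minimal sketch with made-up numbers:

x = c(1.2, NA, 0.7, 0.9)   # hypothetical anomalies; NA gets zero weight
w = c(2, 0, 1, 1)
gmean = function(x, w){ o = !is.na(x); sum(w[o]*x[o])/sum(w[o]) }
gmean(x, w)        # 1
gmean(x, 10*w)     # 1 again - scaling the weights cancels out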


The stage begins by loading saved data from the previous part. At this stage the land and SST data are concatenated, and a combined inventory made, from which the lat/lon of each station will be needed. R[] are filenames of saved data, consistent with the job control.
if(do[3]){ #stage 3 -  wts from supplied x
 load(R[2])  # ERSST x,iv
 x0=x;i0=cbind(iv,NA,NA,NA,NA); rad=pi/180;
 load(R[1])  # GHCN x,iv
 colnames(i0)=colnames(iv)
 x=rbind(x,x0);iv=rbind(iv,i0)  # combine

Then the choice is made (from the job, third char) between grid and mesh. We'll start with grid:
 if(kind[3]=="g"){  #Grid
  dl=180/36 # 36*72 grid for wts
  jv=as.matrix(iv[,1:2])%/%dl # latlon integer number for each cell
  y=cos((jv[,1]+.5)*dl*rad)  #trig weighting for area each cell
  kv=drop(jv%*%c(72,1)+1297)  # Give each cell a unique number
  w=array(0,dim(x))
  for(i in 1:ncol(x)){  # over months
   j=which(!is.na(x[,i])) #  stations with data
   s=1/table(kv[j]) # reciprocal of station count in each cell
   u=as.numeric(names(s)) #cell numbers
   k1=match(kv[j],u) # which cell numbers belong to each stat
   w[j,i]=y[j]*s[k1]
  }
 }else{ # Mesh

It's just a matter of assigning a unique number to each cell, and a vector with an entry for each of those numbers, and going through each month/year, counting. In R it is important to use fairly coarse-grained operations, because each command has an interpretive overhead.
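Here is the same idiom on a toy example (hypothetical cell numbers, not program data), showing how table() and match() avoid a station-by-station loop:

kv = c(7, 3, 7, 7, 3, 9)   # hypothetical cell number for each of 6 stations
s  = 1/table(kv)           # reciprocal of the count in each occupied cell
u  = as.numeric(names(s))  # the occupied cell numbers: 3 7 9
k1 = match(kv, u)          # each station's position in u: 2 1 2 2 1 3
s[k1]                      # per-station share: 1/3 1/2 1/3 1/3 1/2 1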

Then continuing with the "else" part, for mesh weighting
  require("geometry")
  require("alphahull")
  a=pc("w_",R[3])
  if(file.exists(a)){load(a)}else{w=array(0,dim(x))}  # old mesh
  a=cos(cbind(iv[,1:2],90-iv[,1:2])*rad)
  z=cbind(a[,1]*a[,c(2,4)],a[,3]) # to 3D coords
  if(!exists("w_yr"))w_yr=yr
  w=w[,mth(w_yr)%in%mth(yr)]
  if(sum(dim(w)==dim(x))<2){w=array(0,dim(x));}
  for(ii in 1:ncol(x)){
   o=!is.na(x[,ii]) # stations with data
   no=sum(o); if(no<9)next;  # skip months at the end
   if(sum(xor(o,(w[,ii]>0)))==0)next; # w slots match x, so use old mesh
   y=z[o,]
   m=convhulln(y) # convex hull (tri mesh)
   A=0
   B=cbind(y[m[,1],],y[m[,2],],y[m[,3],])
   for(j in 0:2){  # determinants for areas
    k=(c(0:2,0,2,1)+j)%%3+c(1,4,7)
    A=A+B[,k[1]]*B[,k[2]]*B[,k[3]]-B[,k[4]]*B[,k[5]]*B[,k[6]]
   }
   A=abs(A)  # triangles from convhulln have varied orientation
   n=cbind(c(m),rep(A,3)) # long list with areas matched to nodes
   k=which(o)
   w[!o,ii]=0
   w[o,ii]=1e-9  # to mark stations that missed wt but have x entries
   for(jj in 1:99){  # Adding areas for nodes
    i=match(1:sum(o),n[,1]) # find first matches in mesh
    q=which(!is.na(i))
    w[k[q],ii]=w[k[q],ii]+n[i[q],2]  # add in area
    n=rbind(n[-i[q],]) # take out matches
    if(length(n)<1)break;
   }
   print(c(ii,nrow(m),sum(w[,ii])/pi/24))  # last figure ~1 when the mesh covers the sphere
  } # end mesh months loop

This is more complicated. The mesh is actually the convex hull of the points on the sphere, which convhulln() from the R geometry package calculates. This is quite time consuming (about 20 mins on my PC), so I save the results in a "w_.." file. The code reads the most recent such file, if any, and for months where the pattern of reporting stations matches, it uses the stored weights, which is much faster. Also, the lat/lon of the stations are converted to points on the unit sphere.
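The conversion to the unit sphere is compact in the code above; spelled out, with example points, it is just the usual spherical-to-Cartesian formula (the code exploits cos(90-x) = sin(x)):

lat = c(-37.8, 51.5); lon = c(145.0, -0.1)   # example points (degrees)
rad = pi/180
z = cbind(cos(lat*rad)*cos(lon*rad),   # x
          cos(lat*rad)*sin(lon*rad),   # y
          sin(lat*rad))                # z
rowSums(z^2)   # 1 1 - the points lie on the unit sphere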

So we enter the ii loop over months with x and a w array waiting to be filled. If the stored w doesn't match, the convex hull is formed, and then the areas A are found. Note that convhulln() does not consistently orient triangles, so the abs() is needed. The "areas" are in fact 3x3 determinants (proportional to the true areas for these small triangles), again calculated in a way that minimises looping.
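For a single triangle the j-loop is just a 3x3 determinant; a quick check with hypothetical vertices:

y1 = c(1,0,0); y2 = c(0,1,0); y3 = c(0,0,1)   # hypothetical unit vectors
abs(det(rbind(y1, y2, y3)))   # 1; this is what the j-loop computes per row
# The determinant is 6x the volume of the tetrahedron formed with the
# origin, which for small spherical triangles is proportional to the
# triangle's area - the constant factor cancels on normalisation.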

Then it's just a matter of assigning area sums to nodes. This isn't so easy without fine looping, since each column of the m array of triangles has multiple occurrences of nodes. So in the jj loop, I identify a unique set of occurrences, add those areas to the node score, remove them, and repeat until all areas have been allocated. The last part just saves the newly calculated weights for future reference.
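For comparison, the same accumulation can be written in one call with rowsum(), which groups and sums in compiled code. This is a sketch of an equivalent, not the program's code (it omits the 1e-9 marking of unweighted stations; A, m, k, ii and w are as in the loop above):

acc = rowsum(rep(A, 3), c(m))         # sum each triangle's area into its 3 nodes
idx = as.numeric(rownames(acc))       # the node indices, sorted
w[k[idx], ii] = w[k[idx], ii] + acc[, 1]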

The last code in this step just saves the result for the calculation step 4.
 save(x,w,iv,file=R[3])  
}#  end step 3

Here is the complete code for this stage:
if(do[3]){ #stage 3 -  wts from supplied x
 load(R[2])  # ERSST x,iv
 x0=x;i0=cbind(iv,NA,NA,NA,NA); rad=pi/180;
 load(R[1])  # GHCN x,iv
 colnames(i0)=colnames(iv)
 x=rbind(x,x0);iv=rbind(iv,i0)  # combine
 if(kind[3]=="g"){  #Grid
  dl=180/36 # 36*72 grid for wts
  jv=as.matrix(iv[,1:2])%/%dl # latlon integer number for each cell
  y=cos((jv[,1]+.5)*dl*rad)  #trig weighting for area each cell
  kv=drop(jv%*%c(72,1)+1297)  # Give each cell a unique number
  w=array(0,dim(x))
  for(i in 1:ncol(x)){  # over months
   j=which(!is.na(x[,i])) #  stations with data
   s=1/table(kv[j]) # reciprocal of station count in each cell
   u=as.numeric(names(s)) #cell numbers
   k1=match(kv[j],u) # which cell numbers belong to each stat
   w[j,i]=y[j]*s[k1]
  }
 }else{ # Mesh
  require("geometry")
  require("alphahull")
  a=pc("w_",R[3])
  if(file.exists(a)){load(a)}else{w=array(0,dim(x))}  # old mesh
  a=cos(cbind(iv[,1:2],90-iv[,1:2])*rad)
  z=cbind(a[,1]*a[,c(2,4)],a[,3]) # to 3D coords
  if(!exists("w_yr"))w_yr=yr
  w=w[,mth(w_yr)%in%mth(yr)]
  if(sum(dim(w)==dim(x))<2){w=array(0,dim(x)); }
  for(ii in 1:ncol(x)){
   o=!is.na(x[,ii]) # stats with data
   no=sum(o); if(no<9)next;  # skip months at the end
   if(sum(xor(o,(w[,ii]>0)))==0)next; # w slots match x, so use old mesh
   y=z[o,]
   m=convhulln(y) # convex hull (tri mesh)
   A=0
   B=cbind(y[m[,1],],y[m[,2],],y[m[,3],])
   for(j in 0:2){  # determinants for areas
    k=(c(0:2,0,2,1)+j)%%3+c(1,4,7)
    A=A+B[,k[1]]*B[,k[2]]*B[,k[3]]-B[,k[4]]*B[,k[5]]*B[,k[6]]
   }
   A=abs(A)  # triangles from convhulln have varied orientation
   n=cbind(c(m),rep(A,3)) # long list with areas matched to nodes
   k=which(o)
   w[!o,ii]=0
   w[o,ii]=1e-9  # to mark stations that missed wt but have x entries
   for(jj in 1:99){  # Adding areas for nodes
    i=match(1:sum(o),n[,1]) # find first matches in mesh
    q=which(!is.na(i))
    w[k[q],ii]=w[k[q],ii]+n[i[q],2]  # add in area
    n=rbind(n[-i[q],]) # take out matches
    if(length(n)<1)break;
   }
   print(c(ii,nrow(m),sum(w[,ii])/pi/24))  # last figure ~1 when the mesh covers the sphere
  } # end mesh months loop
  f=paste("w_",R[3],sep="") # for saving the wts if wanted
  w_yr=yr # save yrs to match when recalled
  if(!file.exists(f))save(w_yr,w,file=f)
 }# end mesh/grid
 save(x,w,iv,file=R[3])  
}#  end step 3

Wednesday, June 24, 2015

TempLS: new V3

TempLS is my R program that I use for calculating global average temperatures. A recent review of the project is here. A few weeks ago I posted an iterative calculation sequence which is the basis of the program. I noted that the preliminaries for this were simple and mechanical. It requires the data just organised in a large array, and a set of weights, derived from station locations.

I've never actually released version 2.2 of TempLS, which is what I have been using. I did a documented release of v2.1 here. Meanwhile, the program continued to accumulate options, and got entwined with my automatic data acquisition system. So I thought it would be useful to create a simplified version with the basic functionality. I did post a simplified version once before, but that was land only, and with more primitive weighting.

So I'm planning to post a description in three parts. The first will be data gathering, the second weighting, and the third the calculation, which has already been covered. But there may be results to discuss. This is the first part, about data gathering. There is no fancy maths, and it is pretty boring, but useful for those who may use the program.

The program is in four stages - GHCN, SST, weights and calc. The first two make up the data organising part, and are described here. Each stage stores its output, which the next reads, so each can be run independently. I have a simple job control mechanism. You set a variable with four characters; the default is
job="UEGO"

There is one character for each stage; the current provisions, by position, are:
U,A   Unadjusted or Adjusted GHCN
D,E   ERSST Ver 3b or V4
G,M   Grid or mesh-based weighting
O,P   Output, without or with graphics

Each character can be lower case, which means skip that stage, but use the appropriate stored result. The point of this brevity is that the letters are used to name stored intermediate files, so the process can be re-started at any intermediate stage.
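To make the naming concrete (this follows from the code below, not a separate convention):

# job = "UEGO"  ->  kind = "uego", and the stage files are
#   R[1] = "u.sav"    (unadjusted GHCN, stage 1)
#   R[2] = "e.sav"    (ERSST v4, stage 2)
#   R[3] = "ueg.sav"  (weights, stage 3)
#   R[4] = "uego.sav" (calculation, stage 4)
# job = "ueGO" skips stages 1 and 2, reusing u.sav and e.sav.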

*** Note that ERSST has a new version, V5, which I now use. In this post, where I refer to V3b and V4, take it to be V4 and V5, including in "job" above.

The program should be run in a dedicated directory - it will create subdirectories as needed. It starts with a section that defines some convenience functions, interprets the job code to make the names of the intermediate files (I use .sav for binary saved files), and defines the year range yr. This is hard-coded, but you can alter it.

# Function TempLS3.r. Written in four stages, it collects data, makes weights 
#and solves for global temperature anomaly. 
#Written by Nick Stokes, June 24 2015. 
#See  https://moyhu.blogspot.com.au/2015/06/templs-and-new-ersst-v4-noaa-is-now.html
#Functions
isin=function(v,lim){v>=lim[1] & v<=lim[2]}
mth=function(yr){round(seq(yr[1]+0.04,yr[2]+1,by=1/12),3)}
sp=function(s)(unlist(strsplit(s,"")))
pc=function(...,s=""){paste(...,sep=s,collapse=s)}
# Start program
if(!exists("job"))job="UEGO"
i=sp(job)
kind=tolower(i)
do=i!=kind
yr=c(1900,2015); yrs=yr[1]:yr[2]; # you can change these
ny=diff(yr)+1
j=c(1,1,2,2,1,3,1,4); R=1:4; #R will be the names of storage files
for(i in 1:4){ # make .sav file names
 R[i]=pc(c(kind[j[i*2-1]:j[i*2]],".sav"))
 if(!file.exists(R[i]))do[i]=TRUE; # if filename will be needed, calc
}
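A few examples of what the convenience functions return may help in reading what follows:

sp("UEGO")          # "U" "E" "G" "O" - split a string into characters
pc("ueg", ".sav")   # "ueg.sav" - paste with empty separator
mth(c(1900, 2015))  # 1392 values, one per month: 1900.04, 1900.123, ...
isin(1899:1901, yr) # FALSE TRUE TRUE - is each value within yr?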


Stage 1 begins with some function definitions, which I'll describe
if(do[1]){  # Start stage 1 - read land (GHCN)
 getghcn=function(){ # download and untar GHCN
  fi="ghcnm.tavg.latest.qcu.tar.gz"
  fi=sub("u",ki,fi)  # switch to adjusted
  download.file(pc("https://www1.ncdc.noaa.gov/pub/data/ghcn/v3/",fi),fi);
  gunzip(fi,"ghcnv3.tar",overwrite=T);
  untar("ghcnv3.tar",exdir=".");
  di=dir(".",recursive=T)
  h=grep(pc("qc",ki),di,val=T) # find the qcu/qca files in subdirs
  h=grep("v3",h,val=T) # restrict to the v3 files
  file.rename(h,Rr)
  unlink("ghcnm.*")
 }


It uses R facilities to download, unzip and untar. Then it finds the resulting files, data and inventory, which have a date-dependent name, and copies them into the current directory as ghcnu.dat etc. Then it removes the leftover ghcnm files (else they accumulate). Download takes me about 30 secs. If you are happy with the data from before, you can comment out the call to this.

The next routine reads the data file into a big matrix with 14 rows. These are the station number, year, and then 12 monthly averages. It also reads the flags, and removes all data with a dubious quality flag. It returns the data matrix, converted to °C.
readqc=function(f){  # read the data file
 b=readLines(f)
 d=matrix(NA,14,length(b));
 d[1,]=as.numeric(substr(b,1,11));
 d[2,]=as.numeric(substr(b,12,15)); # year
 iu=0:13*8+4
 for(i in 3:14){
  di=as.numeric(substr(b,iu[i],iu[i]+4)); # monthly aves
  ds=substr(b,iu[i]+6,iu[i]+6); # quality flags
  di[ds!=" " | di== -9999]=NA  # remove flagged data
  d[i,]=di/100  # convert to C
 }
d
}

The rest is fairly simple:
 require("R.utils") # for gunzip
 ki=kind[1]  # u or a
 Rr=paste("ghcn",ki,c(".dat",".inv"),sep="")
 getghcn()
 d=readqc(Rr[1])
 b=readLines(Rr[2])
 ns=length(b)
 wd=c(1,11,12,20,21,30,1,3,31,37,88,88,107,107)
 iv=data.frame(matrix(NA,ns,6))
 for(i in 1:7){
  k=substr(b,wd[i*2-1],wd[i*2]); 
  if(i<6 & i>1)k=as.numeric(k) # convert numbers
  if(i==1)K=k else iv[,i-1]=k
 }
 colnames(iv)=c("lat","lon","country","alt","urban","airprt")
 ib=match(d[1,],K)-1 # station numbers -> place in inventory order
 ic=isin(d[2,],yr) # restrict to years yr
 d=d[,ic]; ib=ib[ic]
 x=rep(NA,ns*ny*12) # blank data vector, a slot for each station/month
 iw=rep(ib*ny*12+(d[2,]-yr[1])*12,each=12)+1:12 # slot for each reading
 x[iw]=d[3:14,] # fill x
 dim(x)=c(12*ny,ns); x=t(x) # rearrange to ns stations x ny*12 months
 save(x,iv,file=R[1]) # end stage 1
}

The inventory is read, and the station numbers in d are matched to the corresponding rows of the inventory (integers 1:7280). iv is a reduced inventory dataframe; lat and lon are the first two columns, but other data is there for post-processing.

Then it is just a matter of making a blank x matrix and entering the data. The matrix emerges as ns × nm, where ns is the number of stations and nm the number of months. Later the array will be redimensioned (ns,12,ny). Finally x and iv are saved to a file, which will be called u.sav or a.sav.
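A toy version of that reshaping (hypothetical sizes, not the real ones), showing that dim() rearranges without copying values:

ns = 3; ny = 2
x = matrix(rnorm(ns*12*ny), ns, 12*ny)  # stations x months
dim(x) = c(ns, 12, ny)                  # same data: station x month x year
x[1, 1, 2]                              # station 1, January of year 2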

So now we come to part 2 - SST. Here is how it starts:
if(do[2]){ # Stage 2 - read SST
# ERSST has convention lon 0:358:2, lat -88:88:2
 ki=kind[2]; 
 s=c("ersst4","ersst.%s.%s.asc","ftp://ftp.ncdc.noaa.gov/pub/data/cmb/ersst/v4/ascii")
 iy=1;  if(ki=="d"){s=sub("4","3b",s);s[1]="../../data/ersst";iy=10;yrs=seq(yr[1],yr[2],10);}
 if(!file.exists(s[1]))dir.create(s[1])
 x=NULL; n0=180*89;

ki is d,e for 3b,4; this is just making filenames for the cases. ERSST comes as a series of 89 x 180 matrices (2° resolution), 10 years per file for 3b, and 1 year for v4. This is good for downloading, as you only have to fetch a small file for recent data. That is a grid of 16020 points (some on land), which is more than we need for averaging, and SST does not change so rapidly, so I select just every second lat and lon. In v2.2 I included the missing data in a local average, but I'm now not sure this is a good idea; it's messy at boundaries, and the smoothness of SST makes it redundant. So here is the loop over input files:
 for(ii in yrs){
  # read and arrange the data
  f=sprintf(s[2],ii,ii+9)
  if(ki=="e")f=sprintf(s[2],"v4",ii)
  g=file.path(s[1],f)
  if(!file.exists(g))download.file(file.path(s[3],f),g);
  v=matrix(scan(g),nrow=n0);
  v=v[,(mth(c(ii,ii+iy-1)) %in% mth(yr))[1:ncol(v)]]/100  # Select years and convert to C
  n=length(v);
  o=v== -99.99 | v< -1  # eliminate missing (-99.99) and ice (< -1 C)
  v[o]=NA
  n=n/n0
  dim(v)=c(89,180,n)
  # reduce the grid
  i1=1:45*2-1;i2=1:90*2 # Take every second one
  v=v[i1,i2,]
  x=c(x,v)  # accumulate into x
  print(c(ii,length(x)/90/45))
 }
 

The files, when downloaded, are stored in a subdirectory, and won't be downloaded again while they are there. So to force a download, remove the file. I use cURL to download when there is a new version, but that requires installation. The first part of this code gathers each file's contents into an array v. The missing values (-9999 in the file, -99.99 after the division by 100) are converted to NA, and so is anything less than -1°C. ERSST enters frozen regions as -1.8°C, but this doesn't make much sense as a proxy for air temperature. I left a small margin, since those are probably months with part freezing (there are few readings below -1 that are not -1.8).

Now to wrap up. A lat/lon inventory is created. Locations with 12 months of data or fewer in total are removed, which includes the land places. Then finally the data array x and the inventory are stored.
 iv=cbind(rep(i1*2-90,90),rep((i2*2+178)%%360-180,each=45)) # Make lat/lon -88:88,2:358
 m=length(x)/90/45
 dim(x)=c(90*45,m)
 rs=rowSums(!is.na(x))
 o=rs>12  # to select only locations with more than 12 mths data in total
 iv=iv[o,] # make the lat/lon
 x=x[o,] #discard land and unmeasured sea
 str(x)
 ns=sum(o)  # number of locations retained
 rem=ncol(x)%%12
 if(rem>0)x=cbind(x,matrix(NA,ns,12-rem)) # pad to full year
 str(x)
 dim(x)=c(ns,12*((m-1)%/%12+1))
 save(x,iv,file=R[2])
} # End of stage 2


So I'll leave it there for now. The program has the potential to create a.sav, u.sav, d.sav and e.sav, each with a data array x and an iv. A simple task, but in lines it is more than half the program. Stage 3, next, will combine two of these and make a weight array.

Here is the code presented here in one piece. It is stand-alone.

# Function TempLS3.r. Written in four stages, it collects data, makes weights 
#and solves for global temperature anomaly. 
#Written by Nick Stokes, June 24 2015. 
#See  https://moyhu.blogspot.com.au/2015/06/templs-and-new-ersst-v4-noaa-is-now.html
isin=function(v,lim){v>=lim[1] & v<=lim[2]}
mth=function(yr){round(seq(yr[1]+0.04,yr[2]+1,by=1/12),3)}
sp=function(s)(unlist(strsplit(s,"")))
pc=function(...,s=""){paste(...,sep=s,collapse=s)}
if(!exists("job"))job="UEGO"
i=sp(job)
kind=tolower(i)
do=i!=kind
yr=c(1900,2015); yrs=yr[1]:yr[2]; # you can change these
ny=diff(yr)+1
j=c(1,1,2,2,1,3,1,4); R=1:4; #R will be the names of storage files
for(i in 1:4){ # make .sav file names
 R[i]=pc(c(kind[j[i*2-1]:j[i*2]],".sav"))
 if(!file.exists(R[i]))do[i]=TRUE; # if filename will be needed, calc
}
if(do[1]){  # Stage 1 - read land (GHCN)
 getghcn=function(){ # download and untar GHCN
  fi="ghcnm.tavg.latest.qcu.tar.gz"
  fi=sub("u",ki,fi)  # switch to adjusted
  download.file(pc("https://www1.ncdc.noaa.gov/pub/data/ghcn/v3/",fi),fi);
  gunzip(fi,"ghcnv3.tar",overwrite=T);
  untar("ghcnv3.tar",exdir=".");
  di=dir(".",recursive=T)
  h=grep(pc("qc",ki),di,val=T) # find the qcu/qca files in subdirs
  h=grep("v3",h,val=T) # restrict to the v3 files
  file.rename(h,Rr)
  unlink("ghcnm.*")
 }

readqc=function(f){  # read the data file
 b=readLines(f)
 d=matrix(NA,14,length(b));
 d[1,]=as.numeric(substr(b,1,11));
 d[2,]=as.numeric(substr(b,12,15)); # year
 iu=0:13*8+4
 for(i in 3:14){
  di=as.numeric(substr(b,iu[i],iu[i]+4)); # monthly aves
  ds=substr(b,iu[i]+6,iu[i]+6); # quality flags
  di[ds!=" " | di== -9999]=NA  # remove flagged data
  d[i,]=di/100  # convert to C
 }
d
}

 require("R.utils") # for gunzip
 ki=kind[1]
 Rr=paste("ghcn",ki,c(".dat",".inv"),sep="")
 getghcn()
 d=readqc(Rr[1])
 b=readLines(Rr[2])
 ns=length(b)
 wd=c(1,11,12,20,21,30,1,3,31,37,88,88,107,107)
 iv=data.frame(matrix(NA,ns,6))
 for(i in 1:7){
  k=substr(b,wd[i*2-1],wd[i*2]); 
  if(i<6 & i>1)k=as.numeric(k) # convert numbers
  if(i==1)K=k else iv[,i-1]=k
 }
 colnames(iv)=c("lat","lon","country","alt","urban","airprt")
 ib=match(d[1,],K)-1 # station numbers -> place in inventory order
 ic=isin(d[2,],yr) # restrict to years yr
 d=d[,ic]; ib=ib[ic]
 x=rep(NA,ns*ny*12) # blank data vector, a slot for each station/month
 iw=rep(ib*ny*12+(d[2,]-yr[1])*12,each=12)+1:12 # slot for each reading
 x[iw]=d[3:14,] # fill x
 dim(x)=c(12*ny,ns); x=t(x) # rearrange to ns stations x ny*12 months
 save(x,iv,file=R[1]) # end stage 1
}

if(do[2]){ # Stage 2 - read SST
# ERSST has convention lon 0:358:2, lat -88:88:2
 ki=kind[2];
 s=c("ersst4","ersst.%s.%s.asc","ftp://ftp.ncdc.noaa.gov/pub/data/cmb/ersst/v4/ascii")
 iy=1;  if(ki=="d"){s=sub("4","3b",s);s[1]="../../data/ersst";iy=10;yrs=seq(yr[1],yr[2],10);}
 if(!file.exists(s[1]))dir.create(s[1])
 x=NULL; n0=180*89;
 for(ii in yrs){
  # read and arrange the data
  f=sprintf(s[2],ii,ii+9)
  if(ki=="e")f=sprintf(s[2],"v4",ii)
  g=file.path(s[1],f)
  if(!file.exists(g))download.file(file.path(s[3],f),g);
  v=matrix(scan(g),nrow=n0);
  v=v[,(mth(c(ii,ii+iy-1)) %in% mth(yr))[1:ncol(v)]]/100  # Select years and convert to C
  n=length(v);
  o=v== -99.99 | v< -1  # eliminate missing (-99.99) and ice (< -1 C)
  v[o]=NA
  n=n/n0
  dim(v)=c(89,180,n)
  # reduce the grid
  i1=1:45*2-1;i2=1:90*2 # Take every second one
  v=v[i1,i2,]
  x=c(x,v)  # accumulate into x
  print(c(ii,length(x)/90/45))
 }
 iv=cbind(rep(i1*2-90,90),rep((i2*2+178)%%360-180,each=45)) # Make lat/lon -88:88,2:358
 m=length(x)/90/45
 dim(x)=c(90*45,m)
 rs=rowSums(!is.na(x))
 o=rs>12  # to select only locations with more than 12 mths data in total
 iv=iv[o,] # make the lat/lon
 x=x[o,] #discard land and unmeasured sea
 str(x)
 ns=sum(o)  # number of locations retained
 rem=ncol(x)%%12
 if(rem>0)x=cbind(x,matrix(NA,ns,12-rem)) # pad to full year
 str(x)
 dim(x)=c(ns,12*((m-1)%/%12+1))
 save(x,iv,file=R[2])
}


Sunday, June 21, 2015

TempLS and the new ERSST V4

NOAA is now using their new V4 ERSST, and so should I. It has come along at a time when I am preparing my own V3 of TempLS (more below), so I will make a double transition.

But first I should mention an error in my earlier processing of V3b, noted in comments here. ERSST3b came in decade files; that helped, because decades before the present did not change and did not require downloading. So I collected them into a structure, and with each update to the current decade (every month) merged that with the earlier decades.

However, I misaligned them. Most climate data uses the strict decade numbering (1961-90 etc), but ERSST uses 2000-2009. I got this wrong in the earlier data, creating a displacement of one year. Fixing this naturally gave better alignment with other datasets.

I'll describe the new TempLS version in detail in coming days. V 2.2 of TempLS had become unwieldy due to accumulation of options and special provisions for circumstances that arose along the way. I have made a new version with simplified controls, and using a simple iterative process described here. It should in principle give exactly the same answers; however I have also slightly modified the way in which I express ERSST as stations.

The new version is an R program of just over 200 lines. Of course, when simplifying there are some things that I miss, and one is the ability to make spherical harmonics plots (coming). So for a while the current reports will lag behind the results tabled above. That table, and the various graphs on the latest temperature page, will now be using ERSST v4.

Below the fold, I'll show just a few comparison results.

I've used running means to smooth out monthly noise, and focussed on the period since 1990. At some stage I might say more on the V4 changes, but that has been covered extensively elsewhere. I show the old results before fixing the missing year problem, then the V3b results after fixing, still using V2.2. Then there are results using the new ERSST v4 and the new TempLS V3. Ideally I would separate the effects of the new code and the new ERSST, but getting TempLS V2.2 working with ERSST v4 would take a while, and the combined effect makes very little change.

So here are the grid weighted results. The effect of the year error is clear, but there is little change going from ERSST 3b and TempLS2.2 to ERSST 4 and TempLS 3.



And here are the mesh weighted results, which are similar.




Friday, June 19, 2015

New NOAA - record May anomaly

NOAA has a new version, based on the new ERSST4. This has prompted me to make a change in the NOAA data that I use. They publish two sets; I have been using MLOST, which is the traditional one based on 1961-90 anomalies. But they also publish a version based on 1901-2000, and this is what they use for their climate summary announcements. Updates to MLOST have been lagging, so it is time to change. The different basis shouldn't affect the usual plots on the latest data page, or the trendviewer, but will affect the tabulated numbers.

So the May number was considerably higher than the previous May, and as I expected, higher than April, unlike GISS. In fact, it was quite in line with the NCEP/NCAR index.

I see there is a post at WUWT suggesting that the new record is somehow an artefact of the new version. As I commented here, the versions differ very little in year to year changes, by month. It is very unlikely that the old version would not have shown a similar rise, and in both versions, May 2015 was hotter than any previous May.


Thursday, June 18, 2015

NCEP/NCAR matched to GISS, NOAA

In my previous post, I noted that comparing data on different anomaly bases brought in discrepancies because of vagaries of monthly averages over the base period. These are actually significant enough to detract from the use of the NCEP/NCAR index (base 1994-2013) in predicting GISS (1951-80) and NOAA (1961-90) indices.

So I have added to that table, below the NCEP monthly averages, recent monthly values adjusted to the earlier base periods by adding the 1994-2013 month averages of GISS and NOAA anomalies. This expresses NCEP on those scales, although strictly the correction is specific to each index; adding HADCRUT averages would give slightly different 1961-90 values.

I have also shown the actual GISS/NOAA values in the right column. The current tables are here:

GISS adj
        NCEP    GISS
Jun     0.835   NA
May     0.773   0.71
Apr     0.702   0.71
Mar     0.855   0.84
Feb     0.818   0.82

NOAA adj
        NCEP    NOAA
Jun     0.618   NA
May     0.558   NA
Apr     0.461   0.469
Mar     0.573   0.564
Feb     0.545   0.575


Except for GISS May, the correspondence is very good.

Update. I calculated a whole lot more values, in a table below the fold. It looks as if the last few months aren't typical. The correspondence is still reasonable, but discrepancies like that observed in April-May are not unusual.

Anomaly base periods - monthly variation

It's easy to think of changing the base period for calculating temperature anomalies as having the effect of simply shifting the values up or down by a fixed amount. And for annual averages, this is true. But for monthly averages, it isn't, quite. Suppose, like HADCRUT, you express anomalies to a base 1961-90. That means that all Januarys are expressed relative to the January average for that time, etc. If you shift to a different period, those monthly bases will shift relative to each other. That is one reason for using a longish period (30 years). A long anomaly average will fluctuate less from month to month.
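The arithmetic of re-basing is simple enough to sketch (made-up anomaly matrix; this is not code from TempLS):

rebase = function(x, years, newbase){   # x: years x 12 anomaly matrix
  sel = years >= newbase[1] & years <= newbase[2]
  sweep(x, 2, colMeans(x[sel, ], na.rm = TRUE))  # subtract per-month means
}
# e.g. anomalies on a 1961-90 base, re-expressed on 1994-2013:
# x2 = rebase(x, 1880:2015, c(1994, 2013))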

This effect was noted by commenter Olof, in relation to the April-May discrepancy between NCEP/NCAR and GISS of about 0.1°C. My NCEP index uses a base period 1994-2013 (where data is good); GISS uses 1961-90. If you extract GISS averages over 1994-2013, you get this:
Base period   Jan    Feb    Mar    Apr    May    Jun    Jul    Aug    Sep    Oct    Nov    Dec
1994-2013    0.507  0.547  0.568  0.533  0.498  0.515  0.504  0.519  0.528  0.548  0.580  0.503

The GISS monthly means over 1994-2013 drop from 0.568°C in March to 0.533°C in April and 0.498°C in May. In shifting to the later base, you would subtract these values for each month, which would raise May relative to April, and even more relative to March. Because Mays were actually rather cool during this period, the NCEP index, on its 1994-2013 base, will always tend to give a relatively higher May anomaly.

This clearly explains only part of the discrepancy. But I thought it was worthwhile calculating all of the common indices on all base periods likely to be used, where possible (ie they have data). There is a table below the fold.

Here it is. You can generally recognise the native base period by a string of zeroes (for those periods I zeroed rounding discrepancies). BEST uses 1951-80, but for some reason doesn't give exactly zero. RSS uses 1979-98, which I haven't included.

HadCRUT
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980    -0.064  -0.067  -0.098  -0.061  -0.061  -0.035  -0.038  -0.032  -0.039  -0.047  -0.062  -0.057
1961-1990     0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000
1971-2000     0.123   0.144   0.120   0.118   0.104   0.109   0.110   0.110   0.095   0.084   0.076   0.108
1981-2010     0.325   0.340   0.322   0.299   0.272   0.281   0.286   0.297   0.270   0.285   0.267   0.270
1994-2013     0.407   0.448   0.439   0.439   0.396   0.423   0.436   0.443   0.414   0.440   0.440   0.383

GISSlo
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980     0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000
1961-1990     0.102   0.096   0.133   0.092   0.099   0.078   0.083   0.074   0.073   0.065   0.073   0.084
1971-2000     0.242   0.258   0.262   0.233   0.221   0.213   0.209   0.196   0.184   0.173   0.182   0.219
1981-2010     0.428   0.447   0.457   0.405   0.384   0.380   0.377   0.372   0.373   0.379   0.382   0.386
1994-2013     0.507   0.547   0.568   0.533   0.498   0.515   0.504   0.519   0.528   0.548   0.580   0.503

NOAAlo
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980    -0.230  -0.246  -0.243  -0.235  -0.211  -0.192  -0.184  -0.187  -0.181  -0.182  -0.181  -0.221
1961-1990    -0.133  -0.149  -0.119  -0.135  -0.119  -0.122  -0.117  -0.121  -0.116  -0.111  -0.099  -0.137
1971-2000     0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000
1981-2010     0.176   0.179   0.189   0.169   0.163   0.165   0.162   0.168   0.158   0.177   0.173   0.140
1994-2013     0.245   0.274   0.286   0.292   0.283   0.298   0.304   0.308   0.305   0.330   0.336   0.239

UAH6.beta
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980     NA      NA      NA      NA      NA      NA      NA      NA      NA      NA      NA      NA
1961-1990     NA      NA      NA      NA      NA      NA      NA      NA      NA      NA      NA      NA
1971-2000     NA      NA      NA      NA      NA      NA      NA      NA      NA      NA      NA      NA
1981-2010     0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000
1994-2013     0.095   0.102   0.080   0.103   0.086   0.102   0.118   0.120   0.164   0.126   0.095   0.078

RSS.MSU
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980     NA      NA      NA      NA      NA      NA      NA      NA      NA      NA      NA      NA
1961-1990     NA      NA      NA      NA      NA      NA      NA      NA      NA      NA      NA      NA
1971-2000     NA      NA      NA      NA      NA      NA      NA      NA      NA      NA      NA      NA
1981-2010     0.116   0.107   0.105   0.121   0.084   0.064   0.094   0.097   0.111   0.093   0.094   0.075
1994-2013     0.218   0.212   0.195   0.246   0.184   0.178   0.229   0.232   0.280   0.225   0.189   0.162

TempLSgrid
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980    -0.091  -0.093  -0.118  -0.082  -0.077  -0.060  -0.061  -0.056  -0.056  -0.059  -0.075  -0.080
1961-1990     0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000
1971-2000     0.130   0.150   0.121   0.131   0.113   0.116   0.104   0.104   0.091   0.094   0.088   0.125
1981-2010     0.299   0.324   0.303   0.291   0.269   0.271   0.260   0.268   0.255   0.277   0.267   0.271
1994-2013     0.356   0.404   0.394   0.409   0.380   0.395   0.386   0.397   0.387   0.422   0.429   0.364

BESTlo
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980     0.044   0.030  -0.013   0.021  -0.007  -0.001  -0.023  -0.022  -0.039   0.009  -0.009   0.043
1961-1990     0.112   0.092   0.082   0.082   0.049   0.033   0.020   0.024   0.010   0.051   0.040   0.096
1971-2000     0.214   0.219   0.186   0.195   0.139   0.140   0.131   0.133   0.105   0.131   0.119   0.196
1981-2010     0.392   0.397   0.376   0.359   0.294   0.298   0.293   0.300   0.274   0.314   0.287   0.341
1994-2013     0.469   0.500   0.487   0.489   0.412   0.437   0.427   0.436   0.410   0.457   0.458   0.448

C.Wkrig
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980    -0.067  -0.064  -0.101  -0.050  -0.047  -0.018  -0.028  -0.021  -0.036  -0.026  -0.037  -0.055
1961-1990     0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000
1971-2000     0.117   0.144   0.124   0.128   0.104   0.126   0.128   0.121   0.107   0.095   0.096   0.103
1981-2010     0.325   0.348   0.334   0.312   0.279   0.296   0.308   0.315   0.294   0.310   0.303   0.287
1994-2013     0.428   0.466   0.462   0.464   0.405   0.445   0.456   0.473   0.448   0.486   0.512   0.420

TempLSmesh
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980    -0.086  -0.086  -0.118  -0.077  -0.077  -0.058  -0.065  -0.052  -0.058  -0.051  -0.061  -0.075
1961-1990     0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000
1971-2000     0.123   0.145   0.121   0.134   0.107   0.113   0.105   0.101   0.087   0.090   0.095   0.114
1981-2010     0.303   0.330   0.310   0.303   0.268   0.265   0.260   0.267   0.255   0.283   0.285   0.279
1994-2013     0.383   0.427   0.413   0.436   0.380   0.393   0.382   0.408   0.399   0.444   0.476   0.393

BESTla
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980     0.116   0.022  -0.062   0.071  -0.087  -0.085  -0.080  -0.077  -0.060   0.121  -0.010   0.141
1961-1990     0.250   0.146   0.159   0.200   0.046  -0.026   0.001   0.012  -0.001   0.162   0.072   0.234
1971-2000     0.441   0.404   0.321   0.414   0.197   0.166   0.165   0.190   0.134   0.281   0.169   0.424
1981-2010     0.768   0.730   0.678   0.671   0.435   0.410   0.398   0.429   0.417   0.598   0.489   0.648
1994-2013     0.867   0.899   0.855   0.895   0.641   0.670   0.609   0.650   0.689   0.872   0.836   0.795

GISS.Ts
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980     0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000
1961-1990     0.141   0.139   0.183   0.139   0.153   0.125   0.140   0.122   0.116   0.096   0.091   0.105
1971-2000     0.329   0.361   0.362   0.340   0.322   0.322   0.325   0.309   0.273   0.247   0.244   0.289
1981-2010     0.584   0.622   0.616   0.558   0.526   0.527   0.526   0.534   0.521   0.507   0.497   0.517
1994-2013     0.716   0.768   0.758   0.708   0.666   0.696   0.669   0.703   0.721   0.723   0.743   0.692

CRUTEM.4
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980    -0.105  -0.129  -0.196  -0.110  -0.111  -0.054  -0.072  -0.062  -0.042  -0.057  -0.085  -0.074
1961-1990     0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000
1971-2000     0.239   0.287   0.201   0.214   0.171   0.188   0.152   0.163   0.137   0.139   0.099   0.206
1981-2010     0.590   0.617   0.541   0.493   0.412   0.435   0.403   0.434   0.406   0.467   0.437   0.453
1994-2013     0.727   0.784   0.718   0.716   0.606   0.668   0.630   0.655   0.660   0.740   0.745   0.629

NOAAla
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980    -0.438  -0.499  -0.466  -0.430  -0.346  -0.280  -0.250  -0.271  -0.229  -0.236  -0.222  -0.370
1961-1990    -0.259  -0.317  -0.194  -0.247  -0.186  -0.193  -0.156  -0.174  -0.159  -0.149  -0.101  -0.245
1971-2000     0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000
1981-2010     0.359   0.352   0.371   0.279   0.246   0.238   0.247   0.271   0.254   0.334   0.346   0.233
1994-2013     0.442   0.523   0.546   0.522   0.459   0.478   0.488   0.486   0.520   0.625   0.687   0.375

HADSST3
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980    -0.033  -0.027  -0.032  -0.033  -0.032  -0.026  -0.015  -0.008  -0.031  -0.036  -0.038  -0.034
1961-1990    -0.010  -0.010  -0.008  -0.011  -0.010  -0.003   0.009   0.008  -0.004  -0.009  -0.011  -0.012
1971-2000     0.055   0.059   0.072   0.061   0.068   0.072   0.091   0.086   0.069   0.051   0.057   0.050
1981-2010     0.180   0.186   0.203   0.201   0.211   0.221   0.245   0.247   0.218   0.202   0.186   0.176
1994-2013     0.260   0.276   0.295   0.302   0.308   0.326   0.368   0.375   0.329   0.316   0.301   0.278

NOAAsst
Base period    Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
1951-1980    -0.153  -0.152  -0.159  -0.158  -0.157  -0.157  -0.157  -0.153  -0.161  -0.161  -0.164  -0.165
1961-1990    -0.087  -0.087  -0.091  -0.093  -0.093  -0.094  -0.102  -0.100  -0.099  -0.097  -0.098  -0.096
1971-2000     0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000
1981-2010     0.108   0.115   0.122   0.128   0.131   0.136   0.129   0.128   0.121   0.117   0.108   0.105
1994-2013     0.173   0.183   0.191   0.206   0.216   0.228   0.233   0.239   0.223   0.218   0.205   0.188

Wednesday, June 17, 2015

Checking on May Temperatures

For the first time since I have been tracking both NCEP/NCAR and surface temperatures (TempLS), there has been a substantial disagreement. The TempLS Mesh result showed May about the same temperature as April, and this was backed up by GISS. But NCEP/NCAR suggested a warmer month, comparable to February.

There was a new factor in the mix - GHCN has a new version 3.3. But its changes are mainly methodological, and made little difference to previous months. Both TempLS and GISS use that version. There is also a new version of ERSST, v4, but neither TempLS nor GISS (AFAIK) use that. TempLS will soon.

The difference does not seem very robust, though. TempLS Grid did show a significant rise, of about 0.05°C, which does indeed match February for that measure. This is working from the same data but with different (grid-based) weighting, which has lesser coverage. Since NOAA and HADCRUT share that characteristic, I think they might show May somewhat warmer.

HADSST3 also showed a rise in SST. So I wondered: if so much is rising, there must be something identifiable in going from April to May which particularly affects the GISS/TempLS indices. So, below the fold, I'll show a map and a table.

Here is an active WebGL plot of the differences, May anomaly - April, by stations (red dots) with readings in both months. The spacing between is linearly interpolated and shaded. The color scale is highly non-linear, with the extremes being quite large.


The cool spots are in the Arctic, the Antarctic, and particularly NE Siberia. The ocean warms by a small, fairly uniform amount. The colors don't give a particularly good quantification of the temperature differences at the extremes. Here is a table of the ten GHCN locations that cooled most, by anomaly difference:

May-Apr   Latitude  Longitude  Name
 -8.19      80.62      58.05   GMO IM.E.T.
 -8.08      79.50      76.98   OSTROV VIZE
 -6.26      81.60     -16.67   NORD ADS
 -5.82      77.72     104.30   GMO IM.E.K. F
 -4.82     -68.58      77.97   DAVIS
 -4.42      49.65      94.40   BARUUNTURUUN
 -4.38      36.32    -119.64   HANFORD 1 S
 -4.24      73.50      80.40   OSTROV DIKSON
 -4.24     -67.60      62.87   MAWSON
 -4.16     -78.45     106.87   VOSTOK


It's heavily weighted toward N and NE Siberia. I've added a facility, which I'll formalise, to the NCEP/NCAR map. The table on the right allows you to choose a day, by mousing over and tracking what it says just NE of the globe - click when you have what you want. It now allows you to choose the 0'th of the month, for Jan-May 2015, and you get the map of the monthly average anomaly. Selecting 2015-5-0, I get the May average map. Here's a plot of the Arctic region:



The reanalysis certainly shows a cold anomaly in the Arctic, but nothing special in NE Siberia, and it was cold there in April too. That seems to be the main difference between GHCN and reanalysis.

Update. Here's another interesting table. This time I've weighted the anomaly differences to show their contribution to the TempLS sum. So when it says Qiqihar -0.00149, that means Qiqihar contributed that amount to the May-April anomaly difference. The weighted order is different from the absolute order, and the amounts are fairly small. The Antarctic stations now look more significant than the Siberian ones. The discrepancy between NCEP and GISS/TempLS is of order 0.05°C, so none of these, if changed, would account for more than about 1/30 of that.

May-Apr weighted  Latitude  Longitude  Name
    -0.00174       51.12      99.67    RINCHINLHUMBE
    -0.00149       47.38     123.92    QIQIHAR
    -0.00130      -68.58      77.97    DAVIS
    -0.00123       29.07    -110.95    HERMOSILLO,SO
    -0.00123       79.50      76.98    OSTROV VIZE
    -0.00114      -67.60      62.87    MAWSON
    -0.00105       67.25     167.97    ILIRNEJ
    -0.00102       45.97     128.73    TONGHE
    -0.00098       30.25      56.97    KERMAN
    -0.00093       53.70      91.70    MINUSINSK


Tuesday, June 16, 2015

GISS has global May temperatures same as April

GISS has reported an anomaly average of 0.71°C for May 2015, the same as for April. This is in agreement with TempLS mesh, but in contrast to the rise in the troposphere measures RSS and UAH.

I was surprised by the TempLS result, since the NCEP/NCAR index had indicated a much warmer result. HADSST3 also rose in May. But the GISS result essentially confirms it. It coincides with the new version 3.3 of GHCN, which would have influenced both indices. But that change made little difference to previous months. I am planning to post a study of the anomaly differences between April and May.

TempLS Grid also rose by about 0.05°C, so I think May rises in HADCRUT and NOAA are likely.

Below the fold is the customary comparison between GISS and TempLS.

Here is the GISS map for May:



And here is the corresponding TempLS map, using spherical harmonics:




Wednesday, June 10, 2015

TempLS May down slightly from April

GHCN data was delayed slightly for May by the release of Version 3.3. When the data became available, it was fairly complete. It showed a slight drop of 0.01°C from April (0.628 to 0.618). Satellite measures rose considerably.

Update 12 June. I'm still uncertain of this result. GHCN seems unsettled; according to my count, the most recent file has for May about 300 stations fewer than the day before.

This is at variance with the reanalysis data, which suggested a much warmer month. I wondered if V3.3 had an effect, but previous months were much as before. I'll watch the next few days for any changes, but as I said, most of the data is in. So May is looking more like April (cool for 2015) than February.

Details are in the report here. N Russia and Alaska had very warm spots; the poles were cool. The Arctic coolness was reflected in the reanalysis results. Generally the coolness was due to land; SST rose, as reflected by HADSST3.

There is an interesting Tech Note from NOAA on ver 3 here.


Thursday, June 4, 2015

Current Arctic Temperatures

I posted about a gadget for displaying NCEP/NCAR based average temperatures for user-selected Arctic regions. I have now embedded it in the maintained data page, so it will update whenever the other NCEP data does - usually daily. Operating guidance is at the original post.

Commenter Nightvid asked about including past years, and I'll do that sometime. It would overload the page to include the data by default, so I need a user-initiated download scheme.

I'm thinking of a similar gadget for the whole Earth. The current one is for absolute temperatures, which is OK for special regions like the Arctic. It would also be OK for, say, the NINO regions, but less good for continents. The alternative is anomalies, which don't show the seasonal effect. It's a dilemma.