Friday, May 9, 2014

Nonsense plots of USHCN adjustments


On 6 May, Anthony Watts posted at WUWT the following plot, which he said represented the effect of USHCN adjustments over the years. WUWT said:

"Yet as this simple comparison between raw and adjusted USHCN data makes clear…



[Goddard's plot of USHCN adjustments, as reproduced at WUWT]

…adjustments to the temperature record are increasing – dramatically. The present is getting warmer, the past is getting cooler, and it has nothing to do with real temperature data – only adjustments to temperature data. The climate reality our government is living in is little more than a self-serving construct."


Dramatically!

The WUWT text gave no source for the graph, but linked to this USHCN document, which contains no such plot. However, it did say you could click the graph for the source, and that led to this post from Steven Goddard's site.

The plot's title looks rather official, but Goddard did not give a source. You can see from the comments there that he made it himself, and he doesn't give much detail about how.

He did, in a comment, link to two tar files of data on his site, so I downloaded them. They are standard USHCN files of station monthly averages, raw and final.

I was surprised at the "dramatic" change, because USHCN had published a fairly well-known version 1 plot:

[USHCN version 1 plot of the average adjustments]
with much more modest changes, and since the adjustment is mostly TOBS (time of observation bias), such a different version seemed unlikely.

So I did my own calculation with the downloaded data. I'll show the R code below the jump. But here is the plot:
Update: Zeke points out in a comment below that the files Steven Goddard supplied were in °C, not °F. Since SG had quoted his results in °F, I assumed that was the form of the data; there was no other indication. I've fixed the plot, sticking with °F.
Update: A commenter, Peter O'Neill, who checked my code, noted perceptively that I had made a mistake in reading the data. I allowed four characters for each monthly value, which works for values between -9.99 and 99.99, but for -10 or below the minus sign is lost. My mistake came from originally thinking the data was in °F, so that -10 would be an unlikely monthly average for CONUS, though apparently it is not impossible.

So I've fixed it as he suggested. Mainly, the result is a lot smoother. The format error only did real damage when final and raw fell on opposite sides of -10, which is rare, and so it produced a jumpy result.
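
To illustrate the format issue, here is a toy example (a made-up record, not the actual USHCN file layout): a four-character fixed-width read drops the minus sign once values stored in hundredths of a degree reach -10.00 or below.

rec = "  -2748"                  # -27.48, right-justified in a 7-character field
as.numeric(substr(rec, 4, 7))    # reads "2748"  -> 27.48, sign lost
as.numeric(substr(rec, 3, 7))    # reads "-2748" -> -27.48, correct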

I now get a small spike in 2014, as Peter did. It is about 0.1°C, and I have worked out the reason: the average final-raw difference has a strong seasonal variation. Here are the monthly averages for 2013, in hundredths of a °C:
12, 17, 14, 1, -7, -9, -10, -14, -14, -8, -2, 3
So the first four months of 2014 get the positive part.
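
As a quick check with those numbers: the full-year mean of the 2013 differences is close to zero, but the mean over just the first four months is about 0.11°C, which matches the size of the spike.

d2013 = c(12, 17, 14, 1, -7, -9, -10, -14, -14, -8, -2, 3)  # final-raw, hundredths of a deg C
mean(d2013)/100       # full year: about -0.01
mean(d2013[1:4])/100  # Jan-Apr only: 0.11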







[Plot: USHCN V2.5 adjustment discrepancy, final minus raw, 1890-2014]

Pretty different. Note the y axis. I just subtracted raw from final, for all months where both were available, and did an unweighted average over months and stations. That is what Steven Goddard said he did. Personally, I would at least have averaged by state first, and then perhaps applied area weighting, but it probably wouldn't make much difference.

Update: Diagnosis

I suggested here that the problem is that SG formed annual averages of the final readings and of the raw readings separately, and differenced those. Zeke, who has the code, has confirmed this. But while the final data generally has no missing values, because of FILNET, the raw data has many missing station/months. So a difference of averages over two different populations is formed, and the data, being absolute temperatures, is very heterogeneous: spatially, but even worse, seasonally.

The spike is clear evidence of the problem. SG got one here, too, in Illinois. It is inevitable.

In 2014, the final set has 1218 values in each of the first four months: every station. But for raw there were, respectively, 891, 883, 883 and 645 values, the last being April. So there is a big preponderance of winter-month readings in the raw average, and the raw average is cooler than the final for that reason, not because of adjustments.
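
Here is a minimal sketch of the effect, with made-up numbers (not SG's code or real station data). Differencing the averages of two differently sampled sets manufactures a spurious "adjustment", while averaging the paired differences does not:

# hypothetical seasonal cycle of monthly means, deg C; final and raw agree exactly
clim  = c(0, 2, 7, 13, 18, 23, 25, 24, 20, 14, 7, 1)
final = clim                  # infilled, so all 12 months present
raw   = clim; raw[4:12] = NA  # only Jan-Mar raw reports are in so far

mean(final) - mean(raw, na.rm=TRUE)  # difference of averages: about 9.8, a pure artefact
mean(final - raw, na.rm=TRUE)        # average of differences: 0, as it should be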

Update: Some commenters have kindly contributed their own plots. I've appended these at the end.

Here is my R code. The contents of the tar files are in my directories "raw" and "final".

# Code for averaging USHCN adjustment differences
#Written by Nick Stokes, 9 May 2014
# See post at https://moyhu.blogspot.com.au/

# gather and sort data
f=list.files("raw")
x=c(".raw.tavg",".FLs.52i.tavg")
n=substr(f,1,11)  # station code
iy=1890:2014
#  w will be a big array of raw and adjusted, in the original file units of hundredths of a degree
# from 1890-2014
# (yr+12 months,125 years,1218 stations,raw+final)
w=array(NA,c(13,125,1218,2))
for(i in 1:length(n)){
  s=n[i];  # looping over stations
  for(j in 1:2){ # raw, then final
    y=x[j];  # read the file
    b=readLines(paste(c("raw/","final/")[j],s,y,sep=""))
    b=gsub("-9999","  NA ",b)
    h=as.numeric(substr(b,13,16))  # Read year
    # Read the 12 monthly values; 5-character fields (per Peter O'Neill's
    # correction below) so the minus sign of values below -10 is not lost
    for(k in 0:11*9+18)h=rbind(h,as.numeric(substr(b,k,k+4)))
    h=h[,h[1,]>1889]  # delete pre 1890
    kk=match(h[1,],iy) # line up
    w[,kk,i,j]=h # add
  }
  if(i%%100==0)print(Sys.time())  # it takes about 80 sec
}

x=w[2:13,,,2]-w[2:13,,,1]  # final-raw (omit years)
x1=colMeans(x,dims=1,na.rm=T)  # mean over months
x2=rowMeans(x1,na.rm=T)  # mean over stations
x2=round(x2/100,3) # convert from hundredths to degrees and round

# x2 are the annual averages Now do plot
graphics.off()
png("ushcn.png",width=800)
plot(1890:2014,x2,pch=19,xlab="Year", ylab="Final-raw in deg F",main="USHCN V2.5 adjustment discrepancy",col="blue")
lines(1890:2014,x2,col="blue")
text(1920,0,"Graph prepared by Nick Stokes, 9 May 2014")
text(1920,-0.05,"post at https://moyhu.blogspot.com.au/ 9 May")
dev.off()

Plots in comments:
Zeke shows this plot of raw and adjusted, and the difference:

[Zeke's plot of USHCN raw and adjusted averages, and their difference]
Here is Zeke's current version of the difference plot. He's using some gridding, which I think is better, but doesn't make a lot of difference (a bit smoother).

[Zeke's gridded version of the final minus raw difference plot]
Bruce Schuck (sunshinehours) linked max/min plots as well:

[Bruce Schuck's plots of USHCN final minus raw for tmax, tmin and tavg]
46 comments:

  1. I did a version with 2.5x3.5 lat/lon gridding and a land mask, and it looks quite similar: http://i81.photobucket.com/albums/j237/hausfath/USHCNHomogenizedminusRaw_zps284d69fe.png

    1. Thanks, Zeke. Yes, I thought some sort of area allowance should be used. Interesting that it seems to smooth a bit. But as you say, similar.

  2. So "Goddard" made it up?

    1. I think he just got it wrong. There are plenty of ways that can happen. That's why such a graph needs to come with a statement about who did the averaging etc.

      My guess is that he just averaged the sites that had raw data and separately, the sites with adjusted, and showed the difference. When averaging absolute degrees, it really matters which sites are in your sample. SG's commenter David A, who seems to be a fan, really tried to get an answer on that. Nothing from SG.

  3. Goddard made his code available; why didn't you use it to check that work instead of coming up with your own method?

    I don't see your method as valid, since your baseline offset does not match the pre-2000 NOAA provided plot. How do you account for that?

    1. One of the best ways to check work is to go about the process in an independent way and see if you get a similar answer. It isn't typically the fastest way, but provides confidence (or lack of confidence) in results. This is kinda how science as a whole works. It's not a new or novel concept.

    2. It is to some.

    3. This comment has been removed by the author.

    4. Also, where is Goddard's code? I can't seem to find it on the blog post linked.

    5. Anthony,
      I couldn't find any reference to code, at least on that thread. Steven Goddard described his method thus:
      "It is the numerical average of all final monthly minus raw monthly temperatures published by USHCN"
      That's what I tried to reproduce. I wasn't making up a new method.

      As to the baseline offset, SG's plot is actually similar there: near zero recently until the final spike. I assume that sometime between v1 and v2.5 they moved to the fairly standard convention whereby, in adjusting, the modern data is held fixed and the older data is changed relative to it.
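
      A minimal sketch (made-up numbers, not the USHCN algorithm) of that convention: when a break is detected, the segment before it is shifted so that the most recent data is left unchanged.

      obs   = c(10.2, 10.4, 10.1, 11.3, 11.5, 11.4)  # step change after the third value
      shift = mean(obs[4:6]) - mean(obs[1:3])        # estimated size of the break
      adj   = c(obs[1:3] + shift, obs[4:6])          # older data raised; recent values untouched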

  4. Nick: It would be nice if there was a way to look at the data in various stages of homogenization, in an effort to see how different corrections impacted the result. Then you could see how the raw data evolves to the adjusted product. It might help people visualize the necessary corrections to time-of-observation bias, land cover biases, loss/gain of stations in the same area, etc.

    Of course by the time we get to the global scale, the impact of the homogenization process is reduced. And glacier loss can't be explained by homogenization. Wildlife migration can't be explained by adjustments. Sea level doesn't just rise for no reason. We could go on and on.

  5. USHCN sure worked hard to cool off the 1930's.

    Wouldn't 0.5F move 1934 to the top of the list?

    1. No. Here is Zeke's plot, showing both raw and adjusted. 2012 is top either way. But 1934 is next in raw.

  6. Looked into it a bit more, and NCDC posts their data in degrees C, not F as Nick's import code assumes. So Goddard's results are not too far off, apart from the oddity of the last point.

    Here is what things should look like (in degrees F): http://i81.photobucket.com/albums/j237/hausfath/USHCNHomogenizedminusRaw_zps3725ac9a.png

    1. And here are Adjusted and Raw data over time:
      http://i81.photobucket.com/albums/j237/hausfath/USHCNAdjRawDiff_zpseb67fa50.png
      (Really wish hyperlinks worked in comments here...)

    2. Thanks, Zeke. I just used the data as supplied by Steven and assumed, since he gave the result in deg F, that his data was in F. So it's just a matter of re-labelling the axis. But still, even without the final spike, his plot had a range of a little over 1.8°F, or 1°C. That's almost double what we found.

    3. Zeke,
      Hyperlinks should work. Blogger sometimes gives you more privileges if you log in with an ident, eg wordpress or google.

    4. His plot is still a bit off, but not by nearly as much. Yet another reason that everything should just use metric units...

    5. Yes, but it is still nearly double the range. I've shown your plots at the end of the post.

  7. tmax: http://sunshinehours.wordpress.com/?attachment_id=4212
    tmin: http://sunshinehours.wordpress.com/?attachment_id=4213
    tavg: http://sunshinehours.wordpress.com/?attachment_id=4214

    tmin is barely touched.

    .75C for the 20s/30s/40s

    That is kind of grotesque

    1. Sorry. 0.75C for tmax is kind of grotesque when tmin is barely .2C for the same period

    2. Bruce,
      Your links aren't working for me.
      Your search - http://sunshinehours.wordpress.com/?attachment_id=4214 - did not match any documents.

    3. Try:

      http://sunshinehours.files.wordpress.com/2014/05/tmax-ushcn.png
      http://sunshinehours.files.wordpress.com/2014/05/tmin-ushcn.png
      http://sunshinehours.files.wordpress.com/2014/05/tavg-ushcn.png

    4. Hey, more corroboration! Thanks for that. The Tmax/Tmin breakdown is interesting.

      It's not so surprising that Tmax is most affected. TOBS was mostly a shift from late afternoon to morning reading. "Morning" meant when the rain gauges are read, which I think is meant to be 10am. Afternoon reading meant a warm bias, because of the possibility of double counting warm afternoons. Correcting that would bring down Tmax. Morning would go the other way, but I think there is less chance of 10am providing two minima.

    5. To the adjuster goes the spoils.

      By the way, I never use R matrices. How would I filter for each month?

    6. These are actually arrays, sometimes with more than two indices, and the months come first. The big one, w, has the year row before the 12 months, so its first index runs 1:13; to get just April, use w[5,,,]. The difference array (final minus raw) drops the year row, so April is h[4,,] (note the last index, raw vs final, has gone too). To get JJA, say, h[6:8,,].
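
      For example, a small sketch of that indexing, assuming the arrays from the code in the post (so h is months x years x stations):

      h = w[2:13,,,2] - w[2:13,,,1]                 # final - raw, year row dropped
      apr = h[4,,]                                  # April only, all years and stations
      jja = colMeans(h[6:8,,], dims=1, na.rm=TRUE)  # JJA mean for each year and station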

      Thanks for the better links. I've appended the images to the post, if that's OK.

  8. This spike at the end may be related to the "late data" problem we see with GHCN/GISS and NCDC's "state of the climate" reports. They publish the numbers ahead of dataset completeness, and those have warmer values, because I'm betting a lot of the rural stations come in later, by mail, rather than via the weathercoder touch-tone entries. There are a lot of older observers in USHCN, and I've met dozens. They don't like the weathercoder touch-tone entry because they say it is easy to make mistakes.

    And, having tried it myself a couple of times, and being a young agile whippersnapper, I screw it up too.

    The USHCN data seems to show completed data where there is no corresponding raw monthly station data (since it isn’t in yet), which may be generated by infilling/processing, resulting in that spike. Or it could be a bug of some sort in Goddard's coding, though I just don't see one, and I have the code. I've given it to Zeke to see what he makes of it.

    Yes the USHCN 1 and USHCN 2.5 have different processes, resulting in different offsets. The one thing common to all of it though is that it cools the past, and many people don't see that as a justifiable or even an honest adjustment.

    It may shrink as monthly values come in.

    1. Yes, I think that's a likely reason. The problem would be subtracting the averages of two different sets of stations, and attributing the result to adjustment. That's why I formed the differences first, so both raw and adjusted have to be available.

      Commenter David A tried to get that sorted at SG's site. No luck.

    2. Figured it out. Goddard is using absolutes rather than anomalies, and incomplete reporting of absolute temps can lead to pretty big differences if a station appears in raw but is not in the adjusted data yet.

    3. Yes, the big 2014 spike is inevitable. He got one here, a few days ago in Illinois, where the spike was held to show iniquitous data tampering. But it's inevitable at this stage of the year. The reason is that USHCN, through FILNET, infills and provides realistic values for all "final" data, for all stations. But raw data has missing values highly biased towards recent (late data).

      So final is the average of Jan-Apr uniformly. "raw" for 2014 has a preponderance of winter data. So final-raw has to have a big spike. In autumn, the spike will be reversed. It's an artefact of defective averaging. Using anomalies, or just weighting equally by months would fix it.

    4. I was on the case over a week before David A's comment, hammering on it, and far from being ignored, I was called crazy by both Goddard and his cheerleading squad. I wrote:

      “Or is this all really just a temporary artifact of various stations reporting late etc?”

      http://stevengoddard.wordpress.com/2014/04/26/noaa-blowing-away-all-records-for-data-tampering-in-2014/#comment-344582

      Then I was the first to take Tony Watts to task for re-posting Goddard's plot, pointing out that it was where skepticism “runs off the rails.”

      P.S. Your blog here is one of the most passive-aggressive sites I've ever tried to post on, erasing my message for lack of a Wordpress log on link, due to phishing prevention, then claiming my URL contains illegal characters, which I didn't know was about my message link or my bizarre suggested-format Wordpress log in name that seems to itself want a URL. You're a dick, Nick. Let's try Google. Oh gosh, that requires a full page reload. Nope, that didn't work, no Google log in, even after I logged out here from Wordpress. Ah, there's a hidden pop up menu activated by clicking in the Wordpress log in name text box (on my iPhone). Now I'm at the top of the damn page again, only to find a weird miniature text entry box way down below, with an Edit link, so no, it didn't post. Let's click again.

    5. Nik,
      I had followed the WUWT link to SG's site; it did not go to that thread. I did notice the plot, but didn't read the comments there. I cited David A's query because it was quite direct. But I did see your earlier queries at WUWT.

      Sorry about the site troubles. It will let you log in without ID, and with username, but doesn't make it obvious. It gives more privileges if you use a login. I don't control any of this - it's Google Blogger. I use a Google login without problems - if I'm already logged in to google, it seems to know.

  9. I've been in the basement at NCDC where the paper forms are dealt with. There's a fair amount of lag in the B91 COOP station monthly report mail forms received at NCDC. They have to be scanned and checked for scan errors, some have to be sent for hand transcription, and then they go to a sanity-check step that looks for things like missing minus signs, or 4's that look like 9's and get wrongly scanned/transcribed. There's a lot of handling involved to get the numerical data out of them.

    Often they are still waiting for final data into the next month for a few stations if observer mails late or forgets.

    1. You might be interested in my previous post, which is a grumble about GHCN errors.

  10. Yep, on this we agree.

    CLIMAT reports and GHCN/GISS reporting aren't always in sync, resulting in odd anomalies due to late data. GISS is missing several CLIMAT reports on a regular basis. Some GHCN stations which we know to be reporting regularly, disappear from GISS for no apparent reason.

    Stockholm is a good example, missing in GISS since 1994: http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=645024640000&dt=1&ds=14

    Yet we know the data from Stockholm observatory is still being recorded. http://bolin.su.se/data/stockholm/raw_individual_temperature_observations.php

    Moberg is the data keeper, hardly an obscure figure.

    Data plotted to 2012 here: http://people.su.se/~amobe/stockholm/stockholm-historical-weather-observations-ver-1.0/graphics/annual-temperatures/stockholm_annual_1756_2012_sve.png


    The "biggest problem facing mankind" needs better quality control. Gavin needs to do fewer TED talks and more data ingestion QC IMHO.



    1. I don't think GISS uses CLIMAT reports directly. As I understand, they get most of their land data from GHCN adjusted. I see GHCN unadjusted does stop in 1994 for Stockholm.

      BTW, I followed up the Wichita airport study here. There is a small but persistent daily difference; the ASOS is higher than the NWS station.

    2. It's worth mentioning that the folks at NCDC are hard at work on GHCN version 4, which will incorporate nearly all the stations in Berkeley Earth/ISTI and should deal with most of the legacy drops in the current GHCN-monthly.

  11. Any idea why Dec/Jan/Feb are adjusted so bizarrely?

    https://sunshinehours.wordpress.com/2014/05/10/ushcn-2-5-adjustments-final-raw/

    1. Bruce,
      In the early 20th century there are quite a lot of stations missing raw data, typically 80-90 per month (out of 1218), so there is scope for artefact there. I thought that winter might be worse, but in fact January is much better than most. No other obvious pattern.

      I suspect, though, that it isn't that the adjustment varied, but that the number of adjusted stations in the sample varied. Or it might be TOBS-related: maybe observers were apt to request a change more frequently in winter.

    2. I think the huge fluctuations in winter final-raw from year to year have no valid explanation. All the non-winter months have a pretty smooth curve.

      In winter, some years are +0.3C when the year before was -0.5C.

    3. Bruce,

      Think there might be a bug in your code. When I run the numbers I don't see anything weird in January, for example: http://i81.photobucket.com/albums/j237/hausfath/USHCNAdjustmentsJanuary_zps285bb682.png

  12. I think a small code correction is needed:

    for(k in 0:11*9+19)h=rbind(h,as.numeric(substr(b,k,k+3)))

    should I think be

    for(k in 0:11*9+18)h=rbind(h,as.numeric(substr(b,k,k+4)))

    Double digit negative temperatures can occur. For example, the minimum January mean in the raw data for 20140522 which I found as min(w[2,,,1], na.rm=TRUE) is -27.48, and would be treated as +27.48 by the unmodified code which loses the minus sign.

    Comparing (http://oneillp.files.wordpress.com/2014/05/moyhu_4_51.png), the corrected discrepancy, in red, smooths out the fluctuations quite a bit but introduces a final spike, for which I have no explanation so far, although I have not given it much thought yet - I have only just spotted the error.

    1. Thanks, Peter - yes, it looks like I'm stripping minuses off very cold months. Interesting that it mainly affects data from several decades ago. But it's quite an effect, and I'll fix the graph, hopefully without the spike.

    2. I've realised why it mainly affects old data. Getting the sign wrong doesn't matter much as long as it happens to both raw and final. Where it hurts is where one is above -10 and one below. That becomes more likely as the adjustments get larger, and because it is still rare, the result is jumpy.

      The reason I made the format error is that I originally thought the data was in F, and thought an average of -10F unlikely. I may have been wrong even then.

      I get the spike too. It's not very big. I think it means the adjustments have some seasonality which averages out over the year.

  13. oneillp is Peter O'Neill - I prefer to comment with a name in full. This blogspot "Comment as:" choice, and preview which is not a complete preview, confused me!

    1. Peter,
      Sorry about the name issue. The menu has a Name/URL option, which I think should work. I use a Google ID, which does it automatically, and also gives greater privileges, eg cut/paste. I don't know why. I think a Wordpress ID does the same.
