Even so, NOAA, again like most surface indices, still showed the hottest July in the record, and indeed each of at least the last 12 months has been the hottest such month (largest anomaly) to date. So the chances of a record 2016 are high, and as I have done previously, I want to show graphically how the rest of the year must fare for that to happen. In the past I have presented the progress toward the end-of-year average as a race. But I think more information is conveyed by the type of graph that Sou makes for GISS. This shows the progress during the year of the average to date, compared with other warm years. It has the characteristic that initial warmth may tail off toward the end, which makes 2016 a little hard to predict. I've supplemented it with an extrapolation (faint) assuming that the rest of the year continues as warm as the most recent month. I have made plots for the same set of indices as in the regularly updated comparison of 2016 with 1998. So here it is; you can use the arrows at the top to cycle between plots:
For the meaning of the headings, see the glossary here. All the surface measures (land/ocean, land and SST) are clearly projecting a record if the most recent month's temperatures are maintained, with a good deal in reserve. The two lower troposphere indices are projecting a record (ahead of 1998) by a very small margin. There was an end-of-year dip in 1998; if the same happens in 2016, it may fall just short of 1998.
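For anyone who wants to reproduce the idea, here is a minimal sketch in Python (with made-up numbers, not the code behind these plots) of how the average to date and the faint extrapolation can be computed:

```python
import numpy as np

# Hypothetical monthly anomalies (deg C) for January-July of the year so far;
# illustration only, not values from any particular index.
months_so_far = np.array([1.05, 1.20, 1.28, 1.08, 0.93, 0.90, 0.87])

# Progressive average to date: the mean of January..m for each month m.
progressive_avg = np.cumsum(months_so_far) / np.arange(1, len(months_so_far) + 1)

# Faint extrapolation: assume the remaining months repeat the latest anomaly.
remaining = 12 - len(months_so_far)
projected_year = np.concatenate([months_so_far, np.full(remaining, months_so_far[-1])])
projected_annual_mean = projected_year.mean()

print("Average to date:", round(progressive_avg[-1], 3))
print("Projected annual mean:", round(projected_annual_mean, 3))
```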
Housekeeping - where to note data glitches.
On another subject: while I was away recently, some of the regular data streams failed (Walter Dnes helped out, thanks). This may happen at other times too, so there is a question of where it is best to comment on such things. I have reinstated the comment facility on each of the data pages, and I think this is the most natural place; the comments will appear in the "latest comments" list. I have also amended the title of the top listed page to "Notes, and an index...". The idea is that if you want to note something that is not part of a thread or an existing page, this could be the place. A downside is that comments make pages slower to load. A few short comments won't matter, but it is conceivable that after a long time I may have to prune the list. If so, I'll try to remove only old comments (on pages) that were relevant to a specific time.
Hello Nick,
Thanks for posting a chart comparing different years, with superposed plots showing them.
I like to use that too, especially when comparing e.g. a set of MEI plots each describing an El Niño 2-year sequence:
http://fs5.directupload.net/images/160822/j2aemswq.pdf
Would it not be interesting as well to display these temperature anomaly plots not as anomalies wrt a global baseline (e.g. 1951-1980 or 1981-2010), but rather wrt their common start, e.g. the January of each of these years?
That would, imo, have the advantage of showing each year of each temperature record independently of its anomaly level in that record, which would make the years easier to compare with one another: all would start at 0.0:
http://fs5.directupload.net/images/160822/56vo8byr.pdf
Here you really see how weak 2015/16 is compared to 1997/98 and 1982/83.
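For concreteness, a minimal sketch of that re-baselining in Python, using a made-up year of anomalies (illustration only, not real index values):

```python
import numpy as np

def rebaseline_to_january(monthly_anoms):
    """Shift one year of monthly anomalies so that its January value sits at 0.0."""
    monthly_anoms = np.asarray(monthly_anoms, dtype=float)
    return monthly_anoms - monthly_anoms[0]

# Illustration with made-up numbers (not values from any real record):
example_year = [0.8, 0.9, 1.1, 0.9, 0.7, 0.6, 0.6, 0.5, 0.6, 0.7, 0.8, 0.9]
print(rebaseline_to_january(example_year))
```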
Bindi,
These plots are of progressive average anomaly, so the start is indeterminate. I could do plots just of anomaly; that is done in these comparisons with 1998. They start in July rather than January, and aren't aligned at the start, but I think it is fairly easy to make the comparison by eye.
Although 2016 may have been weak in MEI, I think the temperature response was stronger, though maybe the peak was narrow compared with 1998. However, 2015 was also a "wannabe" El Nino year, which makes it a bit different from 1997.
Yes Nick: it was stronger. It is interesting to see the difference between 1997/98 and 2015/16:
http://fs5.directupload.net/images/160824/2bx33ls9.pdf
Here Klaus Wolter's MEI has been adjusted a bit to offset both the time lag and the size difference relative to the temperatures, and to place its 1998 peak exactly at UAH's.
You see that in 2016 both UAH and GISS exceed MEI by such an amount, compared with 1998, that it becomes difficult to view their peak as a temperature response to MEI.
Having additionally applied a running mean to the 3 time series, I was surprised to discover that MEI shows a decline, as opposed to the temperatures.
How would you interpret this?
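(For anyone wanting to try a similar adjustment, here is a minimal sketch in Python; the lag, scale and smoothing window are assumed placeholders, not the values used for the chart above.)

```python
import numpy as np
import pandas as pd

def align_index_to_temps(index_series, lag_months=4, scale=0.1, smooth=12):
    """Shift an ENSO index forward by a lag, rescale it toward temperature units,
    and apply a centred running mean. The lag, scale and window are placeholders."""
    shifted = index_series.shift(lag_months)           # temperatures lag the index
    scaled = shifted * scale                           # crude amplitude match
    return scaled.rolling(smooth, center=True).mean()  # running mean

# Illustration with a synthetic series (not real MEI values):
dates = pd.date_range("1996-01-01", periods=240, freq="MS")
mei_like = pd.Series(np.sin(np.linspace(0, 20, 240)), index=dates)
print(align_index_to_temps(mei_like).dropna().head())
```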
ENSO is an erratic standing wave mode of the equatorial Pacific. The way I think about it, unless one has a model for the entire oscillation (which is likely far from being chaotic) it doesn't make sense to compare any two intervals. To understand this, consider a hypothetical situation where the 1997/1998 and 2015/2016 intervals were identical in profile. In that case, the underlying trend would likely have a period of 2015-1997 = 18 years. Yet we know that is not the case, and an actual repeat period in over 130 years of ENSO measurements has not been detected yet.
So basically, by trying to compare any two intervals you will get the wrong answer since you will be trying to force an equivalence between two behaviors that simply does not exist. That's why armchair prognosticators of El Nino are usually wrong -- in that they take the lazy way out and think that a comparison between two intervals is enough to make a prediction.
This is another one of those truisms that's not obvious until someone points it out to you. It's also why, even though I have a fairly interesting model of ENSO cooking, I am not going to make a solid prediction for a particular future cycle. It really is more important to get the physics right; the same goes for behaviors such as QBO.
Not to say it isn't fascinating reading this kind of blog where everyone is concentrating on following the wiggles in a time series, but unless the physics is there behind it, predicting anything is a lot like reading tea leaves at this stage.
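(For anyone curious, one simple way to look for such a repeat period is the autocorrelation of a long monthly ENSO index. A minimal sketch, assuming a hypothetical CSV file of monthly values; a truly periodic signal would show high, regularly spaced peaks.)

```python
import numpy as np
import pandas as pd

# Hypothetical file with columns "date" and "enso"; point this at whatever
# long monthly ENSO index you actually use.
enso = pd.read_csv("enso_monthly.csv", parse_dates=["date"], index_col="date")["enso"]

x = enso.to_numpy() - enso.mean()
acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation at lags >= 0
acf /= acf[0]                                       # normalise so lag 0 equals 1

# Report the strongest correlations beyond a two-year lag.
lags = np.argsort(acf[24:])[::-1][:5] + 24
print(list(zip(lags.tolist(), np.round(acf[lags], 2).tolist())))
```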
Bindi
Delete"I was surprised to discover that MEI experiences a decline as opposed to the temperatures. "
Well, one obvious interpretation is that the warming was due to AGW and happened despite MEI. But I would want to check the homogeneity of MEI over decades.
"But I would want to check the homogeneity of MEI over decades."
Next to the MEI data for 1950 to today
http://www.esrl.noaa.gov/psd/enso/mei/table.html
there is also
http://www.esrl.noaa.gov/psd/enso/mei.ext/table.ext.html
starting with 1871.
This helped me in publishing this chart at WUWT:
http://fs5.directupload.net/images/160824/anlxfpuj.jpg
Now, as to checking whether or not this dataset fulfills your expectations on homogeneity: that's a job you'll surely do far better than I ever could.
It is interesting to see the good ENSO-temp fit between 1940 and 1990, whereas temperature responses to ENSO
- between 1880 and 1940, and
- between 1990 and today
were surprisingly weak.
Viewed over such a long term it seems - at least to layman Bindidon - that ENSO can hardly be the climate driver it is so often claimed to be, e.g. at WUWT.
Does this have anything to do with this Wolter comment in his monthly MEI discussion?
...The MEI now stands for 20.1% of the explained variance of all six fields in the tropical Pacific from 30N to 30S, one month after its annual minimum of importance. Eighteen years ago, right after the MEI was introduced to the internet, the explained variance for June-July 1950-1998 amounted to 23.3%. This drop-off by more than 3% reflects the diminished coherence and importance of ENSO events in much of the recent 18 years. The loading patterns shown here resemble the seasonal composite anomaly fields of Year 0 in Rasmusson and Carpenter (1982). ...
How do you stitch together
http://www.esrl.noaa.gov/psd/enso/mei/table.html
with
http://www.esrl.noaa.gov/psd/enso/mei.ext/table.ext.html
?
The Ext only goes to 2005, yet the original MEI is up to date. Where they overlap, the numbers do not agree at all, and the difference is not systematic either. As it turns out, MEI.ext is a very simplified version of the MEI and only combines the SOI and NINO3.4 indices. This is essentially SLP plus SST.
So don't use MEI.ext, as it only goes to 2005, and you will only be able to extend it if you know the trick of combining the two indices.
Yet, it is actually a pretty good idea, as the SOI is fairly noisy and adding the NINO3.4 to the mix does improve the signal-to-noise ratio.
I have been using my own version of the mixture in modeling ENSO.
This is a wave-equation transformed fit to the combined SOI+NINO34:
http://imageshack.com/a/img921/1870/SDw7kW.png
It includes a biennial modulation and a phase inversion between 1980 and 1996. Other than that it is a simple periodic forcing combining just a few factors related to known angular momentum frequencies.
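(For anyone who wants to experiment along these lines, here is a minimal sketch of one plausible way to blend the two series, standardising each over a common baseline before averaging. This is an assumption for illustration, not the actual mixture behind the fit above.)

```python
import numpy as np
import pandas as pd

def combine_soi_nino34(soi, nino34, baseline=("1981-01-01", "2010-12-31")):
    """Average two standardised ENSO indices into one. SOI is sign-flipped so
    that positive values mean El Nino in both inputs. All details are assumptions."""
    def standardise(s):
        base = s.loc[baseline[0]:baseline[1]]
        return (s - base.mean()) / base.std()
    return 0.5 * (standardise(-soi) + standardise(nino34))

# Illustration with synthetic series (replace with the real monthly indices):
dates = pd.date_range("1950-01-01", "2016-08-01", freq="MS")
rng = np.random.default_rng(0)
soi = pd.Series(rng.normal(size=len(dates)), index=dates)
nino34 = pd.Series(rng.normal(size=len(dates)), index=dates)
print(combine_soi_nino34(soi, nino34).tail())
```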
Well, whut... sometimes a bit of sound pragmatism is imo welcome.
I'm neither a climatologist nor a mathematician, and I don't use the data presented to us to do science. So I have a bit more freedom of interpretation.
However inaccurate it may look to you, Wolter's work is perfect for me.
The time coincidence of its 1877/78 peak with HadCRUT4 or Berkeley data (though the two differ by 1 month) was a good sign for me.
Now, as concerns its amplitude: who am I to doubt its correctness?
Bindidon said:
Delete"Now, what concerns its amplitude: who am I to doubt about its correctness?"
I have a different focus than many of the commenters or consumers of the climate data following this blog. What's being reported here I essentially reproduced and accepted several years ago, so I am OK with it. The discussion we are having is a minor quibble. Do a Google search on CSALT temperature to see why I have moved on.
My more recent objective is to parse the mess of atmospheric physics that contrarian scientists such as Lindzen, Salby, Curry and others (Pielke Sr to a degree) have created over the years. The ideal way to do this is to start with the most fundamental of the behaviors such as QBO and ENSO and work the physical models out from scratch.
What I am finding out is that the foundational equations of the GCMs are not being presented at a level that students of other scientific disciplines are used to. Coming from a physics background, I am used to working with Maxwell's equations and applying those from scratch to various geometries and boundary/initial conditions. And this is from a purely analytical perspective, in that a pencil and paper can enable a derivation.
Yet what I think Lindzen in particular has done is take equations that are perfectly acceptable and bypass any linear progression, creating a bewildering mess out of them. I essentially started from scratch and think that I have a much more acceptable model of QBO than what is out there (which is manipulated from Lindzen's original model). And this is applying nothing more than what one learns in calculus-based physics courses for solving problems.
https://forum.azimuthproject.org/discussion/1640/predictability-of-the-quasi-biennial-oscillation#latest
I believe that this is really how we are going to make progress in understanding natural variability. We have to address the uncertainty mess that Lindzen, Curry, and others have created. Remember that it is the likely insane Murry Salby who has written two textbooks on atmospheric physics, which are known to contain some jarringly poor explanations. There is something seriously wrong unless we can correct all the misguided detours that they have been pushing over the years. Science should be self-correcting, but it needs some motivation to get it going. As Yogi Berra didn't say: We can only double our effort if we start from somewhere other than zero.
Certainly looks like a reasonable projection to the end of 2016. An anomaly of 0.10 °C above 2015 would be about 99% statistically likely to have exceeded 2015 as the warmest year on record.
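(A minimal sketch of how such an exceedance probability can be estimated; the margin and the standard error here are assumed placeholder values, not figures from any particular index.)

```python
from math import erf, sqrt

def prob_exceeds(margin, sigma):
    """Probability that the annual mean truly exceeds the previous record, assuming
    a normally distributed error on the margin with standard deviation sigma."""
    return 0.5 * (1.0 + erf(margin / (sigma * sqrt(2.0))))

# Placeholder numbers: a 0.10 degC margin and an assumed 0.04 degC standard error.
print(round(prob_exceeds(0.10, 0.04), 3))  # about 0.994
```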
Also, I don't like the "warmest month ever" discussion either. Climate scientists and communicators have spent so much time trying to get the concept of a global temperature anomaly across to a general audience that bringing in the global annual variation just to make a (weak) point is inconsistent and counterproductive.
Curious that the hack McIntyre brings up the "data torture" accusation against certain scientists, when it is the auditor himself who is going through truckloads of data to find isolated cases of questionable statistical analysis. Cherry picking itself is a form of data torture.
McIntyre won't touch anything related to physics, so this kind of stat analysis is the only sandbox he can play in.
--
McIntyre aside, the interesting question is whether self-similarity in intervals of a time series gets affected by AGW. I recommend comparing the largely (?) climate-change-immune indices such as SOI against past years. This is likely the most promising path to take for evaluating natural variations such as ENSO: http://contextearth.com/2016/05/12/deterministically-locked-on-the-enso-model/
"...it is the auditor himself who is going through truckloads of data to find isolated cases of questionable statistical analysis. Cherry picking itself is a form of data torture."
An excellent point, and one that's obvious once it's pointed out, but not before.
Magma said
Delete"An excellent point, and one that's obvious once it's pointed out, but not before."
I can't remember where I picked up on this point. It certainly wasn't my original idea. (although I will take full credit for any math or physics breakthroughs :)