moyhu, by Nick Stokes<br /><h4>Graphs of 2017 global temperatures among record years for major indices (19 January 2018)</h4>A few days ago, I <a href="https://moyhu.blogspot.com.au/2018/01/review-of-2017-heat-records-and-recent.html">posted</a> some graphs in an updated style which shows 2017, as seen by <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">TempLS mesh</a>, in its place among record years (2017 came second) in a progressive record style. I also showed a detailed graph of the sequence of months from 2014 to 2017, showing how the warming introduced by the El Niño seems to be lasting. I said I would do a similar set of plots for the major indices when they appeared, as I <a href="https://moyhu.blogspot.com.au/2017/01/giss-and-noaa-both-find-that-2016-was.html">have done</a> in previous years. By now NOAA, HADCRUT and GISS have reported, as well as the satellite indices. So here is the set of progressive record plots. We have so far:<br /><ul><li> GISSlo - Gistemp land/ocean </li><li> HADCRUT 4 land/ocean </li><li> NOAAlo - NOAA land/ocean </li><li> UAH V6.0 - lower troposphere (TLT satellite) </li><li> RSS V4.0 - lower troposphere (TLT satellite) </li><li> CRUTEM 4 - land only </li><li> <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">TempLS mesh</a></li></ul>I'll add more as they arrive. You can find more information about the indices, with source links, <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L_intro">here</a>. The <a href="http://moyhu.blogspot.com/p/more-pages.html">Glossary</a> may help too. You can flick through the 7 images using the buttons below the plot.
<script src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/pages/webgl/MoyJSlib.js" type="text/javascript"></script><br /><div id="PxBody"></div><script type="text/javascript">
PxInit = function(p, n, stem) {
  var G = {};
  MoyJSlib(G);
  eval(G.var); // makes G.cr into cr etc
  p = getel(p);
  makeimg(p, n, stem);
};
PxInit("PxBody", 7, "https://s3-us-west-1.amazonaws.com/www.moyhu.org/2018/01/rec17_");
</script> <br />GISS and TempLS had 2017 in second place, as did RSS V4.0. HADCRUT, CRUTEM and NOAAlo put it behind 2015 in third place. UAH V6 had things in a very different order, with 1998 in second place, and 2017 a distant third. The grouping of the surface indices is commonly observed. TempLS mesh and GISS interpolate, giving more (and IMO due) weight to polar regions. NOAA and HADCRUT do so much less. So insofar as the warmth of 2017 was accentuated at the poles, the less interpolated indices tend to miss that. <br /><br />Here is the set of monthly averages for each of those indices, with as before the annual averages shown as horizontal lines in the appropriate color. Almost all months of 2017 were well above the 2014 average, even though 2014 was a record year in its time. <br /><br /><div id="PxBodz"></div><script type="text/javascript">PxInit("PxBodz",7,"https://s3-us-west-1.amazonaws.com/www.moyhu.org/2018/01/yr17_") </script> <br /><br /><h4>GISS December global up 0.02°C; 2017 was second warmest (19 January 2018)</h4>GISS <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">warmed slightly</a>, going from 0.87°C in November to 0.89°C in December (GISS report <a href="https://data.giss.nasa.gov/gistemp/news/20171218/">here</a>).
That is very similar to <a href="https://moyhu.blogspot.com.au/2018/01/december-global-surface-temperature.html">TempLS mesh</a>; I originally reported no change, but later data pushed that up to a 0.04°C rise. For GISS, that makes 2017 the second warmest year in their record; behind 2016 but ahead of 2015. Their report, with the annual summary too, is <a href="https://www.giss.nasa.gov/research/news/20180118/">here</a>. I showed some aspects of 2017 annual in context <a href="https://moyhu.blogspot.com.au/2018/01/review-of-2017-heat-records-and-recent.html">here</a>, and I'll do that for GISS and other indices in an upcoming post. <br /><br />The overall pattern was similar to that in TempLS. Cold in east N America, the Mediterranean, and far east Siberia. Very warm in most of Russia, and in the Arctic. A cool La Niña-ish plume, but warm in the Tasman Sea. <br /><br />As usual here, I will compare the GISS and previous TempLS plots below the jump. <br /><a name='more'></a><br />Here is GISS<br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2018/01/GISSdec.jpg" /><br /><br />And here is the TempLS spherical harmonics plot <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/12/map.png" /><br /><!--more--> <br /><div style="color: #aa0000;">This post is part of a series that has now run for six years. The GISS data completes the month cycle, and is compared with the TempLS result and map. GISS lists its reports <a href="https://data.giss.nasa.gov/gistemp/news/">here</a>, and I post the monthly averages <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>. <br />The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>.
Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show data with adjusted GHCN, and also with different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>.
<br /><br />A list of earlier monthly reports of each series in date order is here:</div><ol><li><a href="https://moyhu.blogspot.com.au/p/moyhu-index.html?NCEP%20Monthly">NCEP/NCAR Reanalysis report</a></li><li><a href="https://moyhu.blogspot.com.au/p/moyhu-index.html?TempLS%20Monthly">TempLS report</a></li><li><a href="https://moyhu.blogspot.com.au/p/moyhu-index.html?GISS%20Monthly">GISS report and comparison with TempLS</a></li></ol><br /><br /><h4>Review of 2017, heat records, and recent warm years (10 January 2018)</h4>Yesterday I <a href="http://moyhu.blogspot.com/2018/01/december-global-surface-temperature.html">posted</a> the December global anomaly (base 1961-90) results for <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">TempLS mesh</a>, and noted that it made 2017 the second warmest year, after 2016. I'd like to put that in a bit more context. For the last three years (eg <a href="https://moyhu.blogspot.com.au/2017/01/uah-first-index-with-record-warm-2016.html">here</a>) I have posted a progressive plot showing in steps the advance of the hottest year to date. Since 2014, 2015 and 2016 were each the hottest years to date, there was something new to show each year, and the plot showed the rapidity of those rises. This year, with 2017 in second place, it doesn't add new information to that style of plot. So I tried a way of adding information. I superimposed on the steps plot a column plot of each year's temperature. This means that you can follow the max outline, or focus on the columns, which also show how far the years following a record were cooler. It emphasises the warmth of 2017 relative to earlier years.
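The step outline in these plots is built on a simple running-maximum rule: a year enters the record outline only if it beats every earlier year. Here is a minimal JavaScript sketch of that rule, with made-up anomaly values purely for illustration (not the actual TempLS numbers), and `recordYears` a name invented here:

```javascript
// A year is a "record year" if its anomaly exceeds that of all earlier years.
function recordYears(years, anomalies) {
  var records = [];
  var max = -Infinity;
  for (var i = 0; i < years.length; i++) {
    if (anomalies[i] > max) {   // new progressive record
      max = anomalies[i];
      records.push(years[i]);
    }
  }
  return records;
}

// Illustrative values only: 2014, 2015 and 2016 each set a record,
// while 2017 comes in below 2016 and so does not.
var years = [2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017];
var anoms = [0.55, 0.47, 0.50, 0.54, 0.59, 0.73, 0.84, 0.76];
console.log(recordYears(years, anoms)); // [ 2010, 2014, 2015, 2016 ]
```

The superimposed columns are then just the raw `anoms` values, so a non-record year like 2017 still shows how close it came to the outline.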
Here is the plot:<br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2018/01/pex22.png" /><br /><br />The legend shows the color codes for the record years. I'll probably make an active plot of all the indices when they become available. But I was also curious about how 2017 came to be warmer than the El Niño year of 2015. So I drew a column plot by month of the last four years, shown by color: <img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2018/01/yr7.png" /><br />I've also marked each year's average in the appropriate colour. 2017 is almost a mirror image of 2015, and the main contribution to its warmth came from the first three months, a somewhat separate peak from the El Niño. But what is clear is that the apparent level of later 2016 and 2017 is a good deal higher than 2014, a record year in its day. Even the coolest month of 2016/7 (June 17) was at about the 2014 average.<br /><br />In my previous post, I reported December 2017 as virtually unchanged from November. Further data has made it a little warmer. In other news, the <a href="http://www.bom.gov.au/climate/current/annual/aus/#tabs=Overview">Australia BoM 2017 climate statement</a> is out, and here 2017 was the third warmest year, after 2013 and 2005. <br /><br /><h4>December global surface temperature unchanged; 2017 was second warmest year (9 January 2018)</h4><a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">TempLS mesh</a> anomaly (1961-90 base) was virtually unchanged, from 0.716°C in November to 0.721°C in December.
This compares with the <a href="https://moyhu.blogspot.com.au/2017/12/october-ncepncar-global-anomaly-down.html">rise</a> of 0.075°C in the NCEP/NCAR index, and a <a href="http://www.drroyspencer.com/2017/12/uah-global-temperature-update-for-november-20170-36-deg-c/">similar rise</a> (0.05) in the UAH LT satellite index. <br /><br />The TempLS average for 2017 was thus 0.757°C, which puts it behind 2016 (0.836°C), and ahead of 2015 (0.729°C), and so it was the second warmest year in the record. I expect this will be a common finding, although 2015 is close. I'll post a graph showing the history of records. <br /><br />The breakdown is interesting. The main cooling effect came from SST, well down on November. The balancing rises came from the Arctic and Siberia. Since TempLS, like GISS, is sensitive to Arctic temperature, indices like NOAA/HADCRUT may well show a fall. Otherwise the map shows those effects, along with the much-discussed cold around the Great Lakes region and also W Sahara. <br /><br />Here is the temperature map: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/12/map.png" /><br /><a name='more'></a><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show data with adjusted GHCN, and also with different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices.
There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. </div><br /><br /><h4>December NCEP/NCAR global anomaly up 0.075°C from November (4 January 2018)</h4>In the <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#NCAR">Moyhu NCEP/NCAR index</a>, the monthly reanalysis anomaly average rose from 0.253°C in November to 0.328°C in December, 2017, making it a mid-range month for 2017. The temperature did not oscillate as much as in some recent months. <br /><br />The main cool spots were Canada and E Siberia, NW Africa (Algeria) and Antarctica. East Europe and NW Russia were warm, and also Alaska and a band through the Rockies, S Calif. There was a notable warm spot in the seas around New Zealand. <br /><br />The annual average for 2017 was 0.376°C. This puts it behind 2016 (0.531) but ahead of 2015 (0.330) and 2014 (0.190).
<br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/data/freq/days.png" /><br /><br /><h4>Upgrading Moyhu infrastructure (29 December 2017)</h4>I have spent part of the holidays tinkering with Moyhu's infrastructure. I keep copies of all posts and pages offline, partly for backup, and partly so I can search them and also make universal changes. This was of great value when Blogger required use of https a couple of years ago. <br /><br />I was trying to revive the graphics gallery page, which never really recovered from that change, since it used copies of posts, which also need fixing. But I got side-tracked into upgrading the <a href="http://www.moyhu.blogspot.com.au/p/moyhu-index.html">interactive topic index</a>. I don't think readers use it much, but I do. Google provides a search for the blog, but it returns a display of complete posts, so needs precise search terms. The interactive search just returns a list of linked titles, so you get a wider sweep (maybe too wide). <br /><br />The new index looks like this (non-functional image): <br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/12/newindex.png" /><br /><br />I had clicked on the Marcott button, and the list of links (with dates) appears below. The topics have been extended considerably, and grouped into color-coded types. The types are listed on the left. They mostly work by a keyword search, so you get some false positives. I added some at the end (housekeeping, debunkings) which don't have natural keywords, and were manually identified. And I have separated out the three series of monthly postings that I do (NCEP/NCAR, TempLS and GISS).
It makes them easier to find, and because I exclude them from the keyword search, it reduces the false positives. <br /><br />The new arrangement should be much easier to update regularly. <br /><br />I have also added a blog glossary. We're used to tossing around terms like CRUTEM, but not everyone is, so I have given a basic description (with links). I have added it to the <a href="http://moyhu.blogspot.com/p/more-pages.html">bottom page</a>, now called "More pages, and blog glossary". The more pages part is just a few lines of links at the top, and then comes the glossary. I'm sure it will need extending - suggestions welcome. <br /><br />That was my New Year's resolution for 2017 :) <br /><br /><h4>Merry Christmas to all (23 December 2017)</h4><img src="http://www.theage.com.au/ffximage/2006/12/25/buller_cfa_wideweb__470x305,0.jpg" /><br /><br />Christmas starts early this year at Moyhu, so I'll take this opportunity to wish readers a Merry Christmas and a Happy New Year. Hopefully bushfire-free. In the spirit of recent examinations of past wildfires (Eli has <a href="http://rabett.blogspot.com/2017/12/fire-fire-burning-bright-how-many-acres.html">more</a>), I'll tell the story behind the picture above. The newspaper version is <a href="http://www.theage.com.au/news/national/white-christmas/2006/12/25/1166895230027.html">here</a>. <br /><br />In <a href="https://en.wikipedia.org/wiki/2003_Eastern_Victorian_alpine_bushfires">2003</a> and <a href="https://en.wikipedia.org/wiki/2006%E2%80%9307_Australian_bushfire_season">2006/7</a>, alpine and upland Victoria were swept by long running and very damaging bushfires. The area concerned was rugged terrain and sparsely inhabited.
Much was not so far below the snowline, so the trees there were not very tall, and the fuel somewhat less than for our wilder lowland fires. They were in summer, but not exceptionally hot weather, and of course cooler there. Consequently they burnt relatively slowly, with a fair chance of defending critical areas. Some were fought with great energy and danger to crews; this could not be expended everywhere, so they burnt for weeks and did much damage to mountain forests that take a long time to regrow. <br /><br />In December, 2006, our most popular ski resort, <a href="https://en.wikipedia.org/wiki/Mount_Buller_(Victoria)">Mt Buller</a>, was surrounded by fires, again burning over long periods. Although as said the conditions did not make for extreme wildfire, any eucalypt fire is dangerous when it runs up a hill slope. The many buildings of Mt Buller sit near the peak at about 1500 m altitude, and there is a single winding road out, which became impassable on 22nd December. So the fire crews who had assembled were on their own. Fortunately, the resort had a large water storage used for snow-making. But there was a long perimeter to defend. <br /><br />By the 23rd, flames were at that perimeter. I was at Buller the following winter, and saw that it had come right up to a northern edge road, with major buildings just on the other side. And it was pretty close on all sides. Fortunately, some rain came that evening, but still, there were more flareups on Christmas Eve. <br /><br />At last, later on the Eve, more serious rain came with another change in the weather, and somewhat amazingly for our summer solstice, it turned to snow. That indeed marked the end of the danger to the mountaintop. In the morning, when they awoke to a very welcome white Christmas, this photo was taken. <br /><br />Again, may your Christmas be as merry. With special hopes for those in the region of the Thomas fire. 
<br /><br /><h4>On Climate sensitivity and nonsense claims about an "IPCC model" (20 December 2017)</h4>I have been arguing recently in threads where people insist that there is an "IPCC model" that says that the change in temperature is linearly related to change in forcing, on a short term basis. They are of course mangling the definition of equilibrium sensitivity, which defines the parameter ECS as the ratio of the <b>equilibrium</b> change in temperature to a <b>sustained</b> (step-function) rise in forcing. So in this post I would like to explain some of the interesting mathematics involved in ECS, and why you can't just apply its value to relate just any forcing change to any temperature change. And why extra notions like Transient Climate Sensitivity are needed to express century scale changes. <br /><br />The threads in question are at WUWT. One is actually a series by a Dr Antero Ollila, latest <a href="https://wattsupwiththat.com/2017/12/11/on-the-ipcc-climate-model/">here</a>, "On the ‘IPCC climate model’". The other is by Willis Eschenbach, "<a href="https://wattsupwiththat.com/2017/12/18/delta-t-and-delta-f/">Delta T and Delta F</a>". Willis has for years been promoting mangled versions of the climate sensitivity definition to claim that climate scientists assume that ∆T = λ∆F on a short term basis, giving no citation. I frequently challenge him to actually quote what they say, but <a href="https://wattsupwiththat.com/2017/12/18/delta-t-and-delta-f/comment-page-1/#comment-2697580">no luck</a>. However, someone there did have better success, and that made it clear that he is misusing the notion of equilibrium sensitivity.
He quoted the AR5: <br /><br /><i>“The assumed relation between a sustained RF and the equilibrium global mean surface temperature response (∆T) is ∆T = λRF where λ is the climate sensitivity parameter.”</i><br /><br />Words matter. It says "sustained RF" and equilibrium T. You can't use it to relate any old RF and T. A clear refutation of this misuse is the use also of transient sensitivity (TCS). This typically is defined as the temperature change after CO₂ has increased by 1% per year, compounding over 70 years, by which time it has doubled. Climate sensitivity is normally expressed relative to CO₂ doubling, but it is linked to forcing in W/m² by<br />∆F = 5.35 × ∆ln([CO₂])<br />which implies ∆F = 3.7 W/m² for CO₂ doubling. Both ECS and TCS express the same change in forcing. But one is an initial change sustained until equilibrium is reached; the other is the temperature after that 70-year ramp. And they are quite different numbers; ECS is usually about twice TCS. So there can't be a simple rule that says that ∆T = λ∆F for any timing.<br /><br />A much better paradigm is that ∆T is a linear function of the full history of forcing F. This would be expressed as a (convolution) integral <br /><br />∆T(t) = ∫ w(t-τ) F(τ) dτ, where the integral is from the initial time to t. <br /><br />The weight function w is unknown, but tapers to zero as time goes on. I've dropped the ∆ on F, now regarding F as a cumulative change from the initial value, but a similar integral with increments could be used (related by integrating by parts). <br /><br />This is actually the general linear relation constrained by causality. w is called the impulse response function (IRF). It's the evolution (usually decay) that you expect of temperature following a heat impulse (F dt), in the absence of any other changes.
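As an aside, the forcing arithmetic quoted above is easy to check directly. A small JavaScript sketch (the function name `co2Forcing` is just for illustration):

```javascript
// Quoted forcing relation: dF = 5.35 * ln(C/C0), in W/m^2 (natural log).
function co2Forcing(ratio) {  // ratio = CO2 concentration / initial concentration
  return 5.35 * Math.log(ratio);
}

console.log(co2Forcing(2).toFixed(2)); // 3.71 W/m^2 for a doubling

// The TCS scenario: 1% per year compounded for 70 years is almost
// exactly a doubling, so it ends at the same forcing as the ECS step.
var ratio70 = Math.pow(1.01, 70);
console.log(ratio70.toFixed(3));             // about 2.007
console.log(co2Forcing(ratio70).toFixed(2)); // about 3.73 W/m^2
```

So both scenarios deliver essentially the 3.7 W/m² doubling forcing; the difference between ECS and TCS lies entirely in the time path, not the final forcing.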
That pulse F*dt at time τ would then have decayed by a factor w(t-τ) by time t, and by linear superposition, the effect of all the pulses of heat that would approximate the continuous F over the period adds to the stated integral. <br /><br />I gave <a href="https://wattsupwiththat.com/2017/12/18/delta-t-and-delta-f/comment-page-1/#comment-2696602">at WUWT</a> the example of heating a swimming pool. Suppose it sits in a uniform environment, and has been heated at a uniform burn rate, and settles to a stable temperature. Then you increase the heating rate, and maintain it there. The response of the pool might be expected to be something like: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/12/pool.png" /><br /><br />I'm actually using an exponential decay, which means the difference between current and eventual temperature decays exponentially. That is equivalent to the IRF being an exponential decay, and the curve has it convolved with a step function. The result, after a long time, is just the integral of w. This might be called the zeroth order moment. It is what I have marked in blue and called the ECS value. The linearity assumed is that this value varies linearly with the forcing jump. It doesn't mean that T is proportional to F. How could it be, if F is held constant and T is still warming? <br /><br />So let's look at that ECS convolution integral graphically: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/12/pool1.png" /><br /><br />Red is the forcing, blue the IRF, purple the overlap, which is what is integrated (IOW, all of it). <br /><br />But suppose we are just interested in shorter periods. We could use a different forcing history, which would give a number relevant to the shorter term part of w. 
I'll use the linear ramp of TCS, this time just over the range 4-5: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/12/pool2.png" /><br /><br />I could have omitted the 0-4 section, since there is zero integrand there. However, I wanted to make a point. The integral now is weighted to the near end of the IRF. Since the IRF conventionally has value 1 at zero, it could be represented over the range of the ramp by a linear function, and the resulting integral would give the slope. That is why I identified TCS with the slope at zero of the response curve. But it isn't quite that, because w isn't quite linear. <br /><br />So this shows up a limitation of TCS. It doesn't tell about the part of w beyond its range. That's fine if we are applying it to a forcing known to be zero before the ramp commences. But usually that isn't true. We could extend the range of the TCS (if T data is available) but then its utility in giving the derivative w'(0) would lessen. That means that it wouldn't be as applicable to ranges of other lengths. <br /><br />Because of the general noise, both of observation and GCM computation, this method of getting aspects of the IRF w by trial forcings is really the only one available. It gives numbers of some utility, and is good for comparing GCMs or other diagnostics. And in the initial plot I tried to show that if you have the final value of the evolution and the initial slope, you have a fair handle on the whole story. That corresponds to knowing an ECS and a short term TCS. <br /><br />But it isn't a "model" of T versus forcing. That model is the IRF convolution. The sensitivity definitions are just extracting different numbers that may help.
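The step-versus-ramp contrast can be seen numerically. Here is a JavaScript sketch of the convolution model with an exponential IRF, in illustrative units (tau = 1, final forcing 1); none of these numbers are fitted to any GCM or observation:

```javascript
// Discrete version of the convolution dT(t) = integral of w(t-s)*F(s) ds.
function response(F, w, dt) {
  var T = [];
  for (var i = 0; i < F.length; i++) {
    var sum = 0;
    for (var j = 0; j <= i; j++) sum += w((i - j) * dt) * F[j];
    T.push(sum * dt);
  }
  return T;
}

var tau = 1;
var dt = 0.01;
var w = function(t) { return Math.exp(-t / tau); }; // exponential IRF, w(0) = 1

// Step forcing sustained to near-equilibrium (t = 10*tau):
// T tends to the integral of w, i.e. tau -- the ECS-like number.
var nStep = 1000;
var step = [];
for (var i = 0; i < nStep; i++) step.push(1);
var Tstep = response(step, w, dt);

// Ramp forcing reaching the same final F = 1 over t = 2*tau:
// the end value falls well short of the step's equilibrium -- the TCS-like number.
var nRamp = 200;
var ramp = [];
for (var i = 0; i < nRamp; i++) ramp.push(i * dt / 2);
var Tramp = response(ramp, w, dt);

console.log(Tstep[nStep - 1].toFixed(2)); // close to 1.0
console.log(Tramp[nRamp - 1].toFixed(2)); // noticeably smaller
```

With these particular (made-up) settings the ramp reading comes out a bit over half the step reading, which is the same qualitative relation as the observed "ECS is usually about twice TCS"; the same IRF yields quite different sensitivity numbers depending on the forcing history used to probe it.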
<br /><br /><h4>Burning history - fiery furphies (20 December 2017)</h4>This post is about some exaggerated accounts that circulate about past forest fires. The exaggeration has been part of a normal human tendency - no-one cares very much whether accounts of events that caused suffering in the past may be inflated, and so the more dramatic stories win out. But then maybe it does begin to matter.<br /><br />I have two topics - the history of wildfire in the US, and a particular fire in Victoria in 1851. When people start to wonder about whether fires are getting worse in the US, as they seem to be, then a graph is trotted out to show how much worse things were in the 1930's. And if there is a bad fire in Australia, the 1851 fire is said to be so much worse. This is all to head off any suggestion that AGW might be a contributor. <br /><br />I'll start with the US story. The graph that appears is one like this: <br /><br /><img src="https://fabiusmaximus.com/wp-content/uploads/2017/12/US-acres-burned-1926-2017.jpg" width="700" /><br /><br />This appeared in a <a href="https://judithcurry.com/2017/12/13/is-climate-change-the-culprit-causing-californias-wildfires">post</a> by Larry Kummer a few days ago. He gave some supporting information, which has helped me to write this post. I had seen the plot before, and found it incredible. The peak levels of annual burning, around 1931, are over 50 million acres, or 200,000 sq km. That is about the area of Nebraska, or 2.5% of the area of ConUS. Each year. It seems to be 8% of the total forest area of ConUS.
I'm referring to ConUS because <a href="https://judithcurry.com/2017/12/13/is-climate-change-the-culprit-causing-californias-wildfires/#comment-862693">it was established</a> that these very large figures for the 1930's did not include Alaska or Hawaii. It seemed to me that any kind of wildfires that burnt an area comparable to Nebraska would be a major disaster, with massive loss of life. <br /><br />So maybe the old data include something other than wildfires. Or maybe they are exaggerated. I wanted to know. Because it is pushed so much, and looks so dodgy, I think the graph should not be accepted. <br /><br />Eli has just put up a <a href="http://rabett.blogspot.com.au/2017/12/forest-fires-burning-bright.html">post</a> on a related matter, and I commented there. He helpfully linked to previous blog discussions at <a href="http://rabett.blogspot.com/2015/10/eli-buds-in.html">RR</a> and <a href="https://andthentheresphysics.wordpress.com/2015/10/30/the-mysterious-wildfire-chart/">ATTP</a>.<br /><br />Larry K, at JC's, traced the origin of the graph to a <a href="https://www.epw.senate.gov/public/_cache/files/3d0636c4-7ecd-44be-95aa-84973e50b7dd/6314witnesstestimonysouth.pdf">2014 Senate appearance</a> by a Forestry Professor (Emeritus), David South. He is not a fan of global warming notions. Rather bizarrely, he took the opportunity of appearing before the Senate committee to offer a bet on sea level: <br /><br /><i>"I am willing to bet $1,000 that the mean value (e.g. the 3.10 number for year 2012 in Figure 16) will not be greater than 7.0 mm/yr for the year 2024. I wonder, is anyone really convinced the sea will rise by four feet, and if so, will they take me up on my offer?"</i><br /><br />Prof South's graph did not come from his (or anyone's) published paper; he has not published on fire matters. But it was based on real data, of a sort.
That is found in a document on the census site: <a href="https://www2.census.gov/library/publications/1975/compendia/hist_stats_colonial-1970/hist_stats_colonial-1970p1-chL.pdf">Historical Statistics of the United States, p537</a>. That is a huge compendium of data about all sorts of matters. Here is an extract from the table: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/12/firestats.png" /><br /><br />As you can see, the big numbers from the 30's are in "unprotected areas". And of these, the source says: <br /><br /><i>"The source publication also presents information by regions and States on areas needing protection, areas protected and unprotected, and areas burned on both protected and unprotected forest land by type of ownership, and size of fires on protected areas. No field organizations are available to report fires on unprotected areas and the statistics for these areas are generally the best estimates available."</i><br /><br />IOW, an unattributed guess. And that is for the dominant component. Fires on Federal land back then were basically less than modern levels. <br /><br />The plot mostly in circulation now, and shown by Larry K above, is due to <a href="https://www.facebook.com/bjornlomborg/photos/a.221758208967.168468.146605843967/10156121084368968/?type=1&theater">a Lomborg facebook post.</a><br /><br />The organisation currently responsible for presenting US fire history statistics is the National Interagency Fire Center. And their <a href="https://www.nifc.gov/fireInfo/fireInfo_stats_totalFires.html">equivalent table</a> lists data back to 1960, which pretty much agrees where it overlaps with the old census data to 1970. But they rather pointedly go back no further. And at the bottom of the table, they say (my bold): <br /><br /><i>"Prior to 1983, sources of these figures are not known, or cannot be confirmed, and were not derived from the current situation reporting process.
As a result <b>the figures above prior to 1983 shouldn't be compared to later data</b>."</i><br /><br />So not only do they not show data before 1960; they advise against comparison with data before 1983. <br /><br />Interestingly, ATTP noted that a somewhat similar graph appeared on a USDA Forest Service page at <a href="https://www.fs.fed.us/research/sustain/criteria-indicators/indicators/indicator-316.php">this link</a>. It now seems to have disappeared. <br /><br />I think these graphs should be rejected as totally unreliable. <br /><h4>The Victorian Bushfires of 1851.</h4>First some background. Victoria was first settled in 1834, mainly around Melbourne/Geelong (Port Phillip District), with a smaller area around Portland in the West. In Feb 1851, the time of the fires, the region was still part of the colony of New South Wales. Two big events to come later in the year were acquiring separate colony status, and the discovery of gold, which created a gold rush. But in Feb, the future colony had about 77,000 people, mainly around Melbourne/Geelong. There were no railways or telegraphs, and no real roads outside that region. <br /><br />There undoubtedly was a very hot day on Thursday Feb 6, 1851, and the populated area was beset with many fires. It may well have been the first such day in the seventeen years of the region's settlement, and it made a deep impression. <a href="http://www.chig.asn.au/black_thursday_bushfires_1851.htm">This site</a> gathers some reports. But the main furphy concerns claims of statewide devastation. <a href="https://en.wikipedia.org/wiki/Black_Thursday_bushfires">Wiki</a> has swallowed it: <br /><br /><i>"They are considered the largest Australian bushfires in a populous region in recorded history, with approximately five million hectares (twelve million acres), or a quarter of Victoria, being burnt."</i><br /><br />A quarter of Victoria! But how would anyone know? Nothing like a quarter of Victoria had been settled. Again, virtually no roads. 
No communications. Certainly no satellites or aerial surveys. Where does this stuff come from? <br /><br />It is certainly widespread. The ABS (our census org) says <br /><i>" The 'Black Thursday' fires of 6 February 1851 in Victoria, burnt the largest area (approximately 5 million ha) in European-recorded history and killed more than one million sheep and thousands of cattle as well as taking the lives of 12 people (CFA 2003a; DSE 2003b)"</i><br />It is echoed on the site of the <a href="https://www.ffm.vic.gov.au/history-and-incidents/past-bushfires">Forest Fire Management (state gov)</a>. The 1 million sheep (who counted them?) and 12 people are echoed in most reports. But the ratio is odd. People had very limited means of escape from fire then, and there were no fire-fighting organisations. No cars, no roads, small wooden dwellings, often in the bush. Only 12, for a fire that burnt a quarter of Victoria? <br /><br />The ABS cites the CFA, our firefighting organisation. I don't know what they said in 2003, but what they <a href="http://www.cfa.vic.gov.au/about/major-fires/">currently list</a> is the 12 people killed, and 1,000,000 livestock, but not the area burnt. <br /><br />By way of background, here is a survey map of Victoria in 1849. The rectangles are the survey regions (counties and parishes). It is only within these areas that the Crown would issue title to land. That doesn't mean that the areas were yet extensively settled; the dark grey areas are those purchased at the time. I think even that may be areas in which there were purchases, rather than the area actually sold. <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/12/vicmap49.png" /><br /><br />There are a few small locations beyond - around Horsham in the Wimmera, Port Albert in Gippsland, Portland, Kyneton and Kilmore. This is a tiny fraction of the state. So even if a quarter of the state had burnt, who would be in a position to assess it? 
<br /><br />So what are the original sources? Almost entirely a few newspaper articles. <a href="https://trove.nla.gov.au/newspaper/article/4776100">Here</a> is one from the Melbourne Argus, on Monday 10th Feb, four days after the fire. They explain why they judiciously waited: <br /><br /><i>"In our Saturday's issue we briefly alluded to the extensive and destructive bush fires that prevailed throughout the country, more particularly on the Thursday preceding. Rumours had reached us of conflagrations on every side, but as we did not wish to appear alarmists, we refrained from noticing any but those that were well authenticated, knowing how exceedingly prone report is to magnify and distort particulars. Since then, however, we learn with regret that little only of the ill news had reached us, and that what we thought magnified, is unhappily very far from the fearful extent of the truth."</i><br /><br />And they give an exceedingly detailed listing of damages and losses, but all pretty much within that surveyed area. A large part of the report quotes from the Geelong Advertiser. There is a report from Portland, but of a fire some days earlier. Many of the fires described are attributed to some local originating incident. <br /><br />There is an interesting 1924 account <a href="https://trove.nla.gov.au/newspaper/article/4300936">here</a> by someone who was there, at the age of 12 (so now about 85). It gives an idea of how exaggerated notions of area might have arisen. He says that the conflagration extended from Gippsland northward as far as the Goulburn river. But even now, a lot of that country is not much inhabited. It's pretty clear that he means that fires were observed in Gippsland (a few settlements) and the Goulburn (probably Seymour). <br /><br />So again, where does this notion of 5 million hectares come from? I haven't been able to track down an original source for that number, but I think I know. 
Here (from <a href="https://guides.slv.vic.gov.au/bushfires/1851">here</a>) is a common summary of the regions affected: <i>"Fires covered a quarter of what is now Victoria (approximately 5 million hectares). Areas affected include Portland, Plenty Ranges, Westernport, the Wimmera and Dandenong districts. Approximately 12 lives, one million sheep and thousands of cattle were lost"</i><br /><br />Those are all localities, except for the Wimmera, which is a very large area. But the only report from the Wimmera is of a fire at Horsham, virtually the only settled area. Another formulation is from the CFA: <br /><br /><i>"Wimmera, Portland, Gippsland, Plenty Ranges, Westernport, Dandenong districts, Heidelberg."</i><br /><br />Apart from the western town Portland, and "Wimmera", the other regions are now within the Melbourne suburbs (so they left out Geelong and the Barrabool ranges, which seem to have been seriously burnt). So where could 5 million hectares come from? I think from the reference to the Wimmera. Some reports also list Gippsland as well as Dandenong, which is a large area too. I think someone put together the Wimmera, which has no clear definition, but Wiki <a href="https://en.wikipedia.org/wiki/Wimmera">puts it</a> at 4.193 million hectares. With a generous interpretation of Westernport and Portland, that might then stretch to 5 million hectares. But again, the Wimmera has only one settled location, Horsham, where there was indeed a report of a fire. 
<br /><br /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com10tag:blogger.com,1999:blog-7729093380675162051.post-43271176060633037922017-12-19T08:00:00.000+11:002018-01-04T16:49:09.372+11:00GISS November global down 0.03°C from October, now 0.87°C.GISS <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">cooled slightly</a>, going from 0.90°C in October to 0.87°C in November (GISS report <a href="https://data.giss.nasa.gov/gistemp/news/20171218/">here</a>). That is very similar to the <a href="https://moyhu.blogspot.com.au/2017/12/november-global-surface-temperature.html">decrease</a> (now 0.064°C with ERSST V5) in TempLS mesh. It was the third warmest November on record, after 2015 and 2016. It makes it almost certain that 2017 will be the second warmest in the record (after 2016): December would have to average 0.51°C for 2015 to catch up, and December has been <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#NCAR">warmer than November</a> so far. <br /><br />The overall pattern was similar to that in TempLS. Cool in Canada and far East Siberia. Warm in W Siberia, and very warm in the Arctic. A cool La Nina plume. <br /><br />As usual here, I will compare the GISS and previous TempLS plots below the jump. <br /><a name='more'></a><br />Here is GISS<br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/12/GISSnov.jpg" /><br /><br />And here is the TempLS spherical harmonics plot <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/11/map.png" /><br /><br /><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. 
Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show results with adjusted data, and also with different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. </div><br /><br /><br /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com0tag:blogger.com,1999:blog-7729093380675162051.post-52993130493518724962017-12-16T09:43:00.000+11:002017-12-16T10:01:22.901+11:00New version of monthly webGL surface temperature map page.Moyhu has a maintained page <a href="https://moyhu.blogspot.com.au/p/blog-page_24.html">here</a> showing monthly surface temperature anomalies in an interactive webGL spherical shaded plot. The anomalies are based on the <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">TempLS analysis</a>, using unadjusted GHCN V3 and ERSST V5. TempLS provides the triangular mesh and normals (base 1961-90), and the anomalies are just the station monthly averages with these subtracted. The shading is set to give correct color at the nodes (stations). 
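<br /><br />The anomaly computation just described is a per-station, per-month subtraction of the 1961-90 normal. A minimal Python sketch of the idea (my own illustration, not TempLS code; the station name and numbers are hypothetical):

```python
def station_anomalies(monthly_means, normals):
    """Anomaly for each station and month: the station's monthly average
    minus that station's 1961-90 normal for the same calendar month.
    monthly_means and normals: {station: {month: value}}."""
    return {
        stn: {m: v - normals[stn][m] for m, v in months.items()}
        for stn, months in monthly_means.items()
    }

# Hypothetical values, purely for illustration:
means = {"MELBOURNE": {1: 21.3, 2: 21.9}}
normals = {"MELBOURNE": {1: 20.1, 2: 20.4}}
print(station_anomalies(means, normals))  # anomalies of about 1.2 and 1.5
```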
It is updated daily, so also provides a detailed picture from early in the month (which means the first few days are sparse). <br /><br />The page had used its own webGL program, with a month selection mechanism (using XMLHttpRequest to download data). I have been developing a general webGL facility (described <a href="http://moyhu.blogspot.com.au/p/moyhu-webgl-earth-facility.html">here</a>), and I wanted to make a new page that conforms with that. This makes extra facilities available and is easier for me to maintain. <br /><br />To do this I had to extend the WebGL facility, to a version I am currently calling V2.2. It is a considerable restructure, which will probably become V3. It enables me to replace the constructors of various parts of the page within the user files that go with each application, rather than writing it into the core facility. There were issues with making variables (and functions) visible to functions defined in separate files. I'm also now rather extensively using the MoyJS library. <br /><br />The functionality of older MoyGL versions is preserved, and so, I hope, is the functionality of the page. One novelty of the version is a reintroduction of the flat globe pointing mechanism, which I used before WebGL. In the top right corner is a world map; you can click on any place to make the globe turn to show this point in the centre, with the longitude vertical (oriented). Otherwise, it's pretty much as in <a href="http://moyhu.blogspot.com.au/p/moyhu-webgl-earth-facility.html">V2.1</a> as used previously. <br /><br />The month selector is different. It is the box in the top right corner: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/12/selector.png" /> <img height="163px" src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/12/ghcnpage.png" /><br /><br />The three colored bars let you select decade (1900-2010), year (0-9) and month (1-12). The black squares show the current selection. 
As you click on the bars, these move, and the date in the red box below changes accordingly. When you have the date you want, click "Fetch and Show" and the data will be retrieved and a new plot will appear. <br /><br />Since I am using unadjusted anomalies (1961-90), some exceptions will stand out. I discussed <a href="https://moyhu.blogspot.com.au/2012/12/using-present-expectation-anomalies-for.html">here</a> a method for remedying this, and of course adjusted data would also help. I am not currently using that method, which makes extrapolated current expected value the anomaly base. It is good for recent years, but gets troublesome in earlier years. I may offer it as an alternative some time. <br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com0tag:blogger.com,1999:blog-7729093380675162051.post-76105238141758538272017-12-08T03:00:00.004+11:002017-12-10T08:14:37.444+11:00November global surface temperature down 0.04°C<a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">TempLS mesh</a> anomaly (1961-90 base) was down from 0.724°C in October to 0.683°C in November. This compares with the <a href="https://moyhu.blogspot.com.au/2017/12/october-ncepncar-global-anomaly-down.html">larger reduction</a> of 0.12°C in the NCEP/NCAR index, and a <a href="http://www.drroyspencer.com/2017/12/uah-global-temperature-update-for-november-20170-36-deg-c/">much greater fall</a> (0.27) in the UAH LT satellite index. The TempLS average for 2017 so far is 0.720°C, which would put it behind 2016, and just behind 2015 (0.729°C). It's unlikely that December will be warm enough to change that order, or cool enough to lose third place (0.594°C in 2014). <br /><br /><div style="color: #880000;"><b>Update 10 Dec: </b>As Olof noted in comments, although I had tested and enabled use of ERSST 5 (instead of 4) earlier this year, it wasn't actually implemented (failure to correctly set a switch). 
It doesn't make much difference on a month to month basis. The fall in November was 0.064°C instead of 0.04°C, and the spatial pattern is very similar. But as he astutely noted, it does make a difference to 2017 versus 2015. The means for the last three years are 0.729, 0.835, and 0.758. That puts 2017 0.029°C clear of 2015, which means 2017 will be in second place if December exceeds 0.41°C, which seems likely. </div><br />The anomaly pattern showed two marked warm spots in W Siberia and N of Bering Strait, and a less pronounced one in the SW US. Cool in Canada and E Siberia, fairly warm near the poles. There was a weak La Niña pattern; much less than with the NCEP/NCAR index. <br /><br />Here is the temperature map: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/11/map.png" /><br /><a name='more'></a><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show results with adjusted data, and also with different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. 
The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. </div><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com8tag:blogger.com,1999:blog-7729093380675162051.post-8970687251263595282017-12-04T06:28:00.000+11:002017-12-04T19:58:26.231+11:00November NCEP/NCAR global anomaly down 0.12°C from OctoberIn the <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#NCAR">Moyhu NCEP/NCAR index</a>, the monthly reanalysis anomaly average fell from 0.372°C in October to 0.253°C in November, 2017, making it the coolest month since June. The month started warm, with a cold period mid-month, and ended warm. <br /><br />The main cold spots were China and E Siberia, and Canada. The Arctic was warm, as were the SW USA and European Russia. The Pacific showed a rather La Niña-like pattern; cold in the SE and a band at the Equator. Warm around New Zealand/Tasmania. <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/data/freq/days.png" /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com14tag:blogger.com,1999:blog-7729093380675162051.post-32751214476182184412017-11-19T09:32:00.000+11:002017-11-19T13:14:52.098+11:00Pat Frank and Error Propagation in GCMs<meta charset="UTF-8">This post reviews recent controversy over some strange theories of Pat Frank. It reviews his WUWT posts, blog discussion, some obvious errors, and more substantively, how error propagation really works in the numerical solution of partial differential equations (pde, as GCM climate models are), why it is important, and why it is well understood. It's a bit long so I'll start with a TOC. 
<h4>Contents:</h4><ul><li><a href="#1">The story</a><li><a href="#2">Responses</a><li><a href="#3">ad Absurdum</a><li><a href="#4">How error really propagates in solving differential equations</a><li><a href="#5">Eigenvalues</a><li><a href="#6">Summary</a></ul><br><a name='more'></a><h4 id="1">The story</h4>There has been a long-running soap opera about attempts by <a href="https://wattsupwiththat.com/2015/02/24/are-climate-modelers-scientists/">Pat Frank</a> to publish a paper which claims that climate modellers are ignoring something he calls "propagation of errors", which he claims would yield extraordinarily large error bars, and invalidate GCM results. These attempts have been unsuccessful, about which he has been becoming <a href="https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/">increasingly strident</a>. He says the six journals and many referees that have rejected his papers, often with scathing reviews, are motivated only by conflict of interest, desire to preserve their funding etc. Publication of Pat Frank's papers would bring that all down. The most recent outburst at WUWT was <a href="https://wattsupwiththat.com/2017/11/12/consensus-climatology-in-a-nutshell-betrayal-of-integrity/">here</a>, where a seventh journal promptly rejected it. James Annan and Julie H were involved. <br><br>What he means by "propagation of errors" is just the process of forming an expression combining variables with associated errors, and expressing this in terms of a total derivative, with the error magnitudes then combining in quadrature (they are assumed independent). That is certainly not something scientists are ignorant of, but it doesn't express what is happening here. There is a lot more involved in differential equation solution. <h4 id="2">Responses</h4> Far too much time has been taken up with this nuttiness. 
First of course by the journals, who have to deal with not only normal reviewing, but a stream of corrosive correspondence. ATTP has had, I think, several threads, the latest <a href="https://andthentheresphysics.wordpress.com/2017/10/23/watt-about-breaking-the-pal-review-glass-ceiling/">here</a>. James Annan expands on his cameo <a href="http://julesandjames.blogspot.com/2017/11/watts-up-with-pat-frank.html">here</a>. And Patrick Brown even <a href="https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/">put together a video</a>. Further back, similar stuff made the rounds of skeptic sites, eg <a href="https://noconsensus.wordpress.com/2011/08/08/a-more-detailed-reply-to-pat-frank-part-1/">here</a>. The more mathematical folks there thought it was pretty nutty too. <br><br>As reference materials, Pat Frank posted his paper and also a listing of the journal reviews and correspondence <a href="https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/">here at WUWT</a>. His link to the paper is <a href="https://uploadfiles.io/zlrt6">here</a>. It's not a short paper - 13.4 MB. His link to the zipfile of reviews (44 MB) is <a href="https://uploadfiles.io/f5luc">here</a>. <h4 id="3">ad Absurdum</h4>As you might expect, I have been joining in at WUWT too. My view, which I'll expand on below, is that propagation of error in numerical partial differential equations is very important, necessarily gets much attention, and is nothing like what he describes. But when folks like Eric Worrall at WUWT are solemnly chiding me for not understanding that propagation is a random walk, I can see that expounding the correct theory there would be a waste of time. So I instead try <i>reductio ad absurdum</i> on a few rather obvious points. But the audience at WUWT is not sensitive to <i>absurdum</i>. 
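<br><br>For concreteness, here is what the quadrature rule he invokes actually says, and what happens if, as in the random-walk picture, it is applied afresh at every time step. This is a minimal Python sketch of my own, purely for illustration; the function name is mine, and the 4-unit error just echoes the 4 W/m^2 cloud-forcing figure discussed below.

```python
import math

def quadrature_error(terms):
    """Standard propagation of error for f(x1,...,xn) with independent
    errors: sigma_f = sqrt( sum_i (df/dxi * sigma_i)^2 ).
    `terms` is a list of (derivative, sigma) pairs."""
    return math.sqrt(sum((d * s) ** 2 for d, s in terms))

# One combination of independent +/-3 and +/-4 errors (unit derivatives):
print(quadrature_error([(1.0, 3.0), (1.0, 4.0)]))  # 5.0

# Applied afresh at each of N steps with the same +/-4 per-step error,
# the accumulated uncertainty grows like sqrt(N) -- the random-walk picture:
for n in (1, 25, 100):
    print(n, quadrature_error([(1.0, 4.0)] * n))  # 4.0, then 20.0, then 40.0
```

The point at issue is not this formula, which is standard, but whether stepping a differential equation forward really accumulates error this way.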
<br><br>In the first recent post I <a href="https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/#comment-2643840">picked up on</a> a criticism by a reviewer that Pat Frank had insisted on a change of units when he averaged quantities over 20 years. He said that the units became quantity/year. Others had raised this, so in the text he had doubled down. Surely this qualifies as absurd? <br><br>I again tried a somewhat peripheral approach, saying, why years? Why not regard it as a period of 240 months, and say quantity/month? He said, well, the quantity would change: 4 W/m2/year would become 1/3 W/m2/month. But the problem there is that he was using data from someone else's paper (Lauer and Hamilton, 2013), who simply said the average was 4 W/m2. No time interval specified. <br><br>It actually relates to the objection many referees raised, including James Annan. If you're going to say the error adds like a random walk, what is the time interval? Why per year? It obviously affects the result how long between steps. He gave some strange replies. <a href="https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/#comment-2647233">Here</a> he expounds the difference between a "measurement average" and a "statistical average": <br><br><div style="color:#aa0000">Several measurements of the height of one person: meters. A measurement average. <br>Average height of people in a room: meters per person. A statistical average. <br>Middle school math, and a numerical methods PhD <span style="color:#888888">[moi]</span> stumbles over it.</div><br><br>So, what if you sample in Europe to see if Dutchmen are taller than Greeks? You get 1.8 m/Dutch and 1.7 m/Greek. Can you compare if they are in different units? 
(h/t JA) <br /><br />On the second thread, I <a href="https://wattsupwiththat.com/2017/11/12/consensus-climatology-in-a-nutshell-betrayal-of-integrity/#comment-2663573">picked up on</a> a claim from PF's review of JA's review: <br><br><div style="color:#aa0000">He wrote, “… ~4W/m^2 error in cloud forcing…” except it is ±4 W/m^2 not Dr. Annan’s positive sign +4 W/m^2. Apparently for Dr. Annan, ± = +. <br>...<br>How does it happen that a PhD in mathematics does not understand rms (root-mean-square) and cannot distinguish a “±” from a “+”?</div><br><br>This was a less central point, but it throws down the gauntlet. Either JA or PF doesn't have a clue. So I thought, surely there are some there who are familiar with RMS, or the particular case of standard deviation? Surely they know they are always written as positive numbers, with good reason. But to reinforce it, I challenged them to find somewhere, anywhere, where an RMS quantity was written with a ±. But no-one could. PF did come up with a number of locations, which wrote the number as positive, but used it in an expression like a±σ. I couldn't get across the distinction. Yes, 3±2 is an interval, but 2 is a positive number. It isn't JA who is clueless. <br><br>The depressing thing to me was that there were people there who seemed to have some acquaintance with statistics or math, and would never before have thought of adding a ± to express an RMS. But now it was all so obvious to them that you should. The same conversion happened with the average units. <h4 id="4">How error really propagates in solving differential equations</h4>It's actually a very important practical issue for anyone who solves large systems of differential equations (de's), as I do. Because you always have error, if only from rounding. And if it can grow without limit, the solution is unstable. 
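<br><br>The distinction can be made concrete with a toy scalar version of the setup that follows: a forward-Euler recurrence for da/dt = -s*a + f, with f a constant per-step forcing error. This is a sketch of mine, not anything from a GCM, and the 4-unit error is purely illustrative. With no damping (s=0) the error accumulates without limit; with damping (s=1 here) past errors decay away and the solution saturates at f/s.

```python
def euler_error(s, dt, nsteps, f):
    """Forward Euler on the scalar model da/dt = -s*a + f, starting from
    a = 0, where f is a constant per-step forcing error.
    Returns the final value of a."""
    a = 0.0
    for _ in range(nsteps):
        a += dt * (-s * a + f)
    return a

# Undamped (s=0): the error grows linearly, without limit.
print(euler_error(0.0, 0.1, 1000, 4.0))  # ~400
# Damped (s=1): past errors decay exponentially; a saturates at f/s = 4.
print(euler_error(1.0, 0.1, 1000, 4.0))  # ~4
```

A random per-step error behaves the same way qualitatively: undamped, it random-walks, growing like the square root of the step count; damped, it is an exponentially smoothed, bounded version of the noise.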
<br><br>A system of partial de's, as with GCMs, can be first discretised in space, to give a large but finite set of nodal values (and perhaps derivatives, or other quantities) with relations between them. There is a simplification that usually only nearby locations are related, which means the resulting set of ordinary de's (in time) are sparse. If the quantities are A (big vector) and the drivers F, the de could be written linearised as <br>dA/dt = -S*A + F<br>where S is the big sparse matrix. The time step will need to be taken short enough so that S varies slowly between steps, and it is enough to take it as constant. It can be expressed in terms of eigenvectors. So the driver, which could include an error term, is then also expressed in terms of the eigenvectors, and you can think of the equation in each direction as just being a scalar equation <br>da/dt = -s*a + f<br>The solution of this is a = exp(-s*t) ∫ exp(s*τ)*f dτ <br>where the range of integration is from arbitrary t0 to t.<br>As you go backward in time from t, the multiplier of f dies away exponentially, if s>0. So a "propagates" f to only a limited extent. It's actually an exponentially smoothed version of f. S may not be symmetric, so the eigenvalues could be complex, and the criterion then is real part Re(s)>0. <br><br>The same thing persists after you discretise in time, so integration is replaced by summation in discrete steps. This is actually where care is needed to ensure that a stable de converts to a stable recurrence. But that can be done. <br><br>So to prevent instability, it must certainly be true that the eigenvalues have Re(s)≥0. The interesting cases are when Re(s) is small. I'll look at the possible spectrum. <h4 id="5">Eigenvalues</h4>In the recurrences that result from a GCM, there are millions of eigenvalues. The salvation is that most of them have relatively large real part, and correspond to rapidly decaying processes (typically, hours for GCM). 
They are all the local flows that could be generated on the element scale, which lose kinetic energy rapidly through shearing. That is why the solution process can make some sense, because one can focus on a much smaller eigenspace where decay is either absent or slow on the scale, perhaps decades, of the solution being sought. I'll look at three classes: <ul><li>Waves. Here s has zero (or near zero) real part, but large imaginary part, so decay is slow and the period is comparable to the time step. In classical CFD, these would be sound waves. They are important, because they create action at a distance, propagating the pressure field so the medium behaves like a not very compressible fluid (elasticity). In GCM's, they merge into gravity waves, because a pressure peak can move the air vertically. That actually reduces the propagation velocity. The waves do have some dissipation (eg surface shear) so the ideal s has small positive real part.<br><br>These waves are important, because to the extent they aren't well resolved in time after time discretisation, s can have negative real part, and if it outweighs dissipation, the wave will grow and the solution fail. That is why GCMs have a maximum time step of about half an hour, which demands powerful computers. <li>Secular motion and slow decay. The secular part is the desired solution responding to the driver F; I've bundled in the slow decay processes because they are hard to separate. The usual way is to wind back, so that only the slowest remain from the initial perturbation. But some does remain, and that is why GCMs tend to have apparently similar solutions which may differ by a degree or two in temperature, say. There are two remedies for this: <ul><li>use of anomalies, which subtracts out the very slow decay, leaving the secular variation<li>Increased care with initial conditions. 
The smaller the component of these modes that is present initially, the smaller the later problem.</ul><li>Conserved quantities - energy and mass (of each component). These totalled correspond to zero eigenvalues; they shouldn't change. But in this case small changes do indeed accumulate, and that is a problem. GCMs include energy and mass "fixers" (see <a href="http://www.cesm.ucar.edu/models/atm-cam/docs/description/node2.html">CAM 3</a> for examples). The totals are monitored, discrepancies noted, and then corrected by adding in distributed amounts as required. <br><br>I remember a post at WUWT where Willis Eschenbach discovered this and cried malpractice. But it is perfectly legitimate. GCMs are models which incorporate all the math that the solution should conform to, and conservation is one such. It is true that as an extra constraint it appears to make more equations than variables. But large systems of equations are always ill-conditioned in some way; there aren't as many effective equations as it seems. That is why the conservations were able to erode, and the fixers just restore that. "Fixing" creates errors in the first class of eigenvalues, because while care should be taken to do it without introducing perturbations in their space, this can't be done perfectly. But again the salvation is that such errors rapidly decay. </ul><h4 id="6">Summary</h4>So what do we have with error propagation, and why is Pat Frank so wrong? <ul><li>Inhomogeneous equation systems are driven by a mix of intended drivers and error. <li>The effect of each is generally subject to exponential decay. The resulting solution is not a growing function of past errors, but in effect, like an exponential smooth of them. This may increase by a steady factor, but will also attenuate noisy error. <li>Propagation is not something that is ignored, as Pat Frank claims, but is a central and much studied part of numerical pde. 
</ul><br><br><br><br> Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com42tag:blogger.com,1999:blog-7729093380675162051.post-31505131889412173062017-11-17T04:44:00.000+11:002017-11-17T04:44:33.738+11:00GISS October global up 0.1°C from September, now 0.9°C.GISS <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">warmed</a>, going from 0.80°C in September to 0.90°C in October (GISS report <a href="https://data.giss.nasa.gov/gistemp/news/20171116/">here</a>). That is very similar to the <a href="https://moyhu.blogspot.com.au/2017/10/giss-september-down-004-from-august.html">increase</a> (now 0.105°C) in TempLS mesh. It was the second warmest October on record, after 2015. It is interesting to reflect on that month two years ago, which at 1.04°C was at the time by some margin the warmest month ever. <br /><br />The overall pattern was similar to that in TempLS. Even warmer in Antarctica, cool in W US, and E Mediterranean, but the band of cool through to China is less continuous. <br /><br />As usual here, I will compare the GISS and previous TempLS plots below the jump. <br /><a name='more'></a><br />Here is GISS<br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/11/GISSoct.png" /><br /><br />And here is the TempLS spherical harmonics plot <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/10/map.png" /><br /><br /><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show data with adjusted, and also with different integration methods. 
There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. </div><br /><br /><br /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com2tag:blogger.com,1999:blog-7729093380675162051.post-58298633115838515852017-11-08T03:58:00.000+11:002017-11-08T04:00:33.435+11:00October TempLS global surface temperature up 0.11°C<a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">TempLS mesh</a> anomaly (1961-90 base) was up from 0.618°C in September to 0.73°C in October. This compares with the <a href="https://moyhu.blogspot.com.au/2017/11/october-ncepncar-global-anomaly-up-0055.html">smaller rise</a> of 0.055°C in the NCEP/NCAR index, and a <a href="http://www.drroyspencer.com/2017/11/uah-global-temperature-update-for-october-2017-0-63-deg-c/">similar rise</a> (0.09) in the UAH LT satellite index. <br /><br />The anomaly pattern was similar to the NCEP/NCAR picture. No great heat, except in Antarctica, but warm almost everywhere. A band of cool between the Sahara and Far East Russia, cool in the SE Pacific, and some cool in the western US. 
The change toward warmth at the poles will again lead to varying results in the major indices, with GISS likely to rise strongly compared with NOAA and HADCRUT. In fact, TempLS grid, which also has less coverage at the poles, fell slightly in October. Overall, sea temperatures rose a little, after dropping last month. <br /><br /><a href="http://www.drroyspencer.com/2017/11/uah-global-temperature-update-for-october-2017-0-63-deg-c/">Roy Spencer</a> noted the recent marked rise in the satellite records, which are now a lot higher relative to the surface measures. I'll show (from <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#fig4">here</a>) the graph of the last four years on a common 1981-2010 base. Even UAH, which had been a low outlier for most of the time, is now well above the surface measures, and is not far below the 1998 peak. <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/pages/latest/T/mth2.png" /><br /><br />Here is the temperature map: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/10/map.png" /><br /><a name='more'></a><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show data with adjusted, and also with different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. 
<br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. </div><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com11tag:blogger.com,1999:blog-7729093380675162051.post-21236367758857701102017-11-03T05:18:00.000+11:002017-11-03T05:18:09.217+11:00October NCEP/NCAR global anomaly up 0.055°C from SeptemberIn the <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#NCAR">Moyhu NCEP/NCAR index</a>, the monthly reanalysis average rose from 0.317°C in September to 0.372°C in October, 2017, making it the warmest month since May. It was again a very varied month; starting cool, then a warm spike, and cool again at the end. <br><br>The main feature was a big cool band from the Sahara through Siberia to China. Also cold in the SE Pacific, with some La Nina-like pattern. Mixed at the poles, but more warm than cold. Since Antarctica was cold last month, this suggests the warming will be reflected more strongly in GISS/TempLS than in NOAA/HADCRUT. 
<br><br><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/data/freq/days.png"></img><br><br><br><br><br><br> Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com12tag:blogger.com,1999:blog-7729093380675162051.post-67471062847389542612017-11-01T22:21:00.000+11:002017-11-02T16:32:53.567+11:00Penguins - more fantasy about a Time coverThere was a <a href="https://wattsupwiththat.com/2017/10/30/how-google-and-msm-use-fact-checkers-to-flood-us-with-fake-claims">really crazy article</a> at WUWT by Leo Goldstein on fact checkers and allegations of fakery. Leo G regularly publishes really paranoid stuff on Google conspiracies etc. But this one was a doozy, titled "How Google and MSM Use “Fact Checkers” to Flood Us with Fake Claims". Here is an extract: <br /><blockquote style="background-color: #eef2ff;">An example is a global cooling scare of the 70s. In 1977, Time magazine published an issue under the following cover: <br /><br /><img src="https://wattsupwiththat.files.wordpress.com/2017/10/time-1977-bigfreeze.jpg" width="200" /><br /><br />That cover is a seriously inconvenient truth for climate alarmists and their media accessories. So, Time attempted to re-write a history. It published a forged version of its own cover, the left one on the following picture (the “Time-2013-version-of-1977”): <br /><br /><img src="https://wattsupwiththat.files.wordpress.com/2017/10/facebook_meme_global_cooling_11-fake.gif" width="400" /><br /><br />…and then easily debunked it as a photoshopped version of its April 2007 cover (3). As I will explain below, Time magazine knew it was launching a hoax. The rest of the liberal media popularized it, although it could have easily recognized it. Snopes adopted it (4), invented additional details that were not present in the Time article, and angrily condemned “climate deniers.” <br />... 
</blockquote>And there is lots more about how "notorious Greg Laden" exposed the hoax etc. <br /><blockquote style="background-color: #eef2ff;">The “original source” of the fake cover is hard to trace. It is almost certainly somebody in the climate alarmism camp: the real cover from 1977 was very clearly making a point against climate alarmism. But the point of entry of the forgery into mass circulation was Time magazine, June 6 of 2013. Good job, motherf*ckers.</blockquote>My initial commentary was a bit confused, mainly because we have recently had NBN (a new optic fibre system, much complained about) installed, and it kept disconnecting from the internet. However, I looked into it a bit more and found quite a lot of history. <br /><br />Firstly, some links.<br /><ul><li> <a href="http://time.com/4778937/fake-time-cover-ice-age/">2017 Time report</a> on how it even got to Trump's desk, supplied by senior aide <a href="https://en.wikipedia.org/wiki/K._T._McFarland">K.T.MacFarland</a>, now US ambassador to Singapore. </li><li><a href="https://www.politico.com/story/2017/05/15/donald-trump-fake-news-238379">The Politico story</a> about the Trump occurrence. </li><li><a href="http://science.time.com/2013/06/06/sorry-a-time-magazine-cover-did-not-predict-a-coming-ice-age/">Time's 2013 article</a> on the forgery </li><li><a href="http://gregladen.com/blog/2013/06/04/the-1970s-ice-age-myth-and-time-magazine-covers-by-david-kirtley/">David Kirtley at Greg Laden</a>, a few days earlier </li><li><a href="https://www.snopes.com/the-coming-ice-age/">Snopes 2017</a></li><li>The actual Time 1977 cover is <a href="https://archive.is/gWpXa">here</a>. </li></ul><br />Goldstein is of course talking nonsense about the forgery being a plant designed to pick up on the 1977 Big Freeze cover. That cover wasn't about global cooling at all. It was a straightforward factual article about a very snowy winter in 1976/7 in North America. 
There was in fact a 1974 Time article on global cooling that people might have wanted to look up, as the Time link describes. But there was no cover associated with that, although the contrary is widely believed. <br /><br />But I did some more searching. First some notable occurrences of the hoax, which is actually ten years old: <br /><ul><li> As mentioned, it apparently got to President Trump via K.T.MacFarland </li><li> <a href="http://www.dailymail.co.uk/news/article-2294560/The-great-green-1-The-hard-proof-finally-shows-global-warming-forecasts-costing-billions-WRONG-along.html">David Rose</a> at Daily Mail, "Great Green Con". See the blue box which says <i>"In the Seventies, scientists and policymakers were just as concerned about a looming ‘ice age’ as they have been lately about global warming – as the Time magazine cover pictured here illustrates</i>. The picture has now been removed without explanation, which doesn't help the clarity of the text. </li><li> <a href="https://wattsupwiththat.com/2017/01/28/homogenization-of-temperature-data-makes-capetown-south-africa-have-a-warmer-climate-record/">WUWT, 2017</a></li><li><a href="http://www.drroyspencer.com/2013/10/the-danger-of-hanging-your-hat-on-no-future-warming/">Roy Spencer, 2013</a></li></ul><br />There clearly was more history, since most of these don't have the 1977 pasted over as in the Time/Kirtley versions. So I did a bit more searching. <br /><br /><a href="https://stevengoddard.wordpress.com/2011/06/18/tima-magazine-warned-us-about-climate-change/">Steven Goddard</a>, 2011, is an old reliable. This predates 2013, so clearly debunks the Goldstein fantasy about Time forging its own cover in 2013. <br /><br /><a href="http://neoconexpress.blogspot.com.au/2007/02/time-like-newsweek-predicted-iceage-in.html">Neocon Express</a> is the earliest nominal date, at Feb 12, 2007. But that predates the real cover, so I presume the image was added later. 
Of interest is a Jan 2011 comment, drawing attention to the fakery. <br /><br />But the most interesting early occurrence was in August 2007, in <a href="http://www.freerepublic.com/focus/f-chat/1887747/posts#comment">Free Republic</a>. That was soon after the genuine cover in April 2007. But this one is an animated GIF, and shows alternately the fake and the real. I'm not sure what the point is, but it must be getting close to the source, where it was somehow, I suppose, seen as parody. The URL links to <a href="https://www.strangepolitics.com/archive/archive_40.html">this site</a>, which seems to be for prank pictures, but I couldn't find an original there. <span style="color:#9955ff">Update: The picture numbering on the StrangePolitics site isn't entirely consistent, but seems to place the original in April 2007, the month of the genuine cover.</span><br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/11/penguins.gif" /><br /><br />Here is Goldstein's summary:<br /><blockquote>In this example, multiple entities are involved: Google, Snopes, Time magazine, and ScienceBlogs. They are independent entities, but each of them knowingly plays its own well-defined role in the chain of injection, amplification, propagation, and utilization of a lie. Thus, they might be referred to as a single body. </blockquote><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com14tag:blogger.com,1999:blog-7729093380675162051.post-76062797011442652512017-10-22T05:46:00.001+11:002017-10-22T05:53:26.739+11:00Averaging and error propagation - random walk - math<meta charset="UTF-8" />I have been <a href="https://wattsupwiththat.com/2017/10/14/durable-original-measurement-uncertainty">arguing again</a> at WUWT. There is a persistent belief there which crops up over and over, that averages of large numbers of temperatures must have error estimates comparable to those for their individual components. 
The usual statement is that you can't make a bad measure good just by repeating over and over. I try to point out that the usually criticised climate averages are not reached by repeating one measure many times, and have invited people to identify the occurrence of such a problem that concerns them, without success. <br><br>I dealt with a rather similar occurrence of this issue <a href="https://moyhu.blogspot.com.au/2016/04/averaging-temperature-data-improves.html">last year</a>. There I showed an example where Melbourne daily maxima given to 1 decimal (dp) were averaged over a month, for several months, and then averaged again after rounding to nearest integer. As expected, the errors in averaging were much less than 1°C. The theoretical value is the standard deviation of the unit uniform distribution (sqrt(1/12) ≈ 0.29) divided by the square root of the number in the average, and the results were close. This time I did a <a href="https://wattsupwiththat.com/2017/10/14/durable-original-measurement-uncertainty/comment-page-1/#comment-2636539">more elaborate averaging</a> with a century of data for each month. As expected this reduced the error (discrepancy between the 1dp mean and the 0dp mean) by a factor of 10. <br><br>I also showed <a href="https://moyhu.blogspot.com.au/2015/02/surface-temperature-global-average-is.html">here</a> that for the whole process of averaging over time and globally over space, adding white noise to all monthly averages of amplitude 1°C made almost no difference to the global anomaly time series. <br><br><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/GHCN/AddRandom/ts.png" width=500></img><br><br>The general response is that there is something special about the measurement errors which would make them behave differently from the rounding change. And there are usually arguments about whether the original data was really as accurate as claimed. 
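A synthetic version of that rounding experiment is easy to run (a sketch only: the values here are invented stand-ins, not the actual Melbourne data):

```python
import math
import random

random.seed(1)

# Synthetic "daily maxima" to 1 decimal place, then rounded to the
# nearest whole degree; the mean and spread are invented for illustration.
N = 3000
maxima_1dp = [round(random.gauss(20.0, 5.0), 1) for _ in range(N)]
maxima_0dp = [round(t) for t in maxima_1dp]

mean_1dp = sum(maxima_1dp) / N
mean_0dp = sum(maxima_0dp) / N

# Rounding adds an error roughly uniform on (-0.5, 0.5), sd sqrt(1/12),
# so the discrepancy between the means should be of order sqrt(1/12)/sqrt(N),
# far below the 0.5 rounding step.
theory = math.sqrt(1 / 12) / math.sqrt(N)
print(abs(mean_1dp - mean_0dp), theory)
```

With N = 3000 the theoretical scale is about 0.005°C, and the observed discrepancy is of that order, not of order 0.5.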
But if one could somehow have perfect data, it would just be a set of rather similar numbers distributed over a similar range, and there is no reason to expect rounding to have a different effect. Nor is there any kind of variation that could be expected to have a different effect from rounding, as long as there is no bias; that is, as long as the errors are equally likely to be up or down. If there is bias, then it will be propagated. That is why bias should be the focus. <br><br>Here is a table of contents for what is below the fold: <ul><li><a href="https://moyhu.blogspot.com.au/2017/10/averaging-and-error-propagation-random.html#H1">Random Walk</a><li><a href="https://moyhu.blogspot.com.au/2017/10/averaging-and-error-propagation-random.html#H2">Extreme rounding</a><li><a href="https://moyhu.blogspot.com.au/2017/10/averaging-and-error-propagation-random.html#H3">Binning and Poisson Summation.</a></ul><a name='more'></a><h4 id="H1">Random walk</h4>I think a useful way of viewing this is to think of the process of building the average. I'll stick to simple averages here, formed by just summing N numbers and dividing by N. The last step is just scaling, so we can focus on the addition. If the elements just have a mean value and a random error, then the cumulative sums are a classical one-dimensional random walk, with drift. If the mean can be subtracted out, the drift is zero, and the walk is just the accumulation of error over N steps. Now if the steps are just a unit step either up or down, equally likely and independent from step to step, the expected distance from the origin after N steps is sqrt(N). It could be up or down, or of course it could be zero - back at the start. The reduced result from N steps reflects the inefficiency of random walk as a way of getting anywhere. With averaging, the step is of varying length, from a distribution, but again if independent and with sd σ, the expected distance is σ*sqrt(N). If not independent, it may be larger. 
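The σ*sqrt(N) growth can be checked with a small simulation (a sketch; the parameters are mine):

```python
import math
import random

random.seed(0)

# Accumulate N independent errors of sd sigma, many times over, and
# compare the typical end distance with the random-walk prediction
# sigma*sqrt(N), rather than the worst-case linear growth sigma*N.
sigma, N, trials = 1.0, 10000, 300
sq_dists = []
for _ in range(trials):
    walk_end = sum(random.gauss(0.0, sigma) for _ in range(N))
    sq_dists.append(walk_end ** 2)
rms = math.sqrt(sum(sq_dists) / trials)  # root-mean-square end distance

print(rms)   # close to sigma*sqrt(N) = 100, nowhere near sigma*N = 10000
```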
For the simplest autocorrelation, AR(1), with correlation factor r between 0 and 1, the distance is amplified by the Quenouille correction sqrt((1+r)/(1-r)). But it would take a very large correlation to bring the distance up close to σN. <h4 id="H2">Extreme rounding</h4>At WUWT, a commenter Mark S Johnson was vigorously arguing, and I agree, that it was all about sampling error, and he said that you could actually round the heights of US adult males to the nearest foot, and still get an average to the nearest inch. I blanched at that, but he insisted, and he was right. This is <a href="https://wattsupwiththat.com/2017/10/14/durable-original-measurement-uncertainty/#comment-2640646">the test I described</a>:<br><i>I tried it, and Mark's method did still do well. I assumed heights normally distributed, mean 5.83, sd 0.4. Centered, the expected numbers were <pre><br />4.5 5.5 6.5 7.5<br />190 6456 3337 17<br /></pre>Weighted average is 5.818, so it is within nearest inch.</i><br><br>Mark suggested 5.83 was the correct mean, and I chose a sd of 0.4 to ensure that 7 ft was infrequent but not unknown. I was surprised that it did so well, but I had an idea why, and I was interested. Here's why it works, and what the limits are: <h4 id="H3">Binning and Poisson Summation.</h4>Numerically, the discrete separation of data into "bins" through rounding has a lot in common with integration formulae. The mean is ∫p dC, where C is the cumulative distribution function (cdf); the rounded version makes a sum Σ pΔC. There are a great variety of finite interval integration formulae, going back to Newton-Cotes, and the almost equally ancient <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula">Euler-MacLaurin</a> formula, which relates the sum of regularly spaced samples to the integral with error terms involving the powers of the spacing h and the end-point derivatives. 
The error here is polynomial in h, depending on the first derivative that is discontinuous at the end-point. But that raises an interesting question - what if there isn't an end-point, or if there is and all the derivatives are zero? It turns out that the approach to the integral, with diminishing h, can be faster than any power of h. The key is a very elegant and powerful formula from nearly 200 years ago - the <a href="https://en.wikipedia.org/wiki/Poisson_summation_formula">Poisson summation formula</a>. <br><br>At its simplest, this formula equates the sum of equally spaced samples of a function with a similar sum of samples of the Fourier transform: <br>h Σ f(ih) = Σ F(k/h)<br>where F is the Fourier transform (FT) in the convention F(ω)=∫f(t) exp(-2πiωt) dt and h is the spacing. <br>Sums and integrals here are from -∞ to ∞, and summed over whatever looks like an index (i,k etc).<br>This is an integral approximation because F(0) is the integral, and F(1/h) will be the first error term. If F tapers rapidly, as it will when f is smooth, and if h is small, the error will also be small. <br><br>From above, mean M = Σ kh ΔC_k. <br>The cdf C does not work so well with the Fourier transform, because it doesn't go to zero. But if we let the mean μ of the probability density function (pdf) P vary, then the derivative<br>dM/dμ = Σ kh ΔP(kh+μ), since P is the derivative of C <br>and with this shift, summing by parts<br>Σ kh ΔP(μ)_k = Σ hP(kh+μ) = Σ Þ(k/h)exp(2πikμ/h) <br>using now Þ for the FT of P<br><br>The practical use of these series is that the k=±1 terms are enough to describe the error, unless h is very large indeed. So in real terms, since for a pdf the integral is 1, <br><br>dM/dμ ≅ 1 + 2 Þ(1/h) cos(2πμ/h) + ... <br><br>or, integrating, M ≅ μ + Þ(1/h)*(h/π)*sin(2πμ/h) + ... <br><br>This tells us that the binned mean M is exact if the mean μ of the pdf P lies on an integer point of the binning (a ruler mark) and oscillates in between, with another zero half-way. 
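Both the foot-rounding result and this oscillation can be checked numerically (using math.erf for the normal cdf; the scan grid and tolerances here are my choices):

```python
import math

def ncdf(x, mu, sd):
    """Normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

sd, h = 0.4, 1.0   # sd of the heights (ft) and bin spacing, from the example

def binned_mean(mu):
    # Exact mean after binning into whole feet [k, k+1), each bin
    # represented by its centre k + 0.5
    return sum((k + 0.5) * (ncdf(k + 1, mu, sd) - ncdf(k, mu, sd))
               for k in range(0, 12))

print(round(binned_mean(5.83), 3))   # 5.818, matching the test above

# Error of the binned mean as the true mean scans across one bin width,
# against the k=1 amplitude (h/pi) * FT_of_pdf(1/h) for a Gaussian pdf
errs = [abs(binned_mean(5 + i / 200) - (5 + i / 200)) for i in range(201)]
predicted = (h / math.pi) * math.exp(-2 * (math.pi * sd / h) ** 2)
print(max(errs), predicted)          # both about 0.0135
```

The scan confirms the zeros at ruler marks and half-marks, with the largest excursions in between.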
The maximum error is Þ(1/h)*h/π. <br><br>In the case of the normal distribution assumed here, Þ(1/h)*h/π = (h/π) exp(-2(πσ/h)²). And in our particular case, h=1, σ=0.4, so the maximum excursion (if the mean were 5.75ft) is 0.0135, or about 1/6" (inch). That's still pretty accurate, for a ruler with 1 foot spacings. How much further could we go? With 2ft spacing, it gets dramatically worse, about 1.75" error. Put more generally, binning with 2.5σ spacing is good, but 5σ starts losing accuracy. <br><br>I think that robustness is remarkable, but how much does it depend on the normal distribution? The Poisson formula gives a way of thinking about that. The FT of a Gaussian is another Gaussian, and tapers very fast. That is basically because the pdf is maximally smooth. If higher frequencies enter, then the tapering that governs convergence is not so fast, and if there is actually a discontinuous derivative, it reduces to polynomial order. And of course we never really know what the distribution is; it is generally inferred from the look of the histogram. However there are often physical reasons for it to be smooth. Or perhaps that overstates it - the most powerful argument is, why not? What physics would cause the distribution to have kinks? <br><br><br><br> Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com26tag:blogger.com,1999:blog-7729093380675162051.post-9691395136639830322017-10-18T08:38:00.002+11:002017-10-18T08:38:28.643+11:00GISS September down 0.04°C from August.GISS showed a <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">small decrease</a>, going from 0.84°C in August to 0.80°C in September (GISS report <a href="https://data.giss.nasa.gov/gistemp/news/20171017/">here</a>). It was the fourth warmest September in the record. That decrease is very similar to the <a href="https://moyhu.blogspot.com.au/2017/09/august-global-surface-temperature-down.html">0.06°C fall</a> in TempLS mesh. 
<br /><br />The overall pattern was similar to that in TempLS. Warm almost everywhere, especially across N America, S America and the Middle/Near East. Cool spots in W Europe and N central Siberia. <br /><br />As usual here, I will compare the GISS and previous TempLS plots below the jump. <br /><a name='more'></a><br />Here is GISS<br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/10/GISSsep.jpg" /><br /><br />And here is the TempLS spherical harmonics plot <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/10/map.png" /><br /><br /><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show data with adjusted, and also with different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. 
</div><br /><br /><br /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com1tag:blogger.com,1999:blog-7729093380675162051.post-37247727566900202442017-10-08T10:21:00.003+11:002017-10-08T10:21:50.713+11:00September global surface temperature down 0.06°C<a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">TempLS mesh</a> anomaly (1961-90 base) was down from 0.673°C in August to 0.613°C in September. This compares with the <a href="https://moyhu.blogspot.com.au/2017/10/september-ncepncar-global-anomaly-down.html">smaller drop</a> of 0.02°C in the NCEP/NCAR index, and a <a href="http://www.drroyspencer.com/2017/10/uah-global-temperature-update-for-september-2017-0-54-deg-c/">substantial rise</a> (0.12) in the UAH LT satellite index. UAH is up a lot in the last two months. <br /><br />Despite the fall, most of the world looked pretty warm. Warmth in Canada, S America, the Near East and China. A cool spot in Siberia, but warmth in Antarctica, which contrasts with the earlier NCEP/NCAR report, which showed predominant cold. Here is the temperature map: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/10/map.png" /><br /><a name='more'></a><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show data with adjusted, and also with different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. 
There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. </div><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com5tag:blogger.com,1999:blog-7729093380675162051.post-10618698876572075612017-10-03T06:21:00.001+11:002017-10-05T14:08:18.855+11:00September NCEP/NCAR global anomaly down 0.02°C from AugustIn the <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#NCAR">Moyhu NCEP/NCAR index</a>, the monthly reanalysis average declined from 0.337°C in August to 0.317°C in September, 2017. It was again a very varied month; it looked as if it would come out quite warm until a steep dip about the 23rd; that cool spell then lasted until the end of the month. <br><br>The main feature was cold in Antarctica, so again we can expect this to be strongly reflected in GISS and TempLS, and less in NOAA and HADCRUT. Elsewhere, cold in Central Russia, but warm in the west; fairly warm around the Arctic. 
<br><br><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/data/freq/days.png"></img><br><br><br><br><br><br> Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com6tag:blogger.com,1999:blog-7729093380675162051.post-26114893680065873662017-09-29T15:55:00.000+10:002017-09-30T14:59:56.108+10:00Nested gridding, Hadcrut, and Cowtan/Way.<div style="color: red;"><b>Update</b> I had made an error in coding for the HADCRUT/C&W example - see below. The agreement with C&W is now much improved.</div><br />In my <a href="https://moyhu.blogspot.com.au/2017/09/simple-use-of-complex-grid-earth.html">previous post</a>, I introduced the idea of hierarchical, or nested gridding. In earlier posts, eg <a href="https://moyhu.blogspot.com.au/2017/09/the-best-grid-for-earth-temperature.html">here</a> and <a href="https://moyhu.blogspot.com.au/2017/09/grids-platonic-solids-and-surface.html">here</a>, I had described using Platonic solids as a basis for grids on the sphere that were reasonably uniform, and free of the pole singularities of latitude/longitude. I gave data files for icosahedral hexagon meshes of various degrees of resolution, usually proceeding by a factor of two in cell number or length scale. And in that previous post, I emphasised the simplicity of a scheme for working out which cell a point belongs to by finding the nearest centre point. I foreshadowed the idea of embedding each such grid in a coarser parent, with grid averaging proceeding downward, and using the progressive information to supply estimates for empty cells.<br /><br />The following graph from HADCRUT illustrates the problem. It shows July 2017 temperature anomalies on a 5°x5° grid, with colors for cells that have data, and white otherwise. They average the area colored, and omit the rest from the average. As I often argue, as a global estimate, this effectively replaces the rest by the average value. 
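The arithmetic behind that claim is simple, and a toy example (values invented) makes it concrete: averaging only the covered cells gives exactly the same result as first infilling every empty cell with that average and then averaging everything.

```python
# Toy 1-D "grid": None marks a cell with no data (values are invented)
cells = [1.0, None, 0.5, None, 1.5, 1.0]

covered = [c for c in cells if c is not None]
avg = sum(covered) / len(covered)           # average over data cells only

infilled = [avg if c is None else c for c in cells]
avg_all = sum(infilled) / len(infilled)     # infill empty cells, then average all

print(avg, avg_all)   # identical: 1.0 1.0
```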
HADCRUT is aware of this, because they actually average by hemispheres, which means the infilling is done with the hemisphere average rather than global. As they point out, this has an important benefit in earlier years when the majority of missing cells were in the SH, which was also behaving differently, so the hemisphere average is more appropriate than global. On the right, I show the same figure, but this time with my crude coloring in (with Paint) of that hemisphere average. You can assess how appropriate the infill is:<br /><br /><table><tbody><tr><td><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/July17.png" width="400" /></td><td><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/July17painted.png" width="400" /></td></tr></tbody></table><br /><br />A much-discussed paper by <a href="https://moyhu.blogspot.com.au/2013/11/cowton-and-way-trends.html">Cowtan and Way 2013</a> noted that this process led to bias: the areas thus infilled tended not to behave like the average, but were warming faster, and this was underestimated, particularly since 2000, because of the Arctic. They described a number of remedies, and I'll concentrate on the use of kriging. This is a fairly elaborate geostatistical interpolation method. When applied, it brought HADCRUT-based trends more into line with other indices which did some degree of interpolation. <br /><br />I think the right way to look at this is getting the infilling right. HADCRUT was on the right track in using hemisphere averages, but it should be much more local. Every missing cell should be assigned the best estimate based on local data. This is in the spirit of spatial averaging. The cells are chosen as regions of proximity to a finite number of measurement points, and are assigned an average from those points because of the proximity. Proximity does not end at an artificial cell boundary. 
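<br /><br />The equivalence claimed above - that averaging only the cells with data amounts to infilling the empty ones with the average of the rest - is easy to check numerically. Here is a minimal Python sketch with made-up numbers (six equal-area cells, two empty):

```python
import numpy as np

# Toy 6-cell grid with equal areas; two cells have no data.
# The temperature values are made up for illustration.
area = np.ones(6)
temp = np.array([0.2, 0.4, np.nan, 0.6, np.nan, 0.8])
have = ~np.isnan(temp)

# HADCRUT-style average: area-weighted sum over cells with data only.
avg_data_only = np.sum(temp[have] * area[have]) / np.sum(area[have])

# Explicitly infill the empty cells with that same average, then
# average over the whole grid: the result is identical, which is the
# sense in which omitting cells "replaces the rest by the average".
filled = np.where(have, temp, avg_data_only)
avg_infilled = np.sum(filled * area) / np.sum(area)
print(round(avg_data_only, 6), round(avg_infilled, 6))  # -> 0.5 0.5
```

With hemisphere averaging, the same identity holds hemisphere by hemisphere; the infill value is just the hemisphere average instead of the global one.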
<br /><br />In the previous post, I set up a grid averaging based on an inventory of about 11000 stations (including GHCN and ERSST), but integrated not temperature but a simple function, sin(latitude)^2, which should give 1/3. I used averaging omitting empty cells, and showed that at coarse resolution the correct value was closely approximated, but this degraded with refinement, because of the growing number of empty cells. I'll now complete that table using nested integration with the hexagonal grid. At each successive level, if a cell is empty, it is assigned the average value of the smallest cell from a previous integration that includes it. (<span style="color: blue;">I have fixed the which.min error here too; it made little difference.</span>)<br /><br /> <table><tbody><tr><td width="100">level</td><td width="100">Numcells</td><td width="120">Simple average</td><td width="120">Nested average</td></tr><tr><td>1</td><td>32</td><td>0.3292</td><td>0.3292</td></tr><tr><td>2</td><td>122</td><td>0.3311</td><td>0.3311</td></tr><tr><td>3</td><td>272</td><td>0.3275</td><td>0.3317</td></tr><tr><td>4</td><td>482</td><td>0.3256</td><td>0.3314</td></tr><tr><td>5</td><td>1082</td><td>0.3206</td><td>0.3317</td></tr><tr><td>6</td><td>1922</td><td>0.3167</td><td>0.332</td></tr><tr><td>7</td><td>3632</td><td>0.311</td><td>0.3313</td></tr><tr><td>8</td><td>7682</td><td>0.3096</td><td>0.3315</td></tr></tbody></table><br /><br />The simple average shows that there is an optimum: a grid fine enough to resolve the (small) variation, but coarse enough to have data in most cells. The function is smooth, so there is little penalty for being too coarse, but a larger one for being too fine, since the areas of empty cells coincide with the function peak at the poles. The merit of the nested average is that it removes this downside. Further refinement may not help very much, but it does no harm, because a near-local value is always used. <br /><br />The actual coding for nested averaging is quite simple, and I'll give a more complete example below. 
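<br /><br />The inheritance rule itself fits in a few lines. This is a toy Python sketch with made-up numbers, not the R code used here: two coarse cells, each split into two equal-area fine cells, with one fine cell empty. For simplicity the coarse averages are formed directly from the fine data; in the real scheme they come from the coarse grid's own binning.

```python
import numpy as np

# Fine grid: 4 equal-area cells, one empty; parent maps fine -> coarse.
fine_vals = np.array([0.1, np.nan, 0.9, 1.1])
fine_area = np.array([0.25, 0.25, 0.25, 0.25])
parent = np.array([0, 0, 1, 1])

# Coarse-level averages from whatever fine data exists.
coarse_avg = np.array([np.nanmean(fine_vals[parent == k]) for k in range(2)])

# Nested rule: an empty cell inherits its parent's average, so the fine
# grid is complete before the area-weighted average is taken.
filled = np.where(np.isnan(fine_vals), coarse_avg[parent], fine_vals)
nested = np.sum(filled * fine_area) / np.sum(fine_area)

# Simple average ignoring the empty cell, for contrast.
have = ~np.isnan(fine_vals)
simple = np.sum(fine_vals[have] * fine_area[have]) / np.sum(fine_area[have])
print(round(simple, 3), round(nested, 3))  # -> 0.7 0.55
```

The empty cell is filled from its own neighbourhood (parent average 0.1) rather than implicitly from the global average, which is the point of the nesting.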
<br /><h4>HADCRUT and Cowtan/Way</h4>Cowtan and Way thankfully released a <a href="http://www-users.york.ac.uk/~kdc3/papers/coverage2013/series.html">very complete data set</a> with their paper, so I'll redo their calculation (with kriging) with nested gridding and compare results. They used HADCRUT 4.1.1.0, data ending at end 2012. Here is a plot of results from 1980, with nested integration of the HADCRUT gridded data at centres (but on a hex grid). I'm showing every even step as hier1-4, with hier4 being the highest resolution at 7682 cells. All anomalies relative to 1961-90. <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/hadcwnested.png" width="1000" /><br /><div style="color: red;"><b>Update</b> I had made an error in coding for the HADCRUT/C&W example - see code. I had used which.min instead of which.max. This almost worked, because it placed locations in the cells on the opposite side of the globe, consistently. However, the result is now much more consistent with C&W. With refining, the integrals now approach from below, and also converge much more tightly.</div><br />The HADCRUT 4 published monthly average (V4.1.1.0) is given in red, and the Cowtan and Way Version 1 kriging in black. <strike>The nested integration makes even more difference than C&W, mainly in the time from 1995 to early 2000's. Otherwise, a</strike> As with C&W, it adheres closely to HADCRUT in earlier years, when presumably there isn't much bias associated with the missing data. C&W focussed on the effect on OLS trends, particularly since 1/1997. 
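<br /><br />For reference, the trends tabulated here are ordinary least squares fits to the monthly anomaly series, scaled to °C/century. A Python sketch of that arithmetic, on a synthetic stand-in series (the real calculation would use the actual nested-average series):

```python
import numpy as np

# Synthetic monthly anomalies for 1997-2012 (192 months): a built-in
# trend of about 1 C/century plus noise, standing in for a real series.
rng = np.random.default_rng(1)
t = 1997 + (np.arange(192) + 0.5) / 12          # time in years
anom = 0.01 * (t - 1997) + rng.normal(0.0, 0.1, t.size)

# OLS slope in C/year, scaled by 100 to C/century as in the table.
slope, intercept = np.polyfit(t, anom, 1)
print(round(100 * slope, 2), "C/century")  # close to the built-in 1.0
```

OLS takes no account of the autocorrelation of monthly anomalies, which widens the uncertainty of such trends but does not change the central estimate much; that is the OLS-vs-ARMA(1,1) distinction mentioned below the table.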
Here is a table, in °C/Century:<br /><br /><table><tbody><tr><td width="140"></td><td width="140">Trend 1997-2012</td><td width="140">Trend 1980-2012 </td></tr><tr><td>HAD 4</td><td>0.462</td><td>1.57 </td></tr><tr><td>Hier1</td><td>0.902</td><td>1.615 </td></tr><tr><td>Hier2</td><td>0.929</td><td>1.635 </td></tr><tr><td>Hier3</td><td>0.967</td><td>1.625 </td></tr><tr><td>Hier4</td><td>0.956</td><td>1.624 </td></tr><tr><td>C&W krig</td><td>0.97</td><td>1.689 </td></tr></tbody></table><br /><br />Convergence is very good to the C&W trend I calculate. In their paper, for 1997-2012 C&W give a trend of 1.08 °C/Cen (table III) <strike>which would agree very well with the nested results</strike>. C&W used ARMA(1,1) rather than OLS, but the discrepancy seems too large for that. <span style="color: blue;">Update: Kevin Cowtan has explained the difference in a comment below.</span><br /><br /><h4>Method and Code</h4>This is the code for the integration of the monthly sequence. I'll omit the reading of the initial files and the graphics, and assume that we start with the full HADCRUT 4.1.1.0 gridded data (from 1850) reorganised into an array had[72,36,12,163] (lon,lat,month,year); the loop below uses years 131-163, i.e. 1980-2012. The hexmin[[]] lists are as described and <a href="https://moyhu.blogspot.com.au/2017/09/the-best-grid-for-earth-temperature.html">posted</a> previously. The 4 columns of $cells are the cell centres and areas (on sphere). The first section is just making pointer lists from the anomaly data into the grids, and from each grid into its parent. If you were doing this regularly, you would store the pointers and just re-use as needed, since it only has location data. The result is the gridpointer and datapointer lists. The code takes a few seconds. 
<br /><pre>monthave=array(NA,c(12,33,8)) #array for monthly averages <br />datapointer=gridpointer=cellarea=list(); g=0;<br />for(i in 1:8){ # making pointer lists for each grid level<br /> g0=g; # previous g<br /> g=as.matrix(hexmin[[i]]$cells); ng=nrow(g);<br /> cellarea[[i]]=g[,4]<br /> if(i>1){ # pointers to coarser grid i-1<br /> gp=rep(0,ng)<br /> for(j in 1:ng)gp[j]=which.max(g0[,1:3]%*%g[j,1:3])<br /> gridpointer[[i]]=gp<br /> }<br /> y=inv; ny=nrow(y); dp=rep(0,ny) # y is list of HAD grid centres in 3D cartesian<br /> for(j in 1:ny) dp[j]=which.max(g[,1:3]%*%y[j,])<br /> datapointer[[i]]=dp # datapointers into grid i<br />}<br /></pre><span style="color: magenta;">Update: Note the use of which.max here, which is the key instruction locating points in cells. I had originally used which.min, which actually almost worked, because it places points on the opposite side of the globe, and symmetry nearly makes that OK. But not quite. Although the idea is to minimise the distance, that is implemented as maximising the scalar product.</span><br /><br />The main data loop just loops over months, counting and adding the data in each cell (using datapointer) and forming a cell average. Empty cells then inherit values from the parent grid's average vector, using gridpointer to find the match, so at each level ave is complete. There is an assumption that the coarsest level has no empty cells. The cell averages are then combined with area weighting (cellarea, from hexmin) for the monthly average. Then on to the next month. The result is the array monthave[month, year, level] of global averages. 
<br /><pre>for(I in 1:33)for(J in 1:12){ # looping over months in data from 1980<br /> if(J==1)print(Sys.time())<br /> ave=rep(NA,8) # initialising<br /> #tab=data.frame(level=ave,Numcells=ave,average=ave)<br /> g=0<br /> for(K in 1:8){ # over resolution levels<br /> ave0=ave<br /> integrand=c(had[,,J,I+130]) # Set integrand to HAD 4 for the month <br /> area=cellarea[[K]]; <br /> cellsum=cellnum=rep(0,length(area)) # initialising<br /> dp=datapointer[[K]]<br /> for(i in 1:length(integrand)){ # loop over HAD grid cells (the "stations")<br /> ii=integrand[i]<br /> if(is.na(ii))next # no data in cell<br /> j=dp[i]<br /> cellsum[j]=cellsum[j]+ii<br /> cellnum[j]=cellnum[j]+1<br /> }<br /> j=which(cellnum==0) # cells without data<br /> gp=gridpointer[[K]]<br /> if(K>1)for(i in j){cellnum[i]=1;cellsum[i]=ave0[gp[i]]}<br /> ave=cellsum/cellnum # cell averages<br /> Ave=sum(ave*area)/sum(area) # global average (area-weighted)<br /> if(is.na(Ave))stop("A cell inherits no data")<br /> monthave[J,I,K] = round(Ave,4) # weighted average<br /> }<br />}# end I,J<br /></pre><br /><h4>Data</h4>Moyhuhexmin has the hex cell data and was given in the earlier post. I have put a new zipped ascii version <a href="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/Moyhu_hexmin_ascii.zip">here</a>. <br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com9tag:blogger.com,1999:blog-7729093380675162051.post-19970571305189461292017-09-28T14:55:00.000+10:002017-09-28T15:00:07.085+10:00Simple use of a complex grid - Earth temperature.This is a follow-up to my <a href="https://moyhu.blogspot.com.au/2017/09/the-best-grid-for-earth-temperature.html">last post</a>, which refined ideas from an <a href="https://moyhu.blogspot.com.au/2017/09/grids-platonic-solids-and-surface.html">earlier post</a> on using platonic solids as a basis for grids on the sphere that were reasonably uniform, and free of the pole singularities of latitude/longitude. 
I gave data files for use, as I did with an <a href="https://moyhu.blogspot.com.au/2017/06/cubing-sphere.html">earlier post</a> on a special case, the cubed sphere.<br /><br />The geometry involved can be complicated, but a point I made in that last post was that users need never deal with the complexity. I gave a minimal set of data for grids of varying resolution, which basically recorded the mid-points of the cells, and their area. That is all you need to make use of them. <br /><br />I should add that I don't think the hexagon method I recommend is a critical improvement over, say, the cubed sphere method. Both work well. But since this method of application is the same for any variant, just using cell centres and areas in the same way, there is no cost in using the optimal. <br /><br />In this post, I'd like to demonstrate that with an example, with R code for definiteness. I'd also like to expand on the basic idea, which is that near-regular grids of any complexity have the Voronoi property, which is that cells are the domain of points closest to the mid-points. That is why mid-point location is sufficient information. I can extend that to embedding grids in grids of lower resolution; I will recommend a method of hierarchical integration in which empty cells inherit estimates from the most refined grid that has information for their area. I think this is the most logical answer to the empty cell problem. <br /><br />In the demonstration, I will take the inventory of stations that I use for <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">TempLS</a>. It has all GHCN V3 stations together with a selection of ERSST cells, treated as stations located at grid centres. It has 10997 locations. I will show how to bin these, and use the result to do a single integration of data on those points. <br /><br />I start with calling the data for the inventory ("invo.sav") (posted in the dataset for cubed sphere above). 
Then I call the Moyhuhexmin data that I posted in the last post. I am going to do the integration over all 8 resolution levels, so I loop over variable K, collecting results in ave[]: <br /><pre>load("invo.sav")<br />load("Moyhuhexmin.sav")<br />ave=rep(NA,8) # initialising<br />for(K in 1:8){<br /> h=hexmin[[K]] # dataframe for level K<br /> g=as.matrix(h$cells) # 3D coords of centres, and areas<br /> y=invo$z; n=nrow(y); # invo$z are stations; <br /></pre>This is just gathering the information. g and y are the two sets of 3D cartesian coordinates on the sphere to work with. Next I locate y in the cells which have centre g: <br /><pre> pointer=rep(0,n)<br /> for(i in 1:n) pointer[i]=which.max(g[,1:3]%*%y[i,]) # finding closest g to y (max scalar product)<br /></pre>If this were a standalone calculation, I wouldn't have done this as a separate loop. But the idea is that, once I have found the pointers, I would store them as a property of the stations, and never have to do this again. Not that it is such a pain; although I discussed last time a multi-stage process, first identifying the face and then searching that subset, in fact with near 11000 nodes and 7682 cells (highest resolution), the time taken is still negligible - maybe 2 seconds on my PC. <br /><br />Now to do an actual integration. I'll use a simple known function, where one would normally use temperature anomalies assigned to a subset of stations y. I'll use the y-coord in my 3D, which is sin(latitude), and since that has zero integral, I'll integrate the square. The answer should be 1/3. <br /><pre>integrand=y[,2]^2<br />area=g[,4]; <br />cellsum=cellnum=rep(0,nrow(g)) # initialising<br />for(i in 1:n){<br /> j=pointer[i]<br /> cellsum[j]=cellsum[j]+integrand[i]<br /> cellnum[j]=cellnum[j]+1<br />}<br /></pre>area[] is just the fourth column of data from hexmin; it is the area of each cell on the sphere. 
cellsum[] will be the sum of integrand values in the cell, and cellnum[] the count (for averaging). This is where the pointers are used. The final stage is the weighted summation: <br /><pre>o=cellnum>0 # cells with data<br />ave[K] = sum(cellsum[o]*area[o]/cellnum[o])/sum(area[o]) # weighted average<br />} # end of K loop<br /></pre>This is, I hope, fairly obvious R stuff. o[] marks cells with data, which are the only ones included in the sum. area[o] are the weights, and to get the averages I divide by the sum of weights. This is just conventional grid averaging. <br /><h4>Integration results</h4>Here are the results of grid integration of sin^2(lat) at various resolution levels: <br /><br /><table><tbody><tr><td width="10"></td><td width="100">level</td><td width="120">Number of cells</td><td width="100">average </td></tr><tr><td></td><td>1</td><td>32</td><td>0.3292 </td></tr><tr><td></td><td>2</td><td>122</td><td>0.3311 </td></tr><tr><td></td><td>3</td><td>272</td><td>0.3275 </td></tr><tr><td></td><td>4</td><td>482</td><td>0.3256 </td></tr><tr><td></td><td>5</td><td>1082</td><td>0.3206 </td></tr><tr><td></td><td>6</td><td>1922</td><td>0.3167 </td></tr><tr><td></td><td>7</td><td>3632</td><td>0.3108 </td></tr><tr><td></td><td>8</td><td>7682</td><td>0.3096 </td></tr></tbody></table>The exact answer is 1/3. This was reached at fairly coarse resolution, which is adequate for this very smooth function. At finer resolution, empty cells are an increasing problem. Simple averaging ignoring empty cells effectively assigns to those cells the average of the rest. Because the integrand has peak value 1 at the poles, where many cells are empty, those cells are assigned a value of about 1/3, when they really should be 1. That is why the integral diminishes with increasing resolution. It is also why the shrinkage tapers off, because once most cells in the region are empty, further refinement can't make much difference. 
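<br /><br />The same bin-and-average mechanics can be sketched in Python. As a stand-alone illustration (not the hex grid), the cells here are the six equal-area Voronoi regions of the cube-face centres, the "stations" are random points on the sphere, and points are located with the same closest-centre (maximum scalar product) rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# 6 cube-face centres: by symmetry their Voronoi cells on the sphere
# have equal area, 4*pi/6 each.
centres = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                    [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
area = np.full(6, 4 * np.pi / 6)

# Random "stations", uniform on the sphere (normalised Gaussians).
y = rng.normal(size=(20000, 3))
y /= np.linalg.norm(y, axis=1, keepdims=True)

# Locate each point: the closest centre maximises the scalar product.
pointer = np.argmax(y @ centres.T, axis=1)

# Integrand sin^2(latitude), taking the z-axis as polar; exact value 1/3.
integrand = y[:, 2] ** 2
cellsum = np.bincount(pointer, weights=integrand, minlength=6)
cellnum = np.bincount(pointer, minlength=6)
avg = np.sum(area * cellsum / cellnum) / np.sum(area)
print(round(avg, 3))  # close to 1/3
```

With this many points no cell is empty, so the sketch shows only the baseline convergence; the degradation in the table comes in when refinement leaves cells with no data.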
<br /><h4>Empty cells and HADCRUT</h4>This is the problem that <a href="https://moyhu.blogspot.com.au/2013/11/cowton-and-way-trends.html">Cowtan and Way 2013</a> studied with <a href="https://www.metoffice.gov.uk/hadobs/hadcrut4/">HADCRUT</a>. HADCRUT averages hemispheres separately, so they effectively infill empty cells with hemisphere averages. But Arctic areas especially are warming faster than average, and HADCRUT tends to miss this. C&W tried various methods of interpolating, particularly polar values, and got what many thought was an improvement, more in line with other indices. I showed <a href="https://moyhu.blogspot.com.au/2013/11/coverage-hadcrut-4-and-trends.html">at the time</a> that just averaging by latitude bands went a fair way in the same direction. <br /><br />With the new grid methods here, that can be done more systematically. The Voronoi-based matching can be used to embed grids in grids of lower resolution, but fewer empty cells. Integration can be done starting with a coarse grid, and then going to higher resolution. Infilling of an empty cell can be done with the best value from the hierarchy. <br /><br />I use an alternative diffusion-based interpolation as one of the <a href="https://moyhu.blogspot.com.au/2015/10/new-integration-methods-for-global.html">four methods</a> for TempLS. It works very well, and gives results similar to the other two of the three best (node-based triangular mesh and spherical harmonics). I have tried variants of the hierarchical method, with similar effect. <br /><h4>Next</h4>In the next post, I will check out the hierarchical method applied to this simple example, and also to the HADCRUT 4 gridded version. 
I'm hoping for a better match with Cowtan and Way.<br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com0tag:blogger.com,1999:blog-7729093380675162051.post-769485428459176792017-09-26T08:24:00.001+10:002017-09-26T21:05:28.663+10:00The best grid for Earth temperature calculation.<a href="https://moyhu.blogspot.com.au/2017/09/grids-platonic-solids-and-surface.html">Earlier this month</a>, I wrote about the general ideas of gridding, and how the conventional latitude/longitude grids were much inferior to grids that could be derived from various platonic solids. The uses of gridding in calculating temperature (or other field variables) on a sphere are <br /><ol><li>To gather like information into cells of known area </li><li> To form an area weighted sum or average, representative of the whole sphere </li><li> A necessary requirement is that it is feasible to work out in which cell an arbitrary location belongs </li></ol>So a good grid must have cells small enough that variation within them has little effect on the result ("like"), but large enough that they do significant gathering. It is not much use having a grid where most cells are empty of data. This leads to two criteria for cells that balance these needs:<br /><ul><li>The cells should be of approximately equal area.</li><li>The cells should be compact, so that cells of a given area can maximise "likeness".</li></ul>Lat/lon fails because:<br /><ul><li>cells near poles are much smaller</li><li>the cells become long and thin, with poor compactness</li></ul>I showed platonic solid meshes with triangles and squares that are much less distorted, and with more even area distribution. Clive Best, too, has been looking at <a href="http://clivebest.com/?p=8014">icosahedra</a>. I have also been looking at ways of improving area uniformity. But I haven't been thinking much about compactness. The ideal there is a circle. 
Triangles deviate most; rectangles are better, if nearly square. But better still are regular hexagons, and that is my topic here. <br /><br />With possibly complex grids, practical usability is important. You don't want to keep having to deal with complicated geometry. With the cubed sphere, I posted <a href="https://moyhu.blogspot.com.au/2017/06/cubing-sphere.html">here</a> a set of data which enables routine use with just lookup. It includes a set of meshes with resolution increasing by factors of 2. The nodes have been remapped to optimise area uniformity. There is a lookup process so that arbitrary points can be celled. But there is also a list showing in which cell the stations of the inventory that I use are found. So although the stations that report vary each month, there is a simple geometry-free grid average process:<br /><ul><li>For each month, sort the stations by cell label</li><li>Work out cell averages, then look up cell areas for weighted sum.</li></ul>I want to do that here for what I think is an optimal grid. <br /><br />The optimal grid is derived from the triangle grid for icosahedra, although it can also be derived from the dual dodecahedron. If the division allows, the triangles can be gathered into hexagons, except near vertices of the icosahedron, where pentagons will emerge. This works provided the triangle faces are initially trisected, and then further divided. There will be 12 pentagons, and the rest hexagons. I'll describe the mapping for uniform sphere surface area in an appendix. <br /><h4>Lookup</h4>I have realised that the cell finding process can be done simply and generally. Most regular or near-regular meshes are also <a href="https://en.wikipedia.org/wiki/Voronoi_diagram">Voronoi nets</a> relative to their centres. That is, a cell includes the points closest to its centre, and not those closer to any other centre. So you can find the cell for a point by simply looking for the closest cell centre. 
For a sphere that is even easier; it is the centre for which the scalar product (cos angle) of 3D coordinates is greatest. <br /><br />If you have a lot of points to locate, this can still be time-consuming, if mechanical. But it can be sped up. You can look first for the closest face centre (of the icosahedron). Then you can just check the cells within that face. That reduces the time by a factor of about 20. <br /><h4>The grids</h4>Here is a WebGL depiction of the results. I'm using the <a href="https://moyhu.blogspot.com.au/p/moyhu-webgl-earth-facility.html">WebGL facility, V2.1</a>. The sphere is a trackball. You can choose the degree of resolution with the radio buttons on the right; hex122, for example, means a total of 122 cells. They progress with factors of approx 2. The checkboxes at the top let you hide various objects. There are separate objects for red, yellow and green, but if you hide them all, you see just the mesh. The colors are designed to help see the icosahedral pattern. Pentagons are red, surrounded by a ring of yellow. <br /><br /><div id="PxBody"></div><script src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/pages/webgl/MoyJSlib.js" type="text/javascript"> </script><script src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/hex.js" type="text/javascript"> </script><script src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/pages/webgl/Map.js" type="text/javascript"> </script><script src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/pages/webgl/MoyGLV2.js" type="text/javascript"></script><br /><br />The grid imperfections now are just a bit of distortion near the pentagons. This is partly because I have forced them to expand to have similar area to the hexagons. For grid use, the penalty is just a small loss of compactness. <br /><h4>Data</h4>The data is in the form of an R save file, for which I use the suffix .sav. There are two. 
One <a href="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/hexmin.sav">here</a> is a minimal set for use. It includes the cell centre locations, areas, and a listing of the cells within each face, for faster search. That is all you need for routine use. There is a data-frame with this information for each of the 8 levels of resolution, with maximum 7682 cells (hex7682). There is a doc string. <br /><br />The longer data set is <a href="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/hex.sav">here</a>. This has the same levels, but for each there are dataframes for cells, nodes, and the underlying triangular mesh. A dataframe is just R for a matrix that can have columns of various types, suitably labelled. It gives all the nodes of the triangular mesh, with various details. There are pointers from one set to another. There is also a doc string with details. <br /><h4>Appendix - equalising area</h4>As I've occasionally mentioned, I've spent a lot of time on this interesting math problem. The basic mapping from platonic solid to sphere is radial projection. But that distorts areas that were uniform on the solid. Areas near the face centres are projected further (thinking of the solid as within the sphere) and grow. There is also, near the edges, an effect due to the face plane slanting differently to the sphere (like your shadow gets smaller when the sun is high). These distortions get worse when the solid is further from spherical. <br /><br />I counter this with a mapping which moves the mesh on the face towards the face centre. I initially used various polynomials. But now I find it best to group the nodes by symmetry - subsets that have to move in step. Each has one (if on edge) or two degrees of freedom. Then the areas are also constrained by symmetry, and can be grouped. I use a Newton-Raphson method (actually secant) to move the nodes so that the triangles have area closest to the ideal, which is the appropriate fraction of the sphere. 
There are fewer degrees of freedom than areas, so it is a kind of regression calculation: a best least-squares fit, not exact. You can check the variation in areas; it gets down to a few percent. <br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com12