tag:blogger.com,1999:blog-77290933806751620512017-11-25T07:17:50.712+11:00moyhuNick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.comBlogger699125tag:blogger.com,1999:blog-7729093380675162051.post-32751214476182184412017-11-19T09:32:00.000+11:002017-11-19T13:14:52.098+11:00Pat Frank and Error Propagation in GCMs<meta charset="UTF-8">This post reviews recent controversy over some strange theories of Pat Frank. It reviews his WUWT posts, blog discussion, some obvious errors, and more substantively, how error propagation really works in the numerical solution of partial differential equations (pde, the class of problem GCM climate models solve), why it is important, and why it is well understood. It's a bit long so I'll start with a TOC. <h4>Contents:</h4><ul><li><a href="#1">The story</a><li><a href="#2">Responses</a><li><a href="#3">ad Absurdum</a><li><a href="#4">How error really propagates in solving differential equations</a><li><a href="#5">Eigenvalues</a><li><a href="#6">Summary</a></ul><br><a name='more'></a><h4 id="1">The story</h4>There has been a long running soap opera about attempts by <a href="https://wattsupwiththat.com/2015/02/24/are-climate-modelers-scientists/">Pat Frank</a> to publish a paper which claims that climate modellers are ignoring something he calls "propagation of errors", which he says would yield extraordinarily large error bars, and invalidate GCM results. These attempts have been unsuccessful, about which he has been becoming <a href="https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/">increasingly strident</a>. He says the six journals and many referees that have rejected his papers, often with scathing reviews, are motivated only by conflict of interest, desire to preserve their funding etc. Publication of Pat Frank's papers would bring that all down. 
The most recent outburst at WUWT was <a href="https://wattsupwiththat.com/2017/11/12/consensus-climatology-in-a-nutshell-betrayal-of-integrity/">here</a>, where a seventh journal promptly rejected it. James Annan and Julie H were involved. <br><br>What he means by "propagation of errors" is just the process of forming an expression combining variables with associated error, and expressing this in terms of a total derivative, with the error magnitudes then combining in quadrature (they are assumed independent). That is certainly not something scientists are ignorant of, but it doesn't express what is happening here. There is a lot more involved in differential equation solution. <h4 id="2">Responses</h4> Far too much time has been taken up with this nuttiness. First of course by the journals, who have to deal with not only normal reviewing, but a stream of corrosive correspondence. ATTP has had, I think, several threads, the latest <a href="https://andthentheresphysics.wordpress.com/2017/10/23/watt-about-breaking-the-pal-review-glass-ceiling/">here</a>. James Annan expands on his cameo <a href="http://julesandjames.blogspot.com/2017/11/watts-up-with-pat-frank.html">here</a>. And Patrick Brown even <a href="https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/">put together a video</a>. Further back, similar stuff made the rounds of skeptic sites, eg <a href="https://noconsensus.wordpress.com/2011/08/08/a-more-detailed-reply-to-pat-frank-part-1/">here</a>. The more mathematical folks there thought it was pretty nutty too. <br><br>As reference materials, Pat Frank posted his paper and also a listing of the journal reviews and correspondence <a href="https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/">here at WUWT</a>. His link to the paper is <a href="https://uploadfiles.io/zlrt6">here</a>. It's not a short paper - 13.4Mb. 
His link to the zipfile of reviews (44 Mb) is <a href="https://uploadfiles.io/f5luc">here</a>. <h4 id="3">ad Absurdum</h4>As you might expect, I have been joining in at WUWT too. My view, which I'll expand on below, is that propagation of error in numerical partial differential equations is very important, necessarily gets much attention, and is nothing like what he describes. But when folks like Eric Worrall at WUWT are solemnly chiding me for not understanding that propagation is a random walk, I can see expounding the correct theory there would be a waste of time. So I instead try <i>reductio ad absurdum</i> on a few rather obvious points. But the audience at WUWT is not sensitive to <i>absurdum</i>. <br><br>In the first recent post I <a href="https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/#comment-2643840">picked up on</a> a criticism of a reviewer that Pat Frank had insisted on a change of units when he averaged quantities over 20 years. He said that the units became quantity/year. Others had raised this, so in the text he had doubled down. Surely this qualifies as absurd? <br><br>I again tried a somewhat peripheral approach, saying, why years? Why not regard it as a period of 240 months, and say quantity/month. He said, well the quantity would change. 4 W/m2/year would become 1/3 W/m2/month. But the problem there is that he was using data from someone else's paper (Lauer and Hamilton, 2013) who simply said the average was 4 W/m2. No time interval specified. <br><br>It actually relates to the objection many referees raised, including James Annan. If you're going to say the error adds like a random walk, what is the time interval? Why per year? The choice of step interval obviously affects the result. He gave some strange replies. 
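To see why the referees pressed on the time interval, here is a minimal simulation sketch (Python; the numbers are illustrative only, borrowing the ±4 W/m² figure from the disputed argument) of how a random-walk accumulation depends entirely on what you decide to call a step:

```python
import math
import random

random.seed(0)

def endpoint_rms(per_step_sigma, n_steps, n_trials=2000):
    """RMS endpoint of a zero-drift random walk with the given per-step sd."""
    total = 0.0
    for _ in range(n_trials):
        end = sum(random.gauss(0.0, per_step_sigma) for _ in range(n_steps))
        total += end * end
    return math.sqrt(total / n_trials)

# A century of "accumulation" with the same 4-unit uncertainty attached
# to every step.  Whether a step is a year or a month changes the answer
# by sqrt(12), which is the referees' point: the walk model needs a
# physically justified step interval, and none is given.
per_year = endpoint_rms(4.0, 100)     # theory: 4*sqrt(100) = 40
per_month = endpoint_rms(4.0, 1200)   # theory: 4*sqrt(1200), about 138.6
```

The sqrt(N) growth means the claimed uncertainty is not a property of the physics at all, but of an arbitrary bookkeeping choice.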
<a href="https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/#comment-2647233">Here</a> he expounds the difference between a "measurement average" and a "statistical average": <br><br><div style="color:#aa0000">Several measurements of the height of one person: meters. A measurement average. <br>Average height of people in a room: meters per person. A statistical average. <br>Middle school math, and a numerical methods PhD <span style="color:#888888">[moi]</span> stumbles over it.</div><br><br>So, what if you sample in Europe to see whether Dutchmen are taller than Greeks? You get 1.8 m/Dutch and 1.7 m/Greek. Can you compare if they are in different units? (h/t JA) <br><br>On the second thread, I <a href="https://wattsupwiththat.com/2017/11/12/consensus-climatology-in-a-nutshell-betrayal-of-integrity/#comment-2663573">picked up on</a> a claim from PF's review of JA's review: <br><br><div style="color:#aa0000">He wrote, “… ~4W/m^2 error in cloud forcing…” except it is ±4 W/m^2 not Dr. Annan’s positive sign +4 W/m^2. Apparently for Dr. Annan, ± = +. <br>...<br>How does it happen that a PhD in mathematics does not understand rms (root-mean-square) and cannot distinguish a “±” from a “+”?</div><br><br>This was a less central point, but it throws down the gauntlet. Either JA or PF doesn't have a clue. So I thought, surely there are some there who are familiar with RMS, or the particular case of standard deviation? Surely they know they are always written as positive numbers, with good reason. But to reinforce it, I challenged them to find somewhere, anywhere, where an RMS quantity was written with a ±. But no-one could. PF did come up with a number of locations, which wrote the number as positive, but used it in an expression like a±σ. I couldn't get across the distinction. Yes, 3±2 is an interval, but 2 is a positive number. It isn't JA who is clueless. 
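For readers unsure of the ± point, a two-line check (Python; the residual values are made up) shows what an RMS is: the residuals can have either sign, but the RMS itself is a single non-negative number, which is then used in two-sided intervals like a ± rms:

```python
import math

def rms(xs):
    """Root-mean-square: square, average, then take the (non-negative) root."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

residuals = [3.5, -4.2, 4.1, -3.8]   # signs mixed...
r = rms(residuals)                   # ...but r is about 3.91, and positive
interval = (10.0 - r, 10.0 + r)      # the ± appears only when quoting a ± r
```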
<br><br>The depressing thing to me was that there were people there who seemed to have some acquaintance with statistics or math, and would never before have thought of adding a ± to express an RMS. But now it was all so obvious to them that you should. The same conversion happened with the average units. <h4 id="4">How error really propagates in solving differential equations</h4>It's actually a very important practical issue for anyone who solves large systems of differential equations (de's), as I do. Because you always have error, if only from rounding. And if it can grow without limit, the solution is unstable. <br><br>A system of partial de's, as with GCMs, can be first discretised in space, to give a large but finite set of nodal values (and perhaps derivatives, or other quantities) with relations between them. There is a simplification that usually only nearby locations are related, which means the resulting set of ordinary de's (in time) are sparse. If the quantities are A (big vector) and the drivers F, the de could be written linearised as <br>dA/dt = -S*A + F<br>where S is the big sparse matrix. The time step will need to be taken short enough so that S varies slowly between steps, and it is enough to take it as constant. It can be expressed in terms of eigenvectors. So the driver, which could include an error term, is then also expressed in terms of the eigenvectors, and you can think of the equation in each direction as just being a scalar equation <br>da/dt = -s*a + f<br>The solution of this is a = exp(-s*t) ∫ exp(s*τ)*f dτ <br>where the range of integration is from arbitrary t0 to t.<br>As you go backward in time from t, the multiplier of f dies away exponentially, if s>0. So a "propagates" f to only a limited extent. It's actually an exponentially smoothed version of f. S may not be symmetric, so the eigenvalues could be complex, and the criterion then is real part Re(s)>0. 
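A small numerical experiment (Python; all values are illustrative, not taken from any GCM) makes the contrast concrete. The same noise, fed through a decaying equation da/dt = -s*a + f, stays bounded, while plain accumulation (the s = 0 case, a random walk) wanders:

```python
import random

random.seed(1)

# Explicit Euler on da/dt = -s*a + f, where f is pure noise (zero driver),
# to watch how a forcing error "propagates".  Assumed values: s = 1 (a
# decaying mode), noise sd 0.1, time step h = 0.01, 100000 steps.
s, h, sigma = 1.0, 0.01, 0.1
n_steps = 100000

a = 0.0        # the decaying mode
walk = 0.0     # the same noise accumulated with no decay (s = 0)
amax = walkmax = 0.0
for _ in range(n_steps):
    eps = random.gauss(0.0, sigma)
    a += h * (-s * a + eps)
    walk += h * eps
    amax = max(amax, abs(a))
    walkmax = max(walkmax, abs(walk))
# With decay the mode is an exponentially smoothed version of the noise
# and stays small; without decay the error keeps growing like sqrt(t).
```

The stationary sd of the decaying mode is about sigma*sqrt(h/(2s)), here under 0.01, while the undamped walk drifts to many times that over the run.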
<br><br>The same thing persists after you discretise in time, so integration is replaced by summation in discrete steps. This is actually where care is needed to ensure that a stable de converts to a stable recurrence. But that can be done. <br><br>So to prevent instability, it must certainly be true that the eigenvalues satisfy Re(s)≥0. The interesting cases are when Re(s) is small. I'll look at the possible spectrum. <h4 id="5">Eigenvalues</h4>In the recurrences that result from a GCM, there are millions of eigenvalues. The salvation is that most of them have relatively large real part, and correspond to rapidly decaying processes (typically, hours for a GCM). They are all the local flows that could be generated on the element scale, which lose kinetic energy rapidly through shearing. That is why the solution process can make some sense, because one can focus on a much smaller eigenspace where decay is either absent or slow on the scale, perhaps decades, of the solution being sought. I'll look at three classes: <ul><li>Waves. Here s has zero (or near-zero) real part, but large imaginary part, so decay is slow and period is comparable to the time step. In classical CFD, these would be sound waves. They are important, because they create action at a distance, propagating the pressure field so the medium behaves like a not very compressible fluid (elasticity). In GCM's, they merge into gravity waves, because a pressure peak can move the air vertically. That actually reduces the propagation velocity. The waves do have some dissipation (eg surface shear) so the ideal s has small positive real part.<br><br>These waves are important, because to the extent they aren't well resolved in time after time discretisation, s can have negative real part, and if it outweighs dissipation, the wave will grow and the solution fail. That is why GCMs have a maximum time step of about half an hour, which demands powerful computers. <li>Secular motion and slow decay. 
The secular part is the desired solution responding to the driver F; I've bundled in the slow decay processes because they are hard to separate. The usual way is to wind back, so that only the slowest remain from the initial perturbation. But some does remain, and that is why GCMs tend to have apparently similar solutions which may differ by a degree or two in temperature, say. There are two remedies for this: <ul><li>use of anomalies, which subtracts out the very slow decay, leaving the secular variation<li>Increased care with initial conditions. The smaller the component of these modes that is present initially, the smaller the later problem.</ul><li>Conserved quantities - energy and mass (of each component). These totals correspond to zero eigenvalues; they shouldn't change. But in this case small changes do indeed accumulate, and that is a problem. GCMs include energy and mass "fixers" (see <a href="http://www.cesm.ucar.edu/models/atm-cam/docs/description/node2.html">CAM 3</a> for examples). The totals are monitored, discrepancies noted, and then corrected by adding in distributed amounts as required. <br><br>I remember a post at WUWT where Willis Eschenbach discovered this and cried malpractice. But it is perfectly legitimate. GCMs are models which incorporate all the math that the solution should conform to, and conservation is one such. It is true that as an extra constraint it appears to make more equations than variables. But large systems of equations are always ill-conditioned in some way; there aren't as many effective equations as it seems. That is why the conservations were able to erode, and the fixers just restore that. "Fixing" creates errors in the first class of eigenvalues, because while care should be taken to do it without introducing perturbations in their space, this can't be done perfectly. But again the salvation is that such errors rapidly decay. 
</ul><h4 id="6">Summary</h4>So what do we have with error propagation, and why is Pat Frank so wrong? <ul><li>Inhomogeneous equation systems are driven by a mix of intended drivers and error. <li>The effect of each is generally subject to exponential decay. The resulting solution is not a growing function of past errors, but in effect, like an exponential smooth of them. This may increase by a steady factor, but will also attenuate noisy error. <li>Propagation is not something that is ignored, as Pat Frank claims, but is a central and much studied part of numerical pde. </ul><br><br><br><br> Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com29tag:blogger.com,1999:blog-7729093380675162051.post-31505131889412173062017-11-17T04:44:00.000+11:002017-11-17T04:44:33.738+11:00GISS October global up 0.1°C from September, now 0.9°C.GISS <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">warmed</a>, going from 0.80°C in September to 0.90°C in October (GISS report <a href="https://data.giss.nasa.gov/gistemp/news/20171116/">here</a>). That is very similar to the <a href="https://moyhu.blogspot.com.au/2017/10/giss-september-down-004-from-august.html">increase</a> (now 0.105°C) in TempLS mesh. It was the second warmest October on record, after 2015. It is interesting to reflect on that month two years ago, which at 1.04°C was at the time by some margin the warmest month ever. <br /><br />The overall pattern was similar to that in TempLS. Even warmer in Antarctica, cool in W US, and E Mediterranean, but the band of cool through to China is less continuous. <br /><br />As usual here, I will compare the GISS and previous TempLS plots below the jump. 
<br /><a name='more'></a><br />Here is GISS<br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/11/GISSoct.png" /><br /><br />And here is the TempLS spherical harmonics plot <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/10/map.png" /><br /><br /><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show results with adjusted data, and also with different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. 
</div><br /><br /><br /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com2tag:blogger.com,1999:blog-7729093380675162051.post-58298633115838515852017-11-08T03:58:00.000+11:002017-11-08T04:00:33.435+11:00October TempLS global surface temperature up 0.11°C<a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">TempLS mesh</a> anomaly (1961-90 base) was up from 0.618°C in September to 0.73°C in October. This compares with the <a href="https://moyhu.blogspot.com.au/2017/11/october-ncepncar-global-anomaly-up-0055.html">smaller rise</a> of 0.055°C in the NCEP/NCAR index, and a <a href="http://www.drroyspencer.com/2017/11/uah-global-temperature-update-for-october-2017-0-63-deg-c/">similar rise</a> (0.09) in the UAH LT satellite index. <br /><br />The anomaly pattern was similar to the NCEP/NCAR picture. No great heat, except in Antarctica, but warm almost everywhere. A band of cool between the Sahara and Far East Russia, cool in the SE Pacific, and some cool in the western US. The change toward warmth at the poles will again lead to varying results in the major indices, with GISS likely to rise strongly compared with NOAA and HADCRUT. In fact, TempLS grid, which also has less coverage at the poles, fell slightly in October. Overall, sea temperatures rose a little, after dropping last month. <br /><br /><a href="http://www.drroyspencer.com/2017/11/uah-global-temperature-update-for-october-2017-0-63-deg-c/">Roy Spencer</a> noted the recent marked rise in the satellite records, which are now relatively a lot higher than the surface. I'll show (from <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#fig4">here</a>) the graph of the last four years on a common 1981-2010 base. Even UAH, which had been a low outlier for most of the time, is now well above the surface measures, and is not far below the 1998 peak. 
<br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/pages/latest/T/mth2.png" /><br /><br />Here is the temperature map: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/10/map.png" /><br /><a name='more'></a><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show results with adjusted data, and also with different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. 
</div><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com11tag:blogger.com,1999:blog-7729093380675162051.post-21236367758857701102017-11-03T05:18:00.000+11:002017-11-03T05:18:09.217+11:00October NCEP/NCAR global anomaly up 0.055°C from SeptemberIn the <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#NCAR">Moyhu NCEP/NCAR index</a>, the monthly reanalysis average rose from 0.317°C in September to 0.372°C in October, 2017, making it the warmest month since May. It was again a very varied month: starting cool, then a warm spike, and cool again at the end. <br><br>The main feature was a big cool band from the Sahara through Siberia to China. Also cold in the SE Pacific, with some La Nina like pattern. Mixed at the poles, but more warm than cold. Since Antarctica was cold last month, this suggests the warming will be reflected more strongly in GISS/TempLS rather than NOAA/HADCRUT. <br><br><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/data/freq/days.png"></img><br><br><br><br><br><br> Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com12tag:blogger.com,1999:blog-7729093380675162051.post-67471062847389542612017-11-01T22:21:00.000+11:002017-11-02T16:32:53.567+11:00Penguins - more fantasy about a Time coverThere was a <a href="https://wattsupwiththat.com/2017/10/30/how-google-and-msm-use-fact-checkers-to-flood-us-with-fake-claims">really crazy article</a> at WUWT by Leo Goldstein on fact checkers and allegations of fakery. Leo G regularly publishes really paranoid stuff on Google conspiracies etc. But this one was a doozy, titled "How Google and MSM Use “Fact Checkers” to Flood Us with Fake Claims". Here is an extract: <br /><blockquote style="background-color: #eef2ff;">An example is a global cooling scare of the 70s. 
In 1977, Time magazine published an issue under the following cover: <br /><br /><img src="https://wattsupwiththat.files.wordpress.com/2017/10/time-1977-bigfreeze.jpg" width="200" /><br /><br />That cover is a seriously inconvenient truth for climate alarmists and their media accessories. So, Time attempted to re-write a history. It published a forged version of its own cover, the left one on the following picture (the “Time-2013-version-of-1977”): <br /><br /><img src="https://wattsupwiththat.files.wordpress.com/2017/10/facebook_meme_global_cooling_11-fake.gif" width="400" /><br /><br />…and then easily debunked it as a photoshopped version of its April 2007 cover (3). As I will explain below, Time magazine knew it was launching a hoax. The rest of the liberal media popularized it, although it could have easily recognized it. Snopes adopted it (4), invented additional details that were not present in the Time article, and angrily condemned “climate deniers.” <br />... </blockquote>And there is lots more about how "notorious Greg Laden" exposed the hoax etc <br /><blockquote style="background-color: #eef2ff;">The “original source” of the fake cover is hard to trace. It is almost certainly somebody in the climate alarmism camp: the real cover from 1977 was very clearly making a point against climate alarmism. But the point of entry of the forgery into mass circulation was Time magazine, June 6 of 2013. Good job, motherf*ckers.</blockquote>My initial commentary was a bit confused, mainly because we have recently had NBN (a new optic fibre system, much complained about) installed, and it kept disconnecting from the internet. However, I looked into it a bit more and found quite a lot of history. 
<br /><br />Firstly, some links.<br /><ul><li> <a href="http://time.com/4778937/fake-time-cover-ice-age/">2017 Time report</a> on how it even got to Trump's desk, supplied by senior aide <a href="https://en.wikipedia.org/wiki/K._T._McFarland">K.T. McFarland</a>, now US ambassador to Singapore. </li><li><a href="https://www.politico.com/story/2017/05/15/donald-trump-fake-news-238379">The Politico story</a> about the Trump occurrence. </li><li><a href="http://science.time.com/2013/06/06/sorry-a-time-magazine-cover-did-not-predict-a-coming-ice-age/">Time's 2013 article</a> on the forgery </li><li><a href="http://gregladen.com/blog/2013/06/04/the-1970s-ice-age-myth-and-time-magazine-covers-by-david-kirtley/">David Kirtley at Greg Laden</a>, a few days earlier </li><li><a href="https://www.snopes.com/the-coming-ice-age/">Snopes 2017</a></li><li>The actual Time 1977 cover is <a href="https://archive.is/gWpXa">here</a>. </li></ul><br />Goldstein is of course talking nonsense about the forgery being a plant designed to pick up on the 1977 Big Freeze cover. That cover wasn't about global cooling at all. It was a straightforward factual article about a very snowy winter in 1976/7 in North America. There was in fact a 1974 Time article on global cooling that people might have wanted to look up, as the Time link describes. But there was no cover associated with that, although the contrary is widely believed. <br /><br />But I did some more searching. First some notable occurrences of the hoax, which is actually ten years old: <br /><ul><li> As mentioned, it apparently got to President Trump via K.T. McFarland </li><li> <a href="http://www.dailymail.co.uk/news/article-2294560/The-great-green-1-The-hard-proof-finally-shows-global-warming-forecasts-costing-billions-WRONG-along.html">David Rose</a> at Daily Mail, "Great Green Con". 
See the blue box which says <i>"In the Seventies, scientists and policymakers were just as concerned about a looming ‘ice age’ as they have been lately about global warming – as the Time magazine cover pictured here illustrates"</i>. The picture has now been removed without explanation, which doesn't help the clarity of the text. </li><li> <a href="https://wattsupwiththat.com/2017/01/28/homogenization-of-temperature-data-makes-capetown-south-africa-have-a-warmer-climate-record/">WUWT, 2017</a></li><li><a href="http://www.drroyspencer.com/2013/10/the-danger-of-hanging-your-hat-on-no-future-warming/">Roy Spencer, 2013</a></li></ul><br />There clearly was more history, since most of these don't have the 1977 pasted over as in the Time/Kirtley versions. So I did a bit more searching. <br /><br /><a href="https://stevengoddard.wordpress.com/2011/06/18/tima-magazine-warned-us-about-climate-change/">Steven Goddard</a>, 2011 is an old reliable. This predates 2013, so clearly debunks the Goldstein fantasy about Time forging its own cover in 2013. <br /><br /><a href="http://neoconexpress.blogspot.com.au/2007/02/time-like-newsweek-predicted-iceage-in.html">Neocon Express</a> is the earliest nominal date, at Feb 12, 2007. But that predates the real cover, so I presume the image was added later. Of interest is a Jan 2011 comment, drawing attention to the fakery. <br /><br />But the most interesting early occurrence was in August 2007, in <a href="http://www.freerepublic.com/focus/f-chat/1887747/posts#comment">Free Republic</a>. That was soon after the genuine cover in April 2007. But this one is an animated GIF, and shows alternately the fake and the real. I'm not sure what the point is, but it must be getting close to the source, where it was somehow, I suppose, seen as parody. The URL links to <a href="https://www.strangepolitics.com/archive/archive_40.html">this site</a>, which seems to be for prank pictures, but I couldn't find an original there. 
<span style="color:#9955ff">Update: The picture numbering on the StrangePolitics site isn't entirely consistent, but seems to place the original in April 2007, the month of the genuine cover.</span><br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/11/penguins.gif" /><br /><br />Here is Goldstein's summary:<br /><blockquote>In this example, multiple entities are involved: Google, Snopes, Time magazine, and ScienceBlogs. They are independent entities, but each of them knowingly plays its own well-defined role in the chain of injection, amplification, propagation, and utilization of a lie. Thus, they might be referred to as a single body. </blockquote><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com14tag:blogger.com,1999:blog-7729093380675162051.post-76062797011442652512017-10-22T05:46:00.001+11:002017-10-22T05:53:26.739+11:00Averaging and error propagation - random walk - math<meta charset="UTF-8" />I have been <a href="https://wattsupwiththat.com/2017/10/14/durable-original-measurement-uncertainty">arguing again</a> at WUWT. There is a persistent belief there which crops up over and over, that averages of large numbers of temperatures must have error estimates comparable to those for their individual components. The usual statement is that you can't make a bad measure good just by repeating over and over. I try to point out that the usually criticised climate averages are not reached by repeating one measure many times, and have invited people to identify the occurrence of such a problem that concerns them, without success. <br><br>I dealt with a rather similar occurrence of this issue <a href="https://moyhu.blogspot.com.au/2016/04/averaging-temperature-data-improves.html">last year</a>. There I showed an example where Melbourne daily maxima given to 1 decimal (dp) were averaged over a month, for several months, and then averaged again after rounding to nearest integer. 
As expected, the errors in averaging were much less than 1°C. The theoretical value is the standard deviation of the unit uniform distribution (sqrt(1/12), approx 0.29) divided by the sqrt of the number in the average, and the results were close. This time I did a <a href="https://wattsupwiththat.com/2017/10/14/durable-original-measurement-uncertainty/comment-page-1/#comment-2636539">more elaborate averaging</a> with a century of data for each month. As expected, this reduced the error (discrepancy between the 1dp mean and the 0dp mean) by a factor of 10. <br><br>I also showed <a href="https://moyhu.blogspot.com.au/2015/02/surface-temperature-global-average-is.html">here</a> that for the whole process of averaging over time and globally over space, adding white noise to all monthly averages of amplitude 1°C made almost no difference to the global anomaly time series. <br><br><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/GHCN/AddRandom/ts.png" width=500></img><br><br>The general response is that there is something special about the measurement errors which would make them behave differently to the rounding change. And there are usually arguments about whether the original data was really as accurate as claimed. But if one could somehow have perfect data, it would just be a set of rather similar numbers distributed over a similar range, and there is no reason to expect rounding to have a different effect. Nor is there any kind of variation that could be expected to have a different effect to rounding, as long as there is no bias; that is, as long as the errors are equally likely to be up or down. If there is bias, then it will be propagated. That is why bias should be the focus. 
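The Melbourne exercise is easy to reproduce in outline (Python; synthetic uniform "temperatures" stand in for the real data): round a month of 1-decimal values to whole degrees and see how far the monthly mean moves. The discrepancies have RMS close to sqrt(1/12)/sqrt(30), about 0.053°C, not the ~0.3°C of a single rounding:

```python
import math
import random

random.seed(2)

# Synthetic stand-in for the Melbourne exercise: 1-decimal daily maxima,
# averaged, then averaged again after rounding to the nearest degree.
n_months, n_days = 1000, 30
discrepancies = []
for _ in range(n_months):
    temps = [round(random.uniform(10.0, 30.0), 1) for _ in range(n_days)]
    mean_1dp = sum(temps) / n_days
    mean_0dp = sum(round(t) for t in temps) / n_days
    discrepancies.append(mean_0dp - mean_1dp)

# RMS discrepancy across months, vs the sqrt(1/12)/sqrt(N) theory
rms_err = math.sqrt(sum(d * d for d in discrepancies) / n_months)
theory = math.sqrt(1.0 / 12.0) / math.sqrt(n_days)   # about 0.053
```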
<br><br>Here is a table of contents for what is below the fold: <ul><li><a href="https://moyhu.blogspot.com.au/2017/10/averaging-and-error-propagation-random.html#H1">Random Walk</a><li><a href="https://moyhu.blogspot.com.au/2017/10/averaging-and-error-propagation-random.html#H2">Extreme rounding</a><li><a href="https://moyhu.blogspot.com.au/2017/10/averaging-and-error-propagation-random.html#H3">Binning and Poisson Summation.</a></ul><a name='more'></a><h4 id="H1">Random walk</h4>I think a useful way of viewing this is to think of the process of building the average. I'll stick to simple averages here, formed by just summing N numbers and dividing by N. The last step is just scaling, so we can focus on the addition. If the elements just have a mean value and a random error, then the cumulative sums are a classical one dimensional random walk, with drift. If the mean can be subtracted out, the drift is zero, and the walk is just the accumulation of error over N steps. Now if the steps are just a unit step either up or down, equally likely and independent from step to step, the root-mean-square distance from the origin after N steps is sqrt(N). It could be up or down, or of course it could be zero - back at the start. The reduced result from N steps reflects the inefficiency of random walk as a way of getting anywhere. With averaging, the step is of varying length, from a distribution, but again if independent and with sd σ, the RMS distance is σ*sqrt(N). If not independent, it may be larger. For the simplest autocorrelation, AR(1), with correlation factor r between 0 and 1, the distance is amplified by the Quenouille correction sqrt((1+r)/(1-r)). But it would take a very large correlation to bring the distance up close to σN. 
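A quick simulation (Python, illustrative parameters) checks both statements: independent steps give an RMS endpoint of σ*sqrt(N), and AR(1) steps inflate that by about sqrt((1+r)/(1-r)):

```python
import math
import random

random.seed(3)

def endpoint_rms(n, r, sigma=1.0, trials=3000):
    """RMS endpoint of a walk whose steps are AR(1) with marginal sd sigma."""
    total = 0.0
    scale = math.sqrt(1.0 - r * r)   # keeps the marginal step sd equal to sigma
    for _ in range(trials):
        step, cum = 0.0, 0.0
        for _ in range(n):
            step = r * step + scale * random.gauss(0.0, sigma)
            cum += step
        total += cum * cum
    return math.sqrt(total / trials)

n = 400
iid = endpoint_rms(n, 0.0)   # theory: sigma*sqrt(400) = 20
ar1 = endpoint_rms(n, 0.5)   # theory: 20*sqrt(1.5/0.5), about 34.6
```

Even with substantial autocorrelation (r = 0.5) the endpoint is nowhere near the σN that a fully coherent drift would give.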
<h4 id="H2">Extreme rounding</h4>At WUWT, a commenter Mark S Johnson was vigorously arguing, and I agree, that it was all about sampling error, and he said that you could actually round the heights of US adult males to the nearest foot, and still get an average to the nearest inch. I blanched at that, but he insisted, and he was right. This is <a href="https://wattsupwiththat.com/2017/10/14/durable-original-measurement-uncertainty/#comment-2640646">the test I described</a>:<br><i>I tried it, and Mark's method did still do well. I assumed heights normally distributed, mean 5.83, sd 0.4. Centered, the expected numbers were <pre><br />4.5 5.5 6.5 7.5<br />190 6456 3337 17<br /></pre>Weighted average is 5.818, so it is within nearest inch.</i><br><br>Mark suggested 5.83 was the correct mean, and I chose a sd of 0.4 to ensure that 7 ft was infrequent but not unknown. I was surprised that it did so well, but I had an idea why, and I was interested. Here's why it works, and what the limits are: <h4 id="H3">Binning and Poisson Summation.</h4>Numerically, the discrete separation of data into "bins" through rounding has a lot in common with integration formulae. The mean is ∫p dC, where C is the cumulative density function (cdf); the rounded version makes a sum Σ pΔC. There are a great variety of finite interval integration formulae, going back to Newton-Cotes, and the almost equally ancient <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula">Euler-MacLaurin</a> formula, which relates the sum of regularly spaced samples to the integral with error terms involving the powers of the spacing h and the end-point derivatives. The error here is polynomial in h, with order depending on the first derivative that is discontinuous at the end-point. But that raises an interesting question - what if there isn't an end-point, or if there is and all the derivatives are zero? It turns out that the approach to the integral, with diminishing h, can be faster than any power of h. 
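The height experiment itself is easy to replicate (a Python sketch of the same test; the numbers match the assumed N(5.83, 0.4) distribution, with 1 ft bins centred on half-feet, so anyone "five foot something" is recorded as 5.5 ft):

```python
import numpy as np

rng = np.random.default_rng(2)

# heights ~ N(5.83 ft, 0.4 ft), binned to the nearest foot-wide bin
h = rng.normal(5.83, 0.4, size=100000)
binned = np.floor(h) + 0.5   # bin centres at 4.5, 5.5, 6.5, 7.5 ft

print(h.mean(), binned.mean())               # ≈ 5.83 vs ≈ 5.82
print(abs(binned.mean() - h.mean()) * 12)    # discrepancy in inches, well under 1"
```

The binned mean lands close to 5.818 ft, reproducing the result quoted above, despite each individual height being recorded only to the nearest foot.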
The key is a very elegant and powerful formula from nearly 200 years ago - the <a href="https://en.wikipedia.org/wiki/Poisson_summation_formula">Poisson summation formula</a>. <br><br>At its simplest, this formula equates the scaled sum of equally spaced samples of a function with a similar sum of samples of the Fourier transform: <br>h Σ f(ih) = Σ F(k/h)<br>where F is the Fourier transform (FT) in the convention F(ω)=∫f(t) exp(-2iπωt) dt and h is the spacing. <br>Sums and integrals here are from -∞ to ∞, and summed over whatever looks like an index (i,k etc).<br>This is an integral approximation because F(0) is the integral, and F(1/h) will be the first error term. If F tapers rapidly, as it will when f is smooth, and if h is small, the error will also be small. <br><br>From above, mean M = Σ kh ΔC_k. <br>The cdf C does not work so well with the Fourier transform, because it doesn't go to zero. But if we let the mean μ of the probability density function (pdf) P vary, then the derivative<br>dM/dμ = Σ kh ΔP_k(μ), since P is the derivative of C <br>and with this shift, summing by parts<br>Σ kh ΔP_k(μ) = Σ hP(kh+μ) = Σ Þ(k/h)exp(2iπkμ/h) <br>using now Þ for FT of P<br><br>The practical use of these series is that the k=±1 terms are enough to describe the error, unless h is very large indeed. So in real terms, since for a pdf the integral is 1, then <br><br>dM/dμ = Σ hP(kh+μ) ≅ 1 + 2*Þ(1/h)*cos(2πμ/h) + ... <br><br> or, integrating, M ≅ μ + Þ(1/h)*h/π * sin(2πμ/h) <br><br>This tells us that the binned mean M is exact if the mean μ of the pdf P lies on an integer point of the binning (a ruler mark) and oscillates between, with another zero half-way. The maximum error is Þ(1/h)*h/π. <br><br>In the case of the normal distribution assumed here, Þ(1/h)*h/π = h/π exp(-2(πσ/h)²). And in our particular case, h=1, σ=0.4, so the maximum excursion (if the mean were 5.75ft) is 0.0135, or about 1/6" (inch). That's still pretty accurate, with a ruler with 1 foot spacings. How much further could we go? 
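The maximum-error formula can be checked numerically (a Python sketch, not from the post: it computes the binned mean exactly from the normal cdf, and sweeps the mean μ across one bin width to find the worst case):

```python
import numpy as np
from math import erf, exp, pi, sqrt

def binned_mean(mu, sigma=0.4, h=1.0):
    """Mean of a N(mu, sigma) distribution after binning to multiples of h."""
    k = np.arange(-20, 21) * h                       # bin centres
    cdf = lambda x: 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))
    weights = np.array([cdf(x + h / 2) - cdf(x - h / 2) for x in k])
    return (k * weights).sum()

sigma, h = 0.4, 1.0
mus = np.linspace(0, 1, 201)                         # sweep mean across one bin
max_err = max(abs(binned_mean(m, sigma, h) - m) for m in mus)

predicted = (h / pi) * exp(-2 * (pi * sigma / h) ** 2)
print(max_err, predicted)   # both ≈ 0.0135
```

The numerically found worst case agrees with the k=±1 term of the Poisson series to several decimal places, confirming that the higher harmonics are negligible at this spacing.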
With 2 ft spacing (for the same mean of 5.83 ft), it gets dramatically worse, about 1.75" error. Put more generally, binning with 2.5σ spacing is good, but 5σ starts losing accuracy. <br><br>I think that robustness is remarkable, but how much does it depend on the normal distribution? The Poisson formula gives a way of thinking about that. The FT of a Gaussian is another Gaussian, and tapers very fast. That is basically because the pdf is maximally smooth. If higher frequencies enter, then the tapering that governs convergence is not so fast, and if there is actually a discontinuous derivative, it reduces to polynomial order. And of course we never really know what the distribution is; it is generally inferred from the look of the histogram. However there are often physical reasons for it to be smooth. Or perhaps that overstates it - the most powerful argument is, why not? What physics would cause the distribution to have kinks? <br><br><br><br> Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com25tag:blogger.com,1999:blog-7729093380675162051.post-9691395136639830322017-10-18T08:38:00.002+11:002017-10-18T08:38:28.643+11:00GISS September down 0.04°C from August.GISS showed a <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">small decrease</a>, going from 0.84°C in August to 0.80°C in September (GISS report <a href="https://data.giss.nasa.gov/gistemp/news/20171017/">here</a>). It was the fourth warmest September in the record. That decrease is very similar to the <a href="https://moyhu.blogspot.com.au/2017/09/august-global-surface-temperature-down.html">0.06°C fall</a> in TempLS mesh. <br /><br />The overall pattern was similar to that in TempLS. Warm almost everywhere, especially across N America, S America and the Middle/Near East. Cool spots in W Europe and N central Siberia. <br /><br />As usual here, I will compare the GISS and previous TempLS plots below the jump. 
<br /><a name='more'></a><br />Here is GISS<br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/10/GISSsep.jpg" /><br /><br />And here is the TempLS spherical harmonics plot <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/10/map.png" /><br /><br /><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show adjusted data, and also different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. 
</div><br /><br /><br /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com1tag:blogger.com,1999:blog-7729093380675162051.post-37247727566900202442017-10-08T10:21:00.003+11:002017-10-08T10:21:50.713+11:00September global surface temperature down 0.06°C<a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">TempLS mesh</a> anomaly (1961-90 base) was down from 0.673°C in August to 0.613°C in September. This compares with the <a href="https://moyhu.blogspot.com.au/2017/10/september-ncepncar-global-anomaly-down.html">smaller drop</a> of 0.02°C in the NCEP/NCAR index, and a <a href="http://www.drroyspencer.com/2017/10/uah-global-temperature-update-for-september-2017-0-54-deg-c/">substantial rise</a> (0.12) in the UAH LT satellite index. UAH is up a lot in the last two months. <br /><br />Despite the fall, most of the world looked pretty warm. Warmth in Canada, S America, the Near East and China. A cool spot in Siberia, but warmth in Antarctica, which contrasts with the earlier NCEP/NCAR report, which showed predominant cold. Here is the temperature map: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/10/map.png" /><br /><a name='more'></a><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show adjusted data, and also different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. 
There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. </div><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com5tag:blogger.com,1999:blog-7729093380675162051.post-10618698876572075612017-10-03T06:21:00.001+11:002017-10-05T14:08:18.855+11:00September NCEP/NCAR global anomaly down 0.02°C from AugustIn the <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#NCAR">Moyhu NCEP/NCAR index</a>, the monthly reanalysis average declined from 0.337°C in August to 0.317°C in September, 2017. It was again a very varied month; it looked as if it would come out quite warm until a steep dip about the 23rd; that cool spell then lasted until the end of the month. <br><br>The main feature was cold in Antarctica, so again we can expect this to be strongly reflected in GISS and TempLS, and less in NOAA and HADCRUT. Elsewhere, cold in Central Russia, but warm in the west; fairly warm around the Arctic. 
<br><br><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/data/freq/days.png"></img><br><br><br><br><br><br> Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com6tag:blogger.com,1999:blog-7729093380675162051.post-26114893680065873662017-09-29T15:55:00.000+10:002017-09-30T14:59:56.108+10:00Nested gridding, Hadcrut, and Cowtan/Way .<div style="color: red;"><b>Update</b> I had made an error in coding for the HADCRUT/C&W example - see below. The agreement with C&W is now much improved.</div><br />In my <a href="https://moyhu.blogspot.com.au/2017/09/simple-use-of-complex-grid-earth.html">previous post</a>, I introduced the idea of hierarchical, or nested gridding. In earlier posts, eg <a href="https://moyhu.blogspot.com.au/2017/09/the-best-grid-for-earth-temperature.html">here</a> and <a href="https://moyhu.blogspot.com.au/2017/09/grids-platonic-solids-and-surface.html">here</a>, I had described using platonic solids as a basis for grids on the sphere that were reasonably uniform, and free of the pole singularities of latitude/longitude. I gave data files for icosahedral hexagon meshes of various degrees of resolution, usually proceeding by a factor of two in cell number or length scale. And in that previous post, I emphasised the simplicity of a scheme for working out which cell a point belongs to by finding the nearest centre point. I foreshadowed the idea of embedding each such grid in a coarser parent, with grid averaging proceeding downward, and using the progressive information to supply estimates for empty cells.<br /><br />The following graph from HADCRUT illustrates the problem. It shows July 2017 temperature anomalies on a 5°x5° grid, with colors for cells that have data, and white otherwise. They average the area colored, and omit the rest from the average. As I often argue, as a global estimate, this effectively replaces the rest by the average value. 
HADCRUT is aware of this, because they actually average by hemispheres, which means the infilling is done with the hemisphere average rather than global. As they point out, this has an important benefit in earlier years when the majority of missing cells were in the SH, which was also behaving differently, so the hemisphere average is more appropriate than global. On the right, I show the same figure, but this time with my crude coloring in (with Paint) of that hemisphere average. You can assess how appropriate the infill is:<br /><br /><table><tbody><tr><td><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/July17.png" width="400" /></td><td><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/July17painted.png" width="400" /></td></tr></tbody></table><br /><br />A much-discussed paper by <a href="https://moyhu.blogspot.com.au/2013/11/cowton-and-way-trends.html">Cowtan and Way 2013</a> noted that this process led to bias: the areas thus infilled tended not to behave like the average, but were warming faster, and this was underestimated, particularly since 2000, because of the Arctic. They described a number of remedies, and I'll concentrate on the use of kriging. This is a fairly elaborate geostatistical interpolation method. When applied, HADCRUT data-based trends increased to be more in line with other indices which did some degree of interpolation. <br /><br />I think the right way to look at this is getting infilling right. HADCRUT was on the right track in using hemisphere averages, but it should be much more local. Every missing cell should be assigned the best estimate based on local data. This is in the spirit of spatial averaging. The cells are chosen as regions of proximity to a finite number of measurement points, and are assigned an average from those points because of the proximity. Proximity does not end at an artificial cell boundary. 
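The point that omitting empty cells "effectively replaces the rest by the average value" is an algebraic identity of area-weighted averaging, easily checked (Python, with made-up cell values and areas, purely illustrative):

```python
import numpy as np

vals = np.array([1.2, 0.4, np.nan, 0.9, np.nan])   # nan marks an empty cell
area = np.array([1.0, 2.0, 1.5, 1.0, 0.5])         # cell areas

have = ~np.isnan(vals)
# average that simply omits the empty cells
avg_omit = np.sum(vals[have] * area[have]) / area[have].sum()

# average after explicitly infilling empty cells with that same average
filled = np.where(have, vals, avg_omit)
avg_fill = np.sum(filled * area) / area.sum()

print(avg_omit, avg_fill)   # identical
```

So the only question is whether the global (or hemisphere) average is really the best estimate for the missing regions; the arithmetic forces some infill value on you either way.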
<br /><br />In the previous post, I set up a grid averaging based on an inventory of about 11000 stations (including GHCN and ERSST) but integrated not temperature but a simple function sin(latitude)^2, which should give 1/3. I used averaging omitting empty cells, and showed that at coarse resolution the correct value was closely approximated, but this degraded with refinement, because of the accession of empty cells. I'll now complete that table using nested integration with a hexagonal grid. At each successive level, if a cell is empty, it is assigned the average value of the smallest cell from a previous integration that includes it. (<span style="color: blue;">I have fixed the which.min error here too; it made little difference).</span><br /><br /> <table><tbody><tr><td width="10"></td><td width="100">level</td><td width="100">Numcells</td><td width="120">Simple average</td><td width="120">Nested average</td></tr><tr><td></td><td>1<td>32<td>0.3292<td>0.3292 <tr><td></td><td>2<td>122<td>0.3311<td>0.3311 <tr><td></td><td>3<td>272<td>0.3275<td>0.3317 <tr><td></td><td>4<td>482<td>0.3256<td>0.3314 <tr><td></td><td>5<td>1082<td>0.3206<td>0.3317 <tr><td></td><td>6<td>1922<td>0.3167<td>0.332 <tr><td></td><td>7<td>3632<td>0.311<td>0.3313 <tr><td></td><td>8<td>7682<td>0.3096<td>0.3315</td></tr></tbody></table><br /><br />The simple average shows that there is an optimum; a grid fine enough to resolve the (small) variation, but coarse enough to have data in most cells. The function is smooth, so there is little penalty for too coarse, but a larger one for too fine, since the areas of empty cells coincide with the function peak at the poles. The merit of the nested average is that it removes this downside. Further refinement may not help very much, but it does no harm, because a near local value is always used. <br /><br />The actual coding for nested averaging is quite simple, and I'll give a more complete example below. 
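The inheritance idea can be illustrated with a hypothetical one-dimensional analogue (a Python toy, not the post's R code): "stations" cover only [0, 0.8] while the integrand x² peaks where there is no data, mimicking the empty polar cells; empty fine bins inherit their coarse parent's average.

```python
import numpy as np

rng = np.random.default_rng(3)

# sample points only on [0, 0.8]; true mean of x^2 over [0, 1] is 1/3
x = rng.uniform(0, 0.8, 300)
f = x ** 2

def bin_means(x, f, nbins):
    """Per-bin averages over [0, 1]; NaN where a bin has no data."""
    idx = np.minimum((x * nbins).astype(int), nbins - 1)
    sums = np.bincount(idx, weights=f, minlength=nbins)
    nums = np.bincount(idx, minlength=nbins)
    with np.errstate(invalid="ignore", divide="ignore"):
        return sums / nums

coarse = bin_means(x, f, 4)    # assumed: no empty cells at the coarsest level
fine = bin_means(x, f, 32)

simple = np.nanmean(fine)      # ignore empty cells: biased low
# nested: empty fine bins inherit the average of their coarse parent (8 fine per coarse)
nested = np.where(np.isnan(fine), coarse[np.arange(32) // 8], fine).mean()

print(simple, nested)          # nested is markedly closer to 1/3
```

The simple fine-grid average effectively assigns the overall mean to the data-free region near the peak, while the nested average at least uses the nearest coarse information there.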
<br /><h4>HADCRUT and Cowtan/Way</h4>Cowtan and Way thankfully released a <a href="http://www-users.york.ac.uk/~kdc3/papers/coverage2013/series.html">very complete data set</a> with their paper, so I'll redo their calculation (with kriging) with nested gridding and compare results. They used HADCRUT 4.1.1.0, data ending at end 2012. Here is a plot of results from 1980, with nested integration of the HADCRUT gridded data at centres (but on a hex grid). I'm showing every even step as hier1-4, with hier4 being the highest resolution at 7682 cells. All anomalies relative to 1961-90. <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/hadcwnested.png" width="1000" /><br /><div style="color: red;"><b>Update</b> I had made an error in coding for the HADCRUT/C&W example - see code. I had used which.min instead of which.max. This almost worked, because it placed locations in the cells on the opposite side of the globe, consistently. However, the result is now much more consistent with C&W. With refining, the integrals now approach from below, and also converge much more tightly.</div><br />The HADCRUT 4 published monthly average (V4.1.1.0) is given in red, and the Cowtan and Way Version 1 kriging in black. <strike>The nested integration makes even more difference than C&W, mainly in the time from 1995 to early 2000's. Otherwise, a A</strike>s with C&W, it adheres closely to HADCRUT in earlier years, when presumably there isn't much bias associated with the missing data. C&W focussed on the effect on OLS trends, particularly since 1/1997. 
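For reference, the trends quoted below are ordinary least squares slopes on the monthly series, scaled to °C/century; a minimal sketch of that calculation (Python, with made-up data rather than the HADCRUT series):

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical monthly anomaly series, 1997-2012 inclusive: 16 years = 192 months
t = 1997 + np.arange(192) / 12                            # decimal years
anom = 0.005 * (t - 1997) + rng.normal(0, 0.05, t.size)   # ~0.5 °C/century + noise

slope_per_year = np.polyfit(t, anom, 1)[0]                # OLS slope, °C/year
print(slope_per_year * 100)                               # trend in °C/century, ≈ 0.5
```

C&W's own headline numbers used an ARMA(1,1) noise model for the uncertainties, but the central slope estimate is still the least squares one.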
Here is a table, in °C/Century:<br /><br /><table><tbody><tr><td width="140"></td><td width="140">Trend 1997-2012</td><td width="140">Trend 1980-2012 </td></tr><tr><td>HAD 4</td><td>0.462</td><td>1.57 </td></tr><tr><td>Hier1</td><td>0.902</td><td>1.615 </td></tr><tr><td>Hier2</td><td>0.929</td><td>1.635 </td></tr><tr><td>Hier3</td><td>0.967</td><td>1.625 </td></tr><tr><td>Hier4</td><td>0.956</td><td>1.624 </td></tr><tr><td>C&W krig</td><td>0.97</td><td>1.689 </td></tr></tbody></table><br /><br />Convergence is very good to the C&W trend I calculate. In their paper, for 1997-2012 C&W give a trend of 1.08 °C/Cen (table III) <strike>which would agree very well with the nested results</strike>. C&W used ARMA(1,1) rather than OLS, but the discrepancy seems too large for that.<span style="color: blue;">Update: Kevin Cowtan has explained the difference in a comment below.</span><br /><br /><h4>Method and Code</h4>This is the code for the integration of the monthly sequence. I'll omit the reading of the initial files and the graphics, and assume that we start with the HADCRUT 4.1.1.0 gridded 1980-2012 data reorganised into an array had[72,36,12,33] (lon,lat,month,year). The hexmin[[]] lists are as described and <a href="https://moyhu.blogspot.com.au/2017/09/the-best-grid-for-earth-temperature.html">posted</a> previously. The 4 columns of $cells are the cell centres and areas (on sphere). The first section is just making pointer lists from the anomaly data into the grids, and from each grid into its parent. If you were doing this regularly, you would store the pointers and just re-use as needed, since it only has location data. The result is the gridpointer and datapointer lists. The code takes a few seconds. 
<br /><pre>monthave=array(NA,c(12,33,8)) #array for monthly averages <br />datapointer=gridpointer=cellarea=list(); g=0;<br />for(i in 1:8){ # making pointer lists for each grid level<br /> g0=g; # previous g<br /> g=as.matrix(hexmin[[i]]$cells); ng=nrow(g);<br /> cellarea[[i]]=g[,4]<br /> if(i>1){ # pointers to coarser grid i-1<br /> gp=rep(0,ng)<br /> for(j in 1:ng)gp[j]=which.max(g0[,1:3]%*%g[j,1:3])<br /> gridpointer[[i]]=gp<br /> }<br /> y=inv; ny=nrow(y); dp=rep(0,ny) # y is list of HAD grid centres in 3D cartesian<br /> for(j in 1:ny) dp[j]=which.max(g[,1:3]%*%y[j,])<br /> datapointer[[i]]=dp # datapointers into grid i<br />}<br /></pre><span style="color: magenta;">Update: Note the use of which.max here, which is the key instruction locating points in cells. I had originally used which.min, which actually almost worked, because it places points on the opposite side of the globe, and symmetry nearly makes that OK. But not quite. Although the idea is to minimise the distance, that is implemented as maximising the scalar product.</span><br /><br />The main data loop just loops over months, counting and adding the data in each cell (using datapointer), forming a cell average. It then inherits values for empty cells from the parent average vector, using gridpointer to find the match, so at each level ave is complete. There is an assumption that the coarsest level has no empty cells. It is then combined with area weighting (cellarea, from hexmin) for the monthly average. Then on to the next month. The result is the array monthave[month, year, level] of global averages. 
<br /><pre>for(I in 1:33)for(J in 1:12){ # looping over months in data from 1980<br /> if(J==1)print(Sys.time())<br /> ave=rep(NA,8) # initialising<br /> #tab=data.frame(level=ave,Numcells=ave,average=ave)<br /> g=0<br /> for(K in 1:8){ # over resolution levels<br /> ave0=ave<br /> integrand=c(had[,,J,I]) # Set integrand to HAD 4 for the month (year index I: 1980-2012) <br /> n=length(integrand) # number of HAD grid centres<br /> area=cellarea[[K]]; <br /> cellsum=cellnum=rep(0,length(area)) # initialising<br /> dp=datapointer[[K]]<br /> for(i in 1:n){ # loop over HAD grid centres<br /> ii=integrand[i]<br /> if(is.na(ii))next # no data in cell<br /> j=dp[i]<br /> cellsum[j]=cellsum[j]+ii<br /> cellnum[j]=cellnum[j]+1<br /> }<br /> j=which(cellnum==0) # cells without data<br /> gp=gridpointer[[K]]<br /> if(K>1)for(i in j){cellnum[i]=1;cellsum[i]=ave0[gp[i]]}<br /> ave=cellsum/cellnum # cell averages<br /> Ave=sum(ave*area)/sum(area) # global average (area-weighted)<br /> if(is.na(Ave))stop("A cell inherits no data")<br /> monthave[J,I,K] = round(Ave,4) # weighted average<br /> }<br />}# end I,J<br /></pre><br /><h4>Data</h4>Moyhuhexmin has the hex cell data and was given in the earlier post. I have put a new zipped ascii version <a href="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/Moyhu_hexmin_ascii.zip">here</a> <br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com8tag:blogger.com,1999:blog-7729093380675162051.post-19970571305189461292017-09-28T14:55:00.000+10:002017-09-28T15:00:07.085+10:00Simple use of a complex grid - Earth temperature.This is a follow-up to my <a href="https://moyhu.blogspot.com.au/2017/09/the-best-grid-for-earth-temperature.html">last post</a>, which refined ideas from an <a href="https://moyhu.blogspot.com.au/2017/09/grids-platonic-solids-and-surface.html">earlier post</a> on using platonic solids as a basis for grids on the sphere that were reasonably uniform, and free of the pole singularities of latitude/longitude. 
I gave data files for use, as I did with an <a href="https://moyhu.blogspot.com.au/2017/06/cubing-sphere.html">earlier post</a> on a special case, the cubed sphere.<br /><br />The geometry involved can be complicated, but a point I made in that last post was that users need never deal with the complexity. I gave a minimal set of data for grids of varying resolution, which basically recorded the mid-points of the cells, and their area. That is all you need to make use of them. <br /><br />I should add that I don't think the hexagon method I recommend is a critical improvement over, say, the cubed sphere method. Both work well. But since this method of application is the same for any variant, just using cell centres and areas in the same way, there is no cost in using the optimal. <br /><br />In this post, I'd like to demonstrate that with an example, with R code for definiteness. I'd also like to expand on the basic idea, which is that near-regular grids of any complexity have the Voronoi property, which is that cells are the domain of points closest to the mid-points. That is why mid-point location is sufficient information. I can extend that to embedding grids in grids of lower resolution; I will recommend a method of hierarchical integration in which empty cells inherit estimates from the most refined grid that has information for their area. I think this is the most logical answer to the empty cell problem. <br /><br />In the demonstration, I will take the inventory of stations that I use for <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">TempLS</a>. It has all GHCN V3 stations together with a selection of ERSST cells, treated as stations located at grid centres. It has 10997 locations. I will show how to bin these, and use the result to do a single integration of data on those points. <br /><br />I start with calling the data for the inventory ("invo.sav") (posted in the dataset for cubed sphere above). 
Then I call the Moyhuhexmin data that I posted in the last post. I am going to do the integration over all 8 resolution levels, so I loop over variable K, collecting results in ave[]: <br /><pre>load("invo.sav")<br />load("Moyhuhexmin.sav")<br />ave=rep(NA,8) # initialising<br />for(K in 1:8){<br /> h=hexmin[[K]] # dataframe for level K<br /> g=as.matrix(h$cells) # 3D coords of centres, and areas<br /> y=invo$z; n=nrow(y); # invo$z are stations; <br /> pointer=rep(0,n)<br /></pre>This is just gathering the information. g and y are the two sets of 3D cartesian coordinates on the sphere to work with. Next I locate y in the cells which have centre g: <br /><pre> pointer=rep(0,n)<br /> for(i in 1:n) pointer[i]=which.max(g[,1:3]%*%y[i,]) # closest g to y: largest scalar product <br /></pre>If this were a standalone calculation, I wouldn't have done this as a separate loop. But the idea is that, once I have found the pointers, I would store them as a property of the stations, and never have to do this again. Not that it is such a pain; although I discussed last time a multi-stage process, first identifying the face and then searching that subset, in fact with near 11000 nodes and 7682 cells (highest resolution), the time taken is still negligible - maybe 2 seconds on my PC. <br /><br />Now to do an actual integration. I'll use a simple known function, where one would normally use temperature anomalies assigned to a subset of stations y. I'll use the y-coord in my 3D, which is sin(latitude), and since that has zero integral, I'll integrate the square. The answer should be 1/3. <br /><pre>integrand=y[,2]^2<br />area=g[,4]; <br />cellsum=cellnum=rep(0,nrow(g)) # initialising<br />for(i in 1:n){<br /> j=pointer[i]<br /> cellsum[j]=cellsum[j]+integrand[i]<br /> cellnum[j]=cellnum[j]+1<br />}<br /></pre>area[] is just the fourth column of data from hexmin; it is the area of each cell on the sphere. 
cellsum[] will be the sum of integrand values in the cell, and cellnum[] the count (for averaging). This is where the pointers are used. The final stage is the weighted summation: <br /><pre>o=cellnum>0 # cells with data<br />ave[K] = sum(cellsum[o]*area[o]/cellnum[o])/sum(area[o]) # weighted average<br />} # end of K loop<br /></pre>This is, I hope, fairly obvious R stuff. o[] marks cells with data which are the only ones included in the sum. area[o] are the weights, and to get the averages I divide by the sum of weights. This is just conventional grid averaging. <br /><h4>Integration results</h4>Here are the results of grid integration of sin^2(lat) at various resolution levels: <br /><br /><table><tbody><tr><td width="10"></td><td width="100">level</td><td width="120">Number of cells</td><td width="100">average </td></tr><tr><td></td><td>1</td><td>32</td><td>0.3292 </td></tr><tr><td></td><td>2</td><td>122</td><td>0.3311 </td></tr><tr><td></td><td>3</td><td>272</td><td>0.3275 </td></tr><tr><td></td><td>4</td><td>482</td><td>0.3256 </td></tr><tr><td></td><td>5</td><td>1082</td><td>0.3206 </td></tr><tr><td></td><td>6</td><td>1922</td><td>0.3167 </td></tr><tr><td></td><td>7</td><td>3632</td><td>0.3108 </td></tr><tr><td></td><td>8</td><td>7682</td><td>0.3096 </td></tr></tbody></table>The exact answer is 1/3. This was reached at fairly coarse resolution, which is adequate for this very smooth function. At finer resolution, empty cells are an increasing problem. Simple averaging ignoring empty cells effectively assigns to those cells the average of the rest. Because the integrand has peak value 1 at the poles, where many cells are empty, those cells are assigned a value of about 1/3, when they really should be 1. That is why the integral diminishes with increasing resolution. It is also why the shrinkage tapers off, because once most cells in the region are empty, further refinement can't make much difference. 
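The exact value is easy to confirm: with area element proportional to cos(lat), the mean of sin²(lat) over the sphere is ∫sin²(lat)cos(lat)dlat / ∫cos(lat)dlat = 1/3. A quick numerical check (Python, purely illustrative):

```python
import numpy as np

# area-weighted mean of sin^2(lat) over the sphere (longitude integrates out)
lat = np.linspace(-np.pi / 2, np.pi / 2, 100001)
w = np.cos(lat)                                   # sphere area weight per unit latitude
mean_sin2 = np.sum(np.sin(lat) ** 2 * w) / np.sum(w)
print(mean_sin2)                                  # ≈ 1/3
```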
<br /><h4>Empty cells and HADCRUT</h4>This is the problem that <a href="https://moyhu.blogspot.com.au/2013/11/cowton-and-way-trends.html">Cowtan and Way 2013</a> studied with <a href="https://www.metoffice.gov.uk/hadobs/hadcrut4/">HADCRUT</a>. HADCRUT averages hemispheres separately, so they effectively infill empty cells with hemisphere averages. But Arctic areas especially are warming faster than average, and HADCRUT tends to miss this. C&W tried various methods of interpolating, particularly polar values, and got what many thought was an improvement, more in line with other indices. I showed <a href="https://moyhu.blogspot.com.au/2013/11/coverage-hadcrut-4-and-trends.html">at the time</a> that just averaging by latitude bands went a fair way in the same direction. <br /><br />With the new grid methods here, that can be done more systematically. The Voronoi based matching can be used to embed grids in grids of lower resolution but fewer empty cells. Integration can be done starting with a coarse grid, and then going to higher resolution. Infilling of an empty cell can be done with the best value from the hierarchy. <br /><br />I use an alternative diffusion based interpolation as one of the <a href="https://moyhu.blogspot.com.au/2015/10/new-integration-methods-for-global.html">four methods</a> for TempLS. It works very well, and gives results similar to the other two of the three best (node-based triangular mesh and spherical harmonics). I have tried variants of the hierarchical method, with similar effect. <br /><h4>Next</h4>In the next post, I will check out the hierarchical method applied to this simple example, and also to the HADCRUT 4 gridded version. 
I'm hoping for a better match with Cowtan and Way.<br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com0tag:blogger.com,1999:blog-7729093380675162051.post-769485428459176792017-09-26T08:24:00.001+10:002017-09-26T21:05:28.663+10:00The best grid for Earth temperature calculation.<a href="https://moyhu.blogspot.com.au/2017/09/grids-platonic-solids-and-surface.html">Earlier this month</a>, I wrote about the general ideas of gridding, and how the conventional latitude/longitude grids were much inferior to grids that could be derived from various platonic solids. The uses of gridding in calculating temperature (or other field variables) on a sphere are <br /><ol><li>To gather like information into cells of known area </li><li> To form an area weighted sum or average, representative of the whole sphere </li><li> a necessary requirement is that it is feasible to work out in which cell an arbitrary location belongs </li></ol>So a good grid must have cells small enough that variation within them has little effect on the result ("like"), but large enough that they do significant gathering. It is not much use having a grid where most cells are empty of data. This leads to two criteria for cells that balance these needs:<br /><ul><li>The cells should be of approximately equal area.</li><li>The cells should be compact, so that cells of a given area can maximise "likeness".</li></ul>Lat/lon fails because:<br /><ul><li>cells near poles are much smaller</li><li>the cells become long and thin, with poor compactness</li></ul>I showed platonic solid meshes with triangles and squares that are much less distorted, and with more even area distribution, Clive Best, too, has been looking at <a href="http://clivebest.com/?p=8014">icosahedra</a>. I have also been looking at ways of improving area uniformity. But I haven't been thinking much about compactness. The ideal there is a circle. 
Triangles deviate most; rectangles are better, if nearly square. But better still are regular hexagons, and that is my topic here. <br /><br />With possibly complex grids, practical usability is important. You don't want to keep having to deal with complicated geometry. With the cubed sphere, I posted <a href="https://moyhu.blogspot.com.au/2017/06/cubing-sphere.html">here</a> a set of data which enables routine use with just lookup. It includes a set of meshes with resolution increasing by factors of 2. The nodes have been remapped to optimise area uniformity. There is a lookup process so that arbitrary points can be assigned to cells. But there is also a list showing in which cell the stations of the inventory that I use are found. So although the stations that report vary each month, there is a simple geometry-free grid average process: <br /><ul><li>For each month, sort the stations by cell label</li><li>Work out cell averages, then look up cell areas for weighted sum.</li></ul>I want to do that here for what I think is an optimal grid. <br /><br />The optimal grid is derived from the triangle grid for icosahedra, although it can also be derived from the dual dodecahedron. If the division allows, the triangles can be gathered into hexagons, except near vertices of the icosahedron, where pentagons will emerge. This works provided the triangle faces are initially trisected, and then further divided. There will be 12 pentagons, and the rest hexagons. I'll describe the mapping for uniform sphere surface area in an appendix. <br /><h4>Lookup</h4>I have realised that the cell finding process can be done simply and generally. Most regular or near-regular meshes are also <a href="https://en.wikipedia.org/wiki/Voronoi_diagram">Voronoi nets</a> relative to their centres. That is, a cell includes the points closest to its centre, and not those closer to any other centre. So you can find the cell for a point by simply looking for the closest cell centre. 
For a sphere that is even easier; it is the centre for which the scalar product (cos angle) of 3D coordinates is greatest. <br /><br />If you have a lot of points to locate, this can still be time-consuming, if mechanical. But it can be sped up. You can look first for the closest face centre (of the icosahedron). Then you can just check the cells within that face. That reduces the time by a factor of about 20. <br /><h4>The grids</h4>Here is a WebGL depiction of the results. I'm using the <a href="https://moyhu.blogspot.com.au/p/moyhu-webgl-earth-facility.html">WebGL facility, V2.1</a>. The sphere is a trackball. You can choose the degree of resolution with the radio buttons on the right; hex122, for example, means a total of 122 cells. They progress with factors of approx 2. The checkboxes at the top let you hide various objects. There are separate objects for red, yellow and green, but if you hide them all, you see just the mesh. The colors are designed to help see the icosahedral pattern. Pentagons are red, surrounded by a ring of yellow. <br /><br /><div id="PxBody"></div><script src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/pages/webgl/MoyJSlib.js" type="text/javascript"> </script><script src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/hex.js" type="text/javascript"> </script><script src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/pages/webgl/Map.js" type="text/javascript"> </script><script src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/pages/webgl/MoyGLV2.js" type="text/javascript"></script><br /><br />The grid imperfections now are just a bit of distortion near the pentagons. This is partly because I have forced them to expand to have similar area to the hexagons. For grid use, the penalty is just a small loss of compactness. <br /><h4>Data</h4>The data is in the form of an R save file, for which I use the suffix .sav. There are two. 
One <a href="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/hexmin.sav">here</a> is a minimal set for use. It includes the cell centre locations, areas, and a listing of the cells within each face, for faster search. That is all you need for routine use. There is a data-frame with this information for each of about 8 levels of resolution, with maximum 7682 cells (hex7682). There is a doc string. <br /><br />The longer data set is <a href="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/hex.sav">here</a>. This has the same levels, but for each there are dataframes for cells, nodes, and the underlying triangular mesh. A dataframe is just R for a matrix that can have columns of various types, suitably labelled. It gives all the nodes of the triangular mesh, with various details. There are pointers from one set to another. There is also a doc string with details. <br /><h4>Appendix - equalising area</h4>As I've occasionally mentioned, I've spent a lot of time on this interesting math problem. The basic mapping from platonic solid to sphere is radial projection. But that distorts areas that were uniform on the solid. Areas near the face centres are projected further (thinking of the solid as within the sphere) and grow. There is also, near the edges, an effect due to the face plane slanting differently to the sphere (like your shadow gets smaller when the sun is high). These distortions get worse when the solid is further from spherical. <br /><br />I counter this with a mapping which moves the mesh on the face towards the face centre. I initially used various polynomials. But now I find it best to group the nodes by symmetry - subsets that have to move in step. Each has one (if on edge) or two degrees of freedom. Then the areas are also constrained by symmetry, and can be grouped. I use a Newton-Raphson method (actually secant) to move the nodes so that the triangles have area closest to the ideal, which is the appropriate fraction of the sphere. 
There are fewer degrees of freedom than areas, so it is a kind of regression calculation. It is best least squares, not exact. You can check the variation in areas; it gets down to a few percent. <br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com12tag:blogger.com,1999:blog-7729093380675162051.post-87976014852743488572017-09-19T03:16:00.000+10:002017-09-19T08:21:49.945+10:00GISS August up 0.01°C from July.GISS showed a <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">very small rise</a>, going from 0.84°C in July to 0.85°C in August (GISS report <a href="https://data.giss.nasa.gov/gistemp/news/20170918/">here</a>). TempLS mesh showed a very slight fall, <a href="https://moyhu.blogspot.com.au/2017/09/august-global-surface-temperature-down.html">which I posted</a> at 0.013°C, although with further data it is now almost no change at all. I see that GISS is now using ERSST V5, as TempLS does. <br /><br />The overall pattern was similar to that in TempLS. Warm almost everywhere, with a big band across mid-latitude Eurasia and N Africa. Cool in Eastern US and high Arctic, which may be responsible for the slowdown in ice melting. <br /><br />As usual here, I will compare the GISS and previous TempLS plots below the jump. <br /><a name='more'></a><br />Here is GISS:<br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/GISSaug.jpg" /><br /><br />And here is the TempLS spherical harmonics plot: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/map.png" /><br /><br /><div style="color: #aa0000;">This post is part of a series that has now run for six years. 
The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show results with adjusted data, and also with different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. </div><br /><br /><br /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com8tag:blogger.com,1999:blog-7729093380675162051.post-3879555759150840552017-09-18T17:35:00.000+10:002017-09-18T19:39:25.098+10:00Grids, Platonic solids, and surface temperature (and GCMs)This follows a series, predecessor <a href="https://moyhu.blogspot.com.au/2017/08/temperature-averaging-and-integrtaion.html">here</a>, in which I am looking at ways of dealing with surface variation of Earth's temperature, particularly global averaging. 
I have written a few posts on the cubed sphere, eg <a href="https://moyhu.blogspot.com.au/2017/06/cubing-sphere.html">here</a>. I have also shown some examples of using an icosahedron, as <a href="https://moyhu.blogspot.com.au/2017/04/spherical-harmonics-movie.html">here</a>. <a href="http://clivebest.com/blog/?p=8014">Clive Best</a> has been looking at similar matters, including use of an icosahedron. For the moment, I'd like to write rather generally about grids and ways of mapping the sphere. <br /><h4>Why grids?</h4>For surface temperature, grids are commonly used to form local averages from data, which can then be combined with area weighting to average globally. I have described the general considerations <a href="https://moyhu.blogspot.com.au/2017/08/temperature-averaging-and-integrtaion.html">here</a>. All that is really required is any subdivision of reasonable (compact) shapes. They should be small enough that the effect of variation within is minimal, but large enough that there is enough data for a reasonable estimate. So they should be of reasonably equal area. <br /><br />The other requirement, important later for some purposes, is that any point on the sphere can be associated with the cell that contains it. For a regular grid like lat/lon, this is easy, and just involves conversion to integers. So if each data point is located, and each cell area is known, that is all that is needed. As a practical matter, once the cell locations are known for an inventory of stations, the task of integrating the subset for a given month is just a look-up, whatever the geometry. <br /><br />I described <a href="https://moyhu.blogspot.com.au/search?q=coverage">way back</a> a fairly simple subdivision scheme that works for this. Equal latitude bands are selected. Then each band is divided as nearly as possible into square elements (on the sphere). The formula for this division can then be used to locate arbitrary points within the cells. 
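That banded scheme is easy to sketch. Below is a minimal Python illustration (the function names are mine, and this is not the actual TempLS code, which is in R): bands of equal latitude width each get roughly 2N·cos(lat) cells, so cells come out nearly square, and locating a point is pure integer arithmetic.

```python
import math

def band_grid(nbands):
    """Cells per equal-width latitude band, sized so cells are roughly square."""
    counts = []
    for b in range(nbands):
        mid = -math.pi / 2 + (b + 0.5) * math.pi / nbands  # band mid-latitude
        counts.append(max(1, round(2 * nbands * math.cos(mid))))
    return counts

def cell_of(lat_deg, lon_deg, counts):
    """Flat index of the cell containing the point (lat, lon), in degrees."""
    nbands = len(counts)
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg) % (2 * math.pi)
    b = min(int((lat + math.pi / 2) / math.pi * nbands), nbands - 1)   # band
    j = min(int(lon / (2 * math.pi) * counts[b]), counts[b] - 1)       # cell in band
    return sum(counts[:b]) + j
```

With 36 bands this gives 72 cells in the bands nearest the equator and only 3 in the polar bands, so the cells stay compact everywhere while the lookup remains a couple of truncations.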
I think this is all that is required for surface averaging. <br /><br />However, for anything involving partial differentiation, such as finite element or GCM modelling, more is required. Fluxes between cells need to be measured, so they have to line up. Nodes have to be related to each cell they abut. This suggests regular grids. In my case, I sometimes want to use a kind of diffusive process to estimate what is happening in empty cells. Again, regular is better. <br /><h4>Platonic solids</h4>Something that looks a bit like a sphere and is easy to fit with a regular grid is a <a href="https://en.wikipedia.org/wiki/Platonic_solid">Platonic solid</a>. There are five of them - I'll show the Wiki diagram:<br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/plato1.png" /><br /><br />Regular means that each side has the same length, and each face is a congruent regular polygon. The reason why there are only five is seen if you analyse what has to happen at vertices (Wiki): <br /><br /><a name='more'></a><br /><br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/plato2.png" /><br /><br />There are 8 shown, but the three with lilac background have defect zero, and are the polygons you can use to grid the plane. The rest correspond to what you would see if you unfolded the polyhedron. The size of defect says how much distortion results, and is to be expected if you map the sphere onto the solid. <br /><br />The simplest mapping is a projection. You can enclose any of these solids with a sphere that goes through all the vertices. The sphere is mapped by following, for any point, the radius until the surface is reached, and inverted by going the other way. The relation between defect and distortion is that on the sphere the angles will be stretched to add to 360°. 
The effects of this can be seen on my <a href="https://moyhu.blogspot.com.au/2017/06/world-map-equal-area-projection-more.html">cubed maps.</a><br /><br />So the cube (4,3) has a moderate defect of 90°. The icosahedron (3,5) has a smaller (better) defect of 60°. The dodecahedron (5,3) is even better, but the resulting pentagons are harder to subdivide, which we'll want to do. Tetrahedron and octahedron have no advantage to compensate the high defect. <br /><h4>Subdividing and equal area</h4>Even an icosahedron is likely too coarse a gridding on its own, even though the faces are guaranteed to be of equal area. The point of regular faces is that they can themselves be gridded, with squares for the cube, or triangles for the icosahedron. You can't grid a plane with pentagons, which is the problem for the dodecahedron. But it can be triangulated. So the inquiry now is which solid gives an equal area mapping, with an eye on the requirement that the mapping should also permit arbitrary points to be located in mapped cells. <br /><br />There are actually two ways to proceed. The way I used in the cube data structure I posted <a href="https://moyhu.blogspot.com.au/2017/06/cubing-sphere.html">here</a>, and the way Clive Best divided the icosahedron as linked above, is to successively bisect, in a way shown below, projecting at each stage onto the sphere. In general that is the better way; I'll describe some particular issues below. The second is to divide the polygon faces as planes, and then project. That leads to more unequal areas, but it can be remapped. The second method simplifies a basic requirement, that an arbitrary point on the surface can have its enclosing cell identified (providing the remapping can be inverted). <br /><h4>A special case - the cylinder</h4>Familiar flat earth projections are actually projections onto the cylinder. You can, at a stretch, regard this as a degenerate Platonic solid, in which the faces are two-sided polygons, ie lines (longitude). 
The defect is 360°. The aim is to map onto a finite sheet of paper, so some mapping is always subsequently applied to the y-axis (y=tan(θ), latitude). The famous one is <a href="https://en.wikipedia.org/wiki/Mercator_projection#Derivation_of_the_Mercator_projection">Mercator's</a>; here the y axis is mapped by y=log(tan(π/4+θ/2)). This makes the mapping conformal - locally each feature is in proportion. But the area ratio varies greatly, so Greenland is huge. An <a href="https://en.wikipedia.org/wiki/Cylindrical_equal-area_projection">equal area map</a> has y=sin(θ); this minimises distortion (of which there is much overall) near the equator. You can scale to move this minimum zone to convenient latitudes. <br /><br />I've actually written these as modified mappings of θ, but you can regard them as projections and then re-mappings. I've introduced the idea because it can be applied to mappings onto solids too, also to improve area ratio. It's a more natural way of looking at it there, because the projection is already a reasonable mapping. I'll come back to this. <br /><h4>First subdivision step</h4>I'm now thinking about cubes, dodecahedra and icosahedra. The following image shows how the bisection process described is initiated for the three solids one might consider: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/plato3.png" /><br /><br />The subdivision then proceeds by dividing triangles as shown for the icosahedron, or squares as shown for the cube. A key thing now is that while the subdivisions are equal area on the plane surface, they generally aren't as mapped onto the sphere. In the case of the icosahedron, in the first subdivision, the centre triangle projects to an area of 0.180 on the unit sphere, while the others are 0.149. Further subdivision creates further inequality, although the first step does about half the eventual damage. 
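Those first-step areas are easy to verify numerically. The sketch below is my own check, not code from the post: it builds one icosahedral face from the standard vertices (cyclic permutations of (0, ±1, ±φ)), bisects its edges, projects everything onto the unit sphere, and measures each spherical triangle with the Van Oosterom-Strackee solid angle formula.

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # golden ratio

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def mid(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def sph_area(a, b, c):
    """Spherical triangle area (solid angle) via Van Oosterom-Strackee."""
    a, b, c = unit(a), unit(b), unit(c)
    triple = (a[0] * (b[1] * c[2] - b[2] * c[1])
              - a[1] * (b[0] * c[2] - b[2] * c[0])
              + a[2] * (b[0] * c[1] - b[1] * c[0]))
    dots = (1 + sum(p * q for p, q in zip(a, b))
            + sum(p * q for p, q in zip(b, c))
            + sum(p * q for p, q in zip(c, a)))
    return abs(2 * math.atan2(triple, dots))

# One icosahedron face and its edge midpoints
A, B, C = (0, 1, PHI), (0, -1, PHI), (PHI, 0, 1)
AB, BC, CA = mid(A, B), mid(B, C), mid(C, A)

centre = sph_area(AB, BC, CA)
corners = [sph_area(A, AB, CA), sph_area(B, AB, BC), sph_area(C, BC, CA)]
```

The four pieces sum to 4π/20, the exact spherical area of a face; the centre comes out at about 0.180 and each corner at about 0.149, a ratio of roughly 6:5.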
<br /><br />However, the cube and dodecahedron have an interesting feature which, I think, more than compensates for the reduced number of sides. In the first subdivision step, the new polygons are assured equal area by symmetry. So at that stage the cube has 24 equal components, and the dodec has sixty. I'll list the pros and cons of each: <br /><ul><li> Icosahedron - starts with 20 equilateral cells. When first divided, the cells are still equilateral, but unequal area on the sphere, ratio about 6:5. </li><li> Cube - after first subdivision, 24 "squares" of equal area, although the nodes are not quite coplanar. Squares are better than triangles because they are more compact. That means that if you have a number of points within a cell of given area, they are generally closer to each other, which is what gridding is trying to ensure. It also means that a cell of given area sits closer to the sphere surface, so there is less distortion on projecting. </li><li> Dodecahedron - after first subdivision, there are 60 cells of equal area. They aren't quite equilateral; they are isosceles with angles on the surface of 72°, 60°, 60° (they don't have to add to 180°; in fact the excess is the area). Thereafter subdivision proceeds much as for the icosahedron. </li></ul>On this basis, I think the dodec wins, followed by cube. However, next comes the possibility of re-mapping. <br /><h4>Re-mapping</h4>I have spent an inordinate amount of time on this in recent weeks. You can re-map after the bisection as above, but it might as well be done subdividing the plane surface, and then projecting at the end. The idea is that you divide, then develop a parametric scheme for re-mapping to equalise area. I use a Newton-Raphson method (actually secant) to do this. For the cube array, as I stored data and functions linked <a href="https://moyhu.blogspot.com.au/2017/06/cubing-sphere.html">here</a>, I used a polynomial form that preserved the required symmetries. 
For the icosahedron I reasoned thus. You can take just one face, and identify six axes of symmetry, dividing into six equal sections. What you do to the nodes in just one of these determines what happens everywhere, and so solving the permitted perturbations that nearly equalise area (least squares, basically Newton-Raphson), gives the required mapping. <br /><br />There is still the pesky issue of how to put a point on the sphere into its cell. With the polynomial mapping, that can be inverted, usually also requiring Newton-Raphson. But there is another, more general method. <br /><br />The more elementary question is: how do you locate a point in a regular grid? Say, (37.3, 124.4) in a 2x2 grid. It's just a matter of truncating each coordinate to the nearest even integer below. Or to the nearest odd integer, if you mark the cell by its mid-point. It's easier to think about it if you rescale to a unit grid. Triangles seem harder, but you can do the same in homogeneous coordinates. Simplest is to express everything in the homogeneous coords of one base cell. Then omitting the fractional parts tells you which cell it is in. <br /><br />If the grid is re-mapped, you can proceed iteratively. Choose a base cell, and locate the point relative to that. Then re-do, with that cell as base. If it is still in, we have it; otherwise repeat, with the revised cell as base. Because the re-mappings are relatively minor changes, this converges quickly. <br /><h4>Next</h4>This was all meant to be preparatory to posting about the icosahedron in the same way I did for the cube <a href="https://moyhu.blogspot.com.au/2017/06/cubing-sphere.html">here</a>, with data structures for six or so stages of bisection. Then I realised that dodecahedra actually looked better, and even cubes. I'll probably still post the icosahedron data, and maybe dodec. 
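For a planar triangle, the homogeneous-coordinate test above amounts to computing barycentric coordinates; the point is in the cell exactly when all three lie in [0, 1]. A minimal sketch (assumed illustration, not the post's code):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of 2-D point p in triangle (a, b, c)."""
    # Solve p - a = lb*(b - a) + lc*(c - a) by Cramer's rule
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    lb = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    lc = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return (1 - lb - lc, lb, lc)

def inside(p, a, b, c):
    """True when p lies in the triangle (all coordinates non-negative)."""
    return all(t >= 0 for t in barycentric(p, a, b, c))
```

In the iterative scheme, a negative coordinate also tells you which edge to step across to pick the next base cell.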
<br /><br />I should add that, while one can seek to optimise equal area mapping, in practice any of these solids will be adequate for surface temperature averaging.<br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com14tag:blogger.com,1999:blog-7729093380675162051.post-82877246317982374522017-09-08T11:01:00.000+10:002017-09-08T11:01:15.498+10:00August global surface temperature down 0.013°C<a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">TempLS mesh</a> anomaly (1961-90 base) was down from 0.69°C in July to 0.677°C in August. This very small drop compares with the <a href="https://moyhu.blogspot.com/2017/09/august-ncepncar-global-anomaly-up-0038.html"> small rise</a> of 0.038°C in the NCEP/NCAR index, and a <a href="http://www.drroyspencer.com/2017/09/uah-global-temperature-update-for-august-2017-0-41-deg-c/">bigger rise</a> (0.12) in the UAH LT satellite index. The August value is less than August 2015 or 2016, but higher than 2014. <br /><br />There was a moderate fall in Antarctica, which as usual affects TempLS mesh and GISS more than others. I'd expect NOAA and HADCRUT to show increases for August. Regionally, the Old World was mostly warm; US was cold central and East, but N Canada was warm. S America mostly warm (still awaiting a few countries): <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/map.png" /><br /><a name='more'></a><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show results with adjusted data, and also with different integration methods. 
There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. </div><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com10tag:blogger.com,1999:blog-7729093380675162051.post-23445070132420721542017-09-07T05:04:00.002+10:002017-09-07T05:04:19.568+10:00 August NCEP/NCAR global anomaly up 0.038°CIn the <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#NCAR">Moyhu NCEP/NCAR index</a>, the monthly reanalysis average rose from 0.299°C in July to 0.337°C in August, 2017. The results were late this month; for a few days NCEP/NCAR was not posting new results. It was a very up and down month; a dip at the start, then quite a long warm period, and then a steep dip at the end. Now that a few days in September are also available, there is some recovery from that late dip. August 2017 was a bit cooler than Aug 2016, but warmer than 2015. <br /><br />It was cool in Eastern US, but warm in the west and further north. Cool in Atlantic Europe, but warm further east. Mostly cool in Antarctica. 
<br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/data/freq/days.png" /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com0tag:blogger.com,1999:blog-7729093380675162051.post-6888818767581484472017-08-30T12:51:00.000+10:002017-08-30T12:52:54.464+10:00Gulf SST - warm before Harvey, cool after.I maintain <a href="http://www.moyhu.blogspot.com.au/p/blog-page.html">a page </a> showing high resolution (1/4 degree) AVHRR SST data from NOAA - in detail: <i>"NOAA High Resolution SST data provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their Web site at http://www.esrl.noaa.gov/psd/"</i>. It renders it in WebGL and goes back with daily maps for a decade or so, then less frequently. It shows anomalies relative to 1971-2000. I have been tracking the effect of Hurricane Harvey. It was said to have grown rapidly because of warm Gulf waters; they were warm, but not exceptionally, as this extract from 15 August shows: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/aug15.png" /><br /><br />It remained much the same to 24th August, when Harvey grew rapidly, and gained Hurricane status late in the day. But by 25th, there is some sign of cooling. 26th (not shown) was about the same. But by 27th, there was marked cooling, and by 28th more so. The cooling seems to show up rather belatedly along the path of the hurricane. <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/harvey.png" /><br /><br />Here is the latest day at higher resolution: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/aug28a.png" /><br /><br /><a href="https://moyhu.blogspot.com.au/2013/02/hurricanes-and-sst-movies.html">A few years ago</a>, I developed a set of movies based on recent hurricanes of the time, showing their locations and SST at the time. Some showed a big effect, some not so much. 
Harvey was interesting in that it covered a fairly confined area of ocean, and moved slowly. <br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com22tag:blogger.com,1999:blog-7729093380675162051.post-10111027499379554092017-08-24T19:41:00.000+10:002017-08-26T15:45:29.461+10:00Surface temperature sparsity error modesThis post follows last week's on <a href="http://moyhu.blogspot.com/2017/08/temperature-averaging-and-integrtaion.html">temperature integration methods</a>. I described a general method of regression fitting of classes of integrable functions, of which the most used to date is <a href="https://moyhu.blogspot.com.au/2015/09/spherical-harmonics.html">spherical harmonics (SH)</a>. I noted that the regression involved inverting a matrix HH consisting of all the scalar product integrals of the functions in the class. With perfect integration this matrix would be a unit matrix, but as the SH functions become more oscillatory, the integration method loses resolution, and the representation degrades with the condition number of the matrix HH. The condition number is the ratio of largest eigenvalue to smallest, so what is happening is that some eigenvalues become small, and the matrix is near singular. That means that the corresponding eigenvector might have a large multiplier in the representation. <br><br>I also use fitted SH for plotting each month's temperature. I described some of the practicalities <a href="https://moyhu.blogspot.com.au/2010/04/ver-2-regional-spatial-variation.html">here</a> (using different functions). Increasing the number of functions improves resolution, but when HH becomes too ill-conditioned, artefacts intrude, which are multiples of these near null eigenvectors. <br><br>In the previous post, I discussed how the condition of HH depends on the scalar product integral. Since the SH are ideally orthogonal, better integration improves HH.
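The effect is easy to demonstrate in one dimension. The sketch below is my illustration, not TempLS (which works with SH on the sphere): it uses normalised Legendre polynomials on [-1, 1] as a stand-in. Under perfect integration HH would be the unit matrix; under an equal-weight sum over scattered points (the OLS style) its condition number grows as functions are added.

```python
import numpy as np
from numpy.polynomial import legendre

def gram_cond(x, w, nfuns):
    """Condition number of HH, the matrix of scalar products of normalised
    Legendre polynomials under the quadrature sum_k w_k f(x_k) g(x_k)."""
    H = np.empty((nfuns, len(x)))
    for i in range(nfuns):
        c = np.zeros(i + 1)
        c[i] = np.sqrt((2 * i + 1) / 2.0)  # integral of P_i^2 over [-1,1] is 2/(2i+1)
        H[i] = legendre.legval(x, c)
    HH = (H * w) @ H.T                     # HH_ij = sum_k w_k phi_i(x_k) phi_j(x_k)
    return np.linalg.cond(HH)

# Unevenly scattered "stations" on [-1, 1] with equal weights (the OLS style)
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, 40))
w = np.full(40, 2.0 / 40)

c_low, c_high = gram_cond(x, w, 4), gram_cond(x, w, 16)
```

Because the low-order HH is a leading principal submatrix of the high-order one, Cauchy interlacing guarantees the condition number can only grow with the number of functions; with only 40 scattered points it grows quickly.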
I have been taking advantage of that in recent TempLS to increase the order of SH to 16, which implies 289 functions, using mesh integration. That might be overdoing it - I'm checking. <br><br>In this post, I will display those troublesome eigen modes. They are of interest because they are associated with regions of sparse coverage, and give a quantification of how much they matter. Another thing quantified is how much the integration method affects the condition number for a given order of SH. I'll develop that further in another post. <br><br>I took N=12 (169 functions), and looked at TempLS stations (GHCN+ERSST) which reported in May 2017. Considerations on choice of N are that if too low, the condition number is good, and the minimum modes don't show features emphasising sparsity. If the number is too high, each region like Antarctica can have several split modes, which confuses the issue. <br><br>The integration methods I chose were mostly described <a href="https://moyhu.blogspot.com.au/2015/10/new-integration-methods-for-global.html">here</a>:<ul><li>OLS - just the ordinary scalar product of the values <li>grid - integration by summing on a 5x5° latitude/longitude grid. This was the earliest TempLS method, and is used by HADCRUT. <li>infill - empty cells are infilled with an average of nearby values. Now the grid is a <a href="https://moyhu.blogspot.com.au/2017/06/cubing-sphere.html">cubed sphere</a> with 1536 cells. <li>mesh - my generally preferred method using an irregular triangular grid (convex hull of stations) with linear interpolation. </ul>OLS sounds bad, but works quite well at moderate resolution, and was used in TempLS until very recently. <br><br>I'll show the plots of the modes as an active lat/lon plot below, and then the OLS versions in WebGL, which gives a much better idea of the shapes. But first I'll show a table of the tapering eigenvalues, numbering from smallest up. 
They are scaled so that the maximum is 1, so the reciprocal of the lowest is the condition number. <br><table style="color:blue"><tr><td width=80> <td width=80> OLS<td width=80> grid<td width=80> infilled<td width=80> mesh <tr><td>Eigen1<td>0.0211<td>0.0147<td>0.0695<td>0.135 <tr><td>Eigen2<td>0.0369<td>0.0275<td>0.138<td>0.229 <tr><td>Eigen3<td>0.0423<td>0.0469<td>0.212<td>0.283 <tr><td>Eigen4<td>0.0572<td>0.0499<td>0.244<td>0.329 <tr><td>Eigen5<td>0.084<td>0.089<td>0.248<td>0.461 <tr><td>Eigen6<td>0.104<td>0.107<td>0.373<td>0.535 <tr><td>Eigen7<td>0.108<td>0.146<td>0.406<td>0.571 <tr><td>Eigen8<td>0.124<td>0.164<td>0.429<td>0.619 </table><div>And here is a graph of the whole sequence, now largest first: <br><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/eigens.png" /><br><br>The hierarchy of condition numbers is interesting. I had expected that it would go in the order of the columns, and so it does until near the end. Then mesh drops below infilled grid, and OLS below grid, for the smallest eigenvalues. I think what determines this is the weighting of the nodes in the sparse areas. For grid, this is not high, because each just gets the area of its cell. For both infilled and mesh, the weight rises with the area, and apparently with infilled, more so. <br><br><a name='more'></a><br>Here is an active graph to show the errant modes. You can cycle through "Style", which means style of integration (grid, mesh etc) and mode, starting from 1 (buttons top right). <br><br><div id="Yxe1" style="position:absolute"></div><div style="height:550px;"></div><br></div><br>It's dominated by Antarctica; the lowest modes focus there, with some Arctic activity too, and it isn't for a while that modes bob up in Africa, with some effect in S America. The weakest style (OLS) is almost all polar in the first 9 modes, while mesh starts showing Africa from about mode 4 up, and later Brazil shows up. <br><br>Here is the WebGL plot - I show just the mesh style. 
It gives a better proportion for the polar behaviour, and shows finer features elsewhere. It is the usual trackball, with radio buttons for the modes. Dots are the stations. <br><br><div id="PxBody" /></div><!--This is a generic world WebGL plotting facility, coded in JS by Nick Stokes, 11 Jan 2014.--><script type="text/javascript" src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/pages/webgl/MoyJSlib.js"></script><script type="text/javascript" src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/pages/webgl/Map.js"></script><script type="text/javascript" src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/eigen.js"></script><script type="text/javascript" src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/pages/webgl/MoyGLV2.1.js"></script></html><br><br>Next post will take this further. I'll do a more systematic look at which styles work best in which circumstances. The next main interest is whether I can get better resolution by restricting to a space without the problem nodes. In principle, one could take a very large collection of SH, and collect the eigenfunctions, which are truly orthogonal with respect to the integration style. A subset with moderate eigenvalues would still have a large orthogonal basis. 
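The mechanics behind those eigenvalue tables can be sketched in a few lines. This is a minimal illustration, not the TempLS code: numpy is assumed, the real SH basis is stood in for by low-degree polynomial harmonics (1, x, y, z and the degree-2 terms), and uneven station coverage is mimicked by sampling one hemisphere densely and the other sparsely. The weighted Gram matrix AA is formed at the sample points; its smallest eigenvalue sets the condition number, and the corresponding eigenvector is the least-resolved mode.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_band(n, zlo, zhi):
    # n points, uniform in area, on the spherical band zlo < z < zhi
    z = rng.uniform(zlo, zhi, n)
    lon = rng.uniform(0, 2*np.pi, n)
    r = np.sqrt(1 - z**2)
    return np.column_stack([r*np.cos(lon), r*np.sin(lon), z])

# Dense "northern" coverage, sparse "southern" - a crude stand-in for
# the sparse Antarctic coverage discussed in the post
pts = np.vstack([sample_band(400, 0, 1), sample_band(40, -1, 0)])
x, y, z = pts.T

# Real spherical harmonics up to degree 2, written as polynomials
A = np.column_stack([np.ones_like(x), x, y, z,
                     x*y, x*z, y*z, x*x - y*y, 3*z*z - 1])

w = np.full(len(pts), 1.0/len(pts))   # uniform ("OLS"-style) weights
AA = A.T @ (w[:, None] * A)           # weighted Gram matrix
evals, evecs = np.linalg.eigh(AA)     # eigenvalues in ascending order
cond = evals[-1] / evals[0]           # condition number
worst_mode = evecs[:, 0]              # least-resolved combination of SH
```

With evenly distributed points and good weights the Gram matrix stays close to diagonal and the condition number modest; thinning one region's sample drives the lowest eigenvalues down, which is the kind of effect tabulated above.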
<br><br><script type="text/javascript">function YxInit(){ var G={} MoyJSlib(G) //eval(G.var) var cr=G.cr var i,j,k,px,q,r,s,t,P,x=[0,0],dir="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/"; function C(event){ var i,m,n,t,y; t=event.target;y=[4,7]; m=t.m; n=m[0] x[n]+=m[1]+y[n]; x[n]=x[n]%y[n]; s=dir+"wm/wm"+["O","g","i","m"][x[0]]+(x[1]+1)+".png" px.src=s } P=document.getElementById("Yxe1") P.style="position:absolute;height:400px" px=cr(P,"img") px.src=dir+"wm/wmO1.png" px.style="width:800px;height:500px; border: 2px solid red" q=cr(P,"table") q.style="position:absolute;left:680px;top:0px;border: 2px solid blue" for(i=0;i<2;i++){r=cr(q,"tr"); for(j=0;j<3;j++){ s=cr(q,"td"); if(j==1){t=cr(s,"span");G.iH(t,["Style","Mode"][i])}else{ t=cr(s,"button");t.innerHTML=["<","",">"][j];t.m=[i,j-1]; t.onclick=C } } } } YxInit() </script><br>Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com9tag:blogger.com,1999:blog-7729093380675162051.post-79378877970188122822017-08-17T17:39:00.002+10:002017-08-17T17:49:10.256+10:00Temperature averaging and integration - the basics <meta charset="UTF-8">I write a lot about spatial integration, which is at the heart of global temperature averaging. I'll write more here about the principles involved. But I'm also writing to place in context methods I use in TempLS, which I think are an advance on what is currently usual. I last wrote a comparison of methods <a href="https://moyhu.blogspot.com.au/2015/10/new-integration-methods-for-global.html">in 2015 here</a>, which I plan to update in a sequel. Some issues here arose in a <a href="https://climateaudit.org/2017/05/18/how-dependent-are-gistemp-trends-on-the-gridding-radius-used/#comment-772475">discussion </a> at Climate Audit. <br><br>It's a long post, so I'll include a table of contents. I want to start from first principles and make some math connections. 
I'll use paler colors for the bits that are more mathy or that are just to make the logic connect, but which could be skipped. <br><ul><li> <a href="#I1">Basics - averaging temperature and integration</a><li> <a href="#I2">Averaging and integration.</a><li> <a href="#I3">Integration - theory</a><li> <a href="#I4">Numerical integration - sampling and cells</a><li> <a href="#I5">Triangular mesh</a><li> <a href="#I6">Integration by function fitting</a><li> <a href="#I7">Fitting - weighted least squares</a><li> <a href="#I8">Basis functions - grid</a><li> <a href="#I9">Basis functions - mesh</a><li> <a href="#I10">Basis functions - spherical harmonics</a><li> <a href="#I11">Basis functions - radial basis functions</a><li> <a href="#I12">Basis functions - higher order finite element</a><li> <a href="#I13">Next in the series.</a></ul><a name='more'></a><h4 id="I1">Basics - averaging temperature.</h4>Some major scientific organisations track the global temperature monthly, eg <a href="https://data.giss.nasa.gov/gistemp/">GISS</a>, <a href="https://www.ncdc.noaa.gov/climate-information/analyses/monthly-global-climate-reports">NOAA NCEI</a>, <a href="http://www.metoffice.gov.uk/hadobs/hadcrut4/">HADCRUT 4</a>, <a href="http://berkeleyearth.org/land-and-ocean-data/">BEST</a>. What they actually publish is global temperature average anomaly, made up of land station measurements and sea surface temperatures. I write often about anomalies, eg <a href="https://moyhu.blogspot.com.au/2017/01/global-anomaly-spatial-sampling-error.html">here</a>. They are data associated with locations, formed by subtracting an expected value for the time of year. Anomalies are averaged; it isn't an anomaly of an average - a vital distinction. Anomalies must be created before averaging. <br><br>I also do this, with <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">TempLS</a>. 
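That order of operations - form each station's anomaly first, then average - matters, and a toy calculation with made-up numbers shows why:

```python
# Two hypothetical stations with very different climatologies.
# Each anomaly is the observation minus that station's own normal.
stations = {
    "tropical": {"normal": 26.0, "obs": 26.5},   # 0.5 above normal
    "polar":    {"normal": -15.0, "obs": -14.2}, # 0.8 above normal
}

anomalies = [s["obs"] - s["normal"] for s in stations.values()]
avg_anomaly = sum(anomalies) / len(anomalies)   # a sensible 0.65

# Averaging raw temperatures instead mostly measures climatology,
# and jumps around whenever the station mix changes
raw_avg = sum(s["obs"] for s in stations.values()) / len(stations)
```

The anomaly average is stable under changes in which stations report; the raw average is dominated by whether warm or cold climates happen to be sampled.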
Global average anomaly uses two calculations - an average over time, to get normals, and an average of anomalies (using normals) over space (globe). It's important that the normals don't get confounded with global change; this is often handled by restricting the anomaly to a common base period, but TempLS uses an iterative process. <h4 id="I2">Averaging and integration.</h4>People think of averaging as just adding N numbers and dividing by N. I often talk of weighted averaging - Σ wₖxₖ / Σ wₖ (x data, w weights). Dividing by the sum of weights ensures that the average of 1 is 1; a more general way of implementing this is just to divide by the result of applying to 1 whatever you did to x. <br><br>But the point of averaging is usually not to get just the average of the numbers you fed in, but an estimate of some population mean, using those numbers as a sample. So averaging some stations over Earth is intended to give the mean for Earth as a whole, including the points where you didn't measure. So you'd hope that if you chose a different set of points, you'd still get much the same answer. This needs some care. Just averaging a set of stations is usually not good enough. You need some kind of area weighting to even out representation. For example, most data has many samples in USA, and few in Africa. But you don't want the result to be just USA. Area-weighted averaging is better thought of as integration. <br><br>Let me give an example. Suppose you have a big pile of sand on a hard level surface, and you want to know how much you have - ie volume. Suppose you know the area - perhaps it is in a kind of sandpit. You have a graduated probe so that you can work out the depth at points. If you can work out the average depth, you would multiply that by the area to get the volume. Suppose you have a number of sample depths, perhaps at scattered locations, and you want to calculate the average. <br><br>The average depth you want is volume/area. That is what the calculation should reveal. 
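As a sketch of that sand-pile calculation (hypothetical depths, and hypothetical patch areas, assuming each probe has already been assigned a patch of the pit):

```python
# Scattered probe depths, three clustered on the heap and one off it.
depths = [0.80, 0.90, 0.85, 0.20]   # metres
areas  = [1.0, 1.0, 1.0, 7.0]       # m^2 of pit each probe represents

# Area-weighted average depth, and hence the volume
avg_depth = sum(d*a for d, a in zip(depths, areas)) / sum(areas)
volume = avg_depth * sum(areas)

# The plain mean over-weights the densely probed heap
plain_mean = sum(depths) / len(depths)
```

The plain mean (0.6875 m here) badly overestimates the true area-weighted average, just as averaging stations without area weights over-represents the USA.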
The process to get it is numerical integration, which I'll describe. <h4 id="I3">Integration - theory</h4> <table style="color:#cc66aa"><tr><td>A rigorous approach to integration by <a href="https://en.wikipedia.org/wiki/Riemann_integral">Riemann</a> in about 1854 is often cited. He showed that if you approximated an integral on a line segment by dividing it into ever smaller segments, not necessarily of equal size, and estimated the function on each by a value within, then the sum of those contributions would tend to a unique limit, which is the integral. Here is a picture from Wiki that I use to illustrate. <br><br> The idea of subdividing works in higher dimensions as well, basically because integration is additive. And the key thing is that there is that unique limit. Numerically, the task is to get as close to it as possible given a finite number of points. Riemann didn't have to worry about refined estimation, because he could postulate an infinite process. But in practice, it pays to use a better estimate of the integral in each segment.</td><td><img src="https://upload.wikimedia.org/wikipedia/commons/c/cd/Riemann_integral_irregular.gif" style="width:300px"></img></td></tr></table><br><br> <div style="color:#cc66aa">Riemann dealt with analytic functions, that can be calculated at any prescribed point. With a field variable like temperature, we don't have that. We have just a finite number of measurements. But the method is the same. The region should be divided into small areas, and an integral estimated on each using the data in it or nearby. </div><br>The key takeaway from Riemann theory is that there is a unique limit, no matter how the region is divided. That is the integral we seek, or the volume, in the case of the sand pile. <h4 id="I4">Numerical integration - sampling and cells</h4><div style="color:#aa4444">The idea of integration far predates Riemann; it was originally seen as "antiderivative". 
A basic and early family of formulae is called Newton-Cotes. Formulae like these use derivatives (perhaps implied) to improve the accuracy of integration within sub-intervals. "Quadrature points" can be used to improve accuracy (Gauss quadrature). But there is an alternative view more appropriate to empirical data like temperature anomaly which is of inferring from sample values. This acknowledges that you can't fully control the points where function values are known. </div><br>So one style of numerical integration, in the Riemann spirit, is to divide the region into small areas, and estimate the integral of each as that area times the average of the sample values within. This is the method used by most of the majors. They use a regular grid in latitude/longitude, and average the temperatures measured within. The cells aren't equal area because of the way longitudes gather toward the poles; that has to be allowed for. There is of course a problem when cells don't have any measures at all. They are normally just omitted, with consequences that I will describe. GISS uses a variant of this with cells of larger longitude range near the poles. For TempLS, it is what I call the <a href="https://moyhu.blogspot.com.au/2015/09/better-gridding-for-global-temperature.html">grid method</a>. It was my sole method for years, and I still publish the result, mainly because it aligns well with NOAA and HADCRUT, also gridded. <br><br>Whenever you average a set of data omitting numbers representing part of the data, you effectively assume that the omitted part has the same behaviour as the average of what you included. You can see this because if you replaced the missing data with that average, you'd get the same result. Then you can decide whether that is really what you believe. I have described <a href="https://moyhu.blogspot.com.au/2014/07/infilling-graphics-version.html">here</a> how it often isn't, and what you should do about it (infilling). 
Here that usually means that empty cells should be estimated from some set of nearby stations, possibly already expressed as cell averages. <br><br>This method can be re-expressed as a weighted sum where the weights are just cell area divided by number of datapoints in the cell, although it gets more complicated if you also use the data to fill empty cells (the stations used get upweighted). <br><br>While this method is usually implemented with a latitude/longitude grid, that isn't the best, because there is a big variation in cell size. I have explored better grids based on mapping the sphere onto a gridded Platonic polyhedron - <a href="https://moyhu.blogspot.com.au/2015/09/better-gridding-for-global-temperature.html">cube</a> or <a href="https://moyhu.blogspot.com.au/2017/04/icosahedral-earth.html">icosahedron</a>. But the principle is the same. <h4 id="I5">Triangular mesh</h4> My preferred integration has been by irregular triangular mesh. This <a href="https://moyhu.blogspot.com.au/2017/04/global-60-stations-and-coverage.html">recent post</a> shows various meshes and explores how many nodes are really needed, but the full mesh is what the usual TempLS reports are based on. The mesh is formed as a convex hull, which is what you would get if you drew a thin membrane over the measuring points in space. For the resulting triangles, there is a node at each corner, and to the values there you can fit a planar surface. This can be integrated, and the result is the average of the three values times the area of the triangle. When you work out the algebra, the weight of each station reading is one third of the area of all the triangles that it is a corner of. <br><br><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/usmesh.png" width=700></img><h4 id="I6">Integration by function fitting</h4> There is a more general way of thinking about integration, with more methods resulting. This is where the data is approximated by a function whose integral is known. 
The usual way of doing this is by taking a set of basis functions with known integral, and linearly combining them with coefficients that are optimised to fit. <br><br> You can think of gridding as a special case. The functions are just those that take constant value 1 on the designated areas. The combination is a piecewise constant function. <h4 id="I7">Fitting - weighted least squares</h4> <div style="color:#cc66aa"></div>Optimising the coefficients means finding ones that minimise a weighted sum of squares of the differences between the fit function and the observations - called residuals. In math, that is <br> S = Σ wₖrₖrₖ where residual rₖ = yₖ-Σ bₘfₘ(xₖ)<br>Here yₖ and xₖ are the value and location; fₘ are the basis functions and bₘ the coefficients. Since there are usually a lot more locations than coefficients, this can't be made zero, but it can be minimised. The weights w should again, as above, generally be area weights appropriate for integration. The reason is that S is a penalty on the size of residuals, and the penalty should be spatially uniform, rather than varying with the distribution of sampling points. <br><br>By differentiation,<br>∂S/∂bₙ=0= Σₖ Aₖₙwₖ(yₖ-ΣₘAₖₘbₘ) where Aₖₙ=fₙ(xₖ) <br>or Σₘ AAₘₙ bₘ = Σₖ Aₖₙwₖyₖ, where AAₘₙ = Σₖ wₖAₖₘAₖₙ <br>In matrix notation, this is AA b = A<sup>T</sup>Wy, where AA = A<sup>T</sup>WA and W is the diagonal matrix of weights <br>So AA is symmetric positive definite, and has to be inverted to get the coefficients b. In fact, AA is the matrix of weighted scalar products of the basis functions. This least squares fitting can also be seen as regression. It also goes by the name of a <a href="https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_pseudoinverse">pseudo-inverse</a>. <br><br> This then determines what sort of basis functions f should be sought. AA is generally large, so for inversion should be as <a href="https://en.wikipedia.org/wiki/Condition_number">well-conditioned</a> as possible. This means that the rows should be nearly orthogonal. 
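The derivation can be checked numerically. This is a minimal sketch assuming numpy, with a made-up one-dimensional basis rather than functions on the sphere; it forms AA and solves the normal equations AA b = AᵀWy directly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
xk = rng.uniform(0, 2*np.pi, n)          # sample locations
# Basis functions f_m evaluated at the locations: A[k, m] = f_m(x_k)
A = np.column_stack([np.ones(n), np.sin(xk), np.cos(xk)])

b_true = np.array([0.3, 1.0, -0.5])
y = A @ b_true + 0.01*rng.standard_normal(n)   # noisy observations
w = np.full(n, 1.0/n)                    # weights (uniform here)

W = np.diag(w)
AA = A.T @ W @ A                         # the matrix called AA above
b = np.linalg.solve(AA, A.T @ W @ y)     # fitted coefficients
cond = np.linalg.cond(AA)                # small when basis ~ orthogonal
```

Here the basis is nearly orthogonal under the weights, so AA is close to diagonal, well-conditioned, and the recovered coefficients are close to the true ones.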
That is normally achieved by making AA as nearly as possible diagonal. Since AA is the matrix of scalar products, that means that the basis functions should be as nearly orthogonal as possible relative to the weights w. If these are appropriate for integration, that means that a good choice of functions f will be analytic functions orthogonal on the sphere. <br><br> I'll now review my various kinds of integration in terms of that orthogonality. <br><br> <h4 id="I8">Basis functions - grid.</h4> As said above, the grid basis functions are just functions that are 1 on their cells, and zero elsewhere. "Cell" could be a more complex region, like combinations of cells. They are guaranteed to be orthogonal, since they don't overlap. And within each cell, the simple average creates no further interaction. So AA is actually diagonal, and inversion is trivial. <br><br>That sounds ideal, but the problem is that the basis is discontinuous, whereas there is every reason to expect the temperature anomaly to be continuous. You can see how this is a problem, along with missing areas, in this NOAA map for May 2017: <br><br><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/noaamap.gif" width=700></img><br><br>I described <a href="https://moyhu.blogspot.com.au/2015/09/better-gridding-for-global-temperature.html">here</a> a method for overcoming the empty cell problem by systematically assigning neighboring values. You can regard this also as just carving up those areas and adjoining them to cells that do have data, so it isn't really a different method in principle. The paper of <a href="https://moyhu.blogspot.com.au/2013/11/cowton-and-way-trends.html">Cowtan and Way</a> showed another way of overcoming the empty cell problem. <br><br><h4 id="I9">Basis functions - mesh.</h4> In this case, the sum squares minimisation is not needed, since there are exactly as many basis functions as data, and the coefficients are just the data values. This is standard finite element method. 
The basis functions are continuous - pyramids each with unit peak at a data point, sloping to zero on the opposite edges of the adjacent triangles. This combination of a continuous approximation with a diagonal AA is why I prefer this method, although the requirement to calculate the mesh is a cost. You can see the resulting visualisation at <a href="https://moyhu.blogspot.com.au/p/blog-page_24.html">this WebGL page.</a> It shows a map of the mesh approximation to TempLS residuals, for each month back to 1900. <h4 id="I10">Basis functions - spherical harmonics.</h4> This has been my next favorite method. Spherical harmonics (SH) are described <a href="https://moyhu.blogspot.com.au/2015/09/spherical-harmonics.html">here</a>, and visualised <a href="https://moyhu.blogspot.com.au/2017/04/spherical-harmonics-movie.html">here</a>. I compared the various methods in a post <a href="https://moyhu.blogspot.com.au/2015/10/new-integration-methods-for-global.html">here</a>. At that stage I was using uniform weights w, and even then the method performed quite well. But it can be improved by using weights for any integration method. Even simple grid weights work very well. After fitting, the next requirement is to integrate the basis functions. In this case it is simple; their integrals are all zero except for the first, which is a constant function. So for the integral, all you need is the first coefficient. <br><br>I also use the fit to display the temperature field with each temperature report. SH are smooth, and so is the fit. Here is a recent example: <br><br><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/06/map.png" width=700></img><br><br>This is the first case where we have a non-trivial AA to invert. I tend to use 100-200 functions, so the matrix can be inverted directly, and it is very quick in R. For a bigger matrix I would use a conjugate gradient method, which would be fine while the matrix is well-conditioned. 
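The remark that the integral needs only the first coefficient can be verified in a small sketch (numpy assumed; the constant function plus the degree-1 harmonics x, y, z stand in for a full SH basis, with points sampled uniformly on the sphere):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
# Uniform sampling on the sphere: z uniform in [-1,1], longitude uniform
z = rng.uniform(-1, 1, n)
lon = rng.uniform(0, 2*np.pi, n)
r = np.sqrt(1 - z**2)
x, y = r*np.cos(lon), r*np.sin(lon)

# Basis: constant plus degree-1 harmonics, whose sphere means are zero
A = np.column_stack([np.ones(n), x, y, z])
field = 0.7 + 2.0*z                  # toy anomaly field, true global mean 0.7

w = np.full(n, 1.0/n)                # uniform weights for a uniform sample
AA = A.T @ (w[:, None]*A)
b = np.linalg.solve(AA, A.T @ (w*field))
global_mean = b[0]                   # the integral needs only this
```

Because every basis function except the constant averages to zero over the sphere, the fitted first coefficient is the global mean, with no further integration needed.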
<br><br>But it is conditioning that is the limitation. The basis functions have undulations, and at higher order, these start to be inadequately resolved by the spacing of data points. That means that, in terms of the approximate integration, they are no longer nearly orthogonal. Eventually AA becomes nearly singular, which means that there are combinations that are not penalised by the minimising of S. These grow and produce artefacts. SH have two integer parameters l and m, which roughly determine how many periods of oscillation you have in latitude and longitude. With uniform w, I can allow l up to 12, which means 169 functions. Recently with w right for grid integration, I can go to 16, with 289 functions, but with some indication of artefacts. I'll post a detailed analysis of this soon. Generally the artefacts are composed of higher order SH, so don't have much effect on the integral. <h4 id="I11">Basis functions - radial basis functions.</h4> This is a new development. RBFs are functions that are radially symmetric about a centre point, and fade from 1 toward 0 over a distance scaled to the spacing of data. They are smooth - I am using gaussians (normal distribution), although it is more common to use functions that go to zero in finite distance. The idea is that these functions are close to orthogonal if spaced so only their tails overlap. <br><br>It's not necessarily much different to SH; the attraction is that there is flexibility to increase the spread of the functions where data is scarce. <h4 id="I12">Basis functions - higher order finite element.</h4> I used above the simplest element - linear triangles. There is a whole hierarchy that you can use with extra parameters, for example, quadratic functions on triangles with mid-side nodes. In these implementations, the mesh nodes will no longer be data nodes, so there is non-trivial fitting within elements. It is more complicated than, say, RBF. 
However there is a powerful technique in FEM called hp-fitting. Here the order of polynomial fit functions is increased as the size of the elements decreases. In theory there is a very rapid gain in accuracy, limited by discontinuity at boundaries, which we don't have. However, there is the problem of having to fit the polynomials within each element, so I don't know how that will work out. <h4 id="I13">Next in the series.</h4> I've tried to lay out the theory in this post for reference. I'll do a post on more detail of spherical harmonics, calculating condition numbers and trying to optimise the weights and number of bases. I'll also do an updated post on a comparison of the methods. If RBFs really prove successful, I'll write about that too. <br><br><br><br><br><br> Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com9tag:blogger.com,1999:blog-7729093380675162051.post-1257424088665169372017-08-16T04:10:00.000+10:002017-08-16T04:10:00.639+10:00GISS July up 0.15°C from June.GISS was up from 0.68°C in June to 0.83°C in July. It was the warmest July in the record, though the <a href="https://data.giss.nasa.gov/gistemp/news/20170815/">GISS report</a> says it "statistically tied" with 2016 (0.82). The increase was similar to the <a href="http://moyhu.blogspot.com/2017/07/june-global-surface-temperature-down-012.html">0.12°C rise in TempLS</a>. <br /><br />The overall pattern was similar to that in TempLS. Warm almost everywhere, with a big band across mid-latitude Eurasia and N Africa. Cool in parts of the Arctic, which may save some ice. <br /><br />I'll show the plot of recent months on the same 1981-2010 base, mainly because they are currently unusually unanimous. The group HADCRUT/NOAA/TempLS_grid tend to be less sensitive to the Antarctic variations that have dominated recent months, and I'd expect them to be not much changed in July also, which would leave them also in much the same place. 
<br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/sixmo.png" width="600" /><br /><br />Recently, August reanalysis has been <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#NCAR">unusually warm</a>. As usual here, I will compare the GISS and previous TempLS plots below the jump. <br /><a name='more'></a><br />Here is GISS<br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/GISSjul.png" /><br /><br />And here is the TempLS spherical harmonics plot <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/map.png" /><br /><br /><div style="color: #aa0000;">This post is part of a series that has now run for six years. The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show data with adjusted, and also with different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. 
The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. </div><br /><br /><br /><br /><br /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com6tag:blogger.com,1999:blog-7729093380675162051.post-79581791086762206962017-08-08T04:04:00.000+10:002017-08-08T16:30:29.786+10:00July global surface temperature up 0.11°C<a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">TempLS mesh</a> anomaly (1961-90 base) was up from 0.568°C in June to 0.679°C in July. This follows the <a href="http://moyhu.blogspot.com/2017/08/july-ncepncar-up-0058.html"> smaller rise</a> of 0.06°C in the NCEP/NCAR index, and a <a href="http://www.drroyspencer.com/2017/08/uah-global-temperature-update-for-july-2017-0-28-deg-c/">similar rise</a> (0.07) in the UAH LT satellite index. The July value is just a whisker short of July 2016, which was a record warm month. With results for Mexico and Peru still to come, that could change. <br /><br />Again the dominant change was in Antarctica, from very cold in June to just above average in July. On this basis, I'd expect GISS to also rise; NOAA and HADCRUT not so much. Otherwise as with the <a href="http://moyhu.blogspot.com/2017/08/july-ncepncar-up-0058.html"> reanalysis</a>, Middle East and around Mongolia were warm, also Australia and Western USA. Nowhere very hot or cold. Here is the map:<br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/map.png" /><br /><br /><a name='more'></a><div style="color: #aa0000;">This post is part of a series that has now run for six years. 
The TempLS mesh data is reported <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">here</a>, and the recent history of monthly readings is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#L1">here</a>. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will show data with adjusted, and also with different integration methods. There is an interactive graph using 1981-2010 base period <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#Drag">here</a> which you can use to show different periods, or compare with other indices. There is a general guide to TempLS <a href="https://moyhu.blogspot.com.au/p/a-guide-to-global-temperature-program.html">here</a>. <br /><br />The reporting cycle starts with a report of the daily reanalysis index on about the 4th of the month. The next post is this, the TempLS report, usually about the 8th. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph uses a spherical harmonics fit to the TempLS mesh residuals; the residuals are displayed more directly using a triangular grid in a better resolved WebGL plot <a href="http://www.moyhu.blogspot.com.au/p/blog-page_24.html">here</a>. </div><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com3tag:blogger.com,1999:blog-7729093380675162051.post-66991687288977825502017-08-03T04:40:00.001+10:002017-08-03T04:40:13.259+10:00July NCEP/NCAR up 0.058°CIn the <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#NCAR">Moyhu NCEP/NCAR index</a>, the monthly reanalysis average rose from 0.241°C in June to 0.299°C in July, 2017. This is lower than July 2016 but considerably higher than July 2015. The interesting point was a sudden rise on about July 24, which is responsible for all the increase since June. It may be tapering off now. 
<br /><br />It was generally warm in temperate Asia and the Middle East, and even Australia. Antarctica was mixed, not as cold as June. The Arctic has been fairly cool. <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/data/freq/days.png" /><br /><br /><br /><br />Nick Stokeshttps://plus.google.com/103029875534779648576noreply@blogger.com0tag:blogger.com,1999:blog-7729093380675162051.post-26432620540633279082017-07-22T02:32:00.000+10:002017-07-22T02:32:27.690+10:00NOAA's new ERSST V5 Sea surface temperature and TempLSThe paper describing the new version V5 of ERSST has been <a href="https://doi.org/10.1175/JCLI-D-16-0836.1">published</a> in the Journal of Climate. The data is posted, and there is a NOAA descriptive page <a href="https://www.ncdc.noaa.gov/data-access/marineocean-data/extended-reconstructed-sea-surface-temperature-ersst-v5">here</a>. From the abstract of the (paywalled) paper, by Huang et al: <br /><blockquote>This update incorporates a new release of ICOADS R3.0, a decade of near-surface data from Argo floats, and a new estimate of centennial sea-ice from HadISST2. A number of choices in aspects of quality control, bias adjustment and interpolation have been substantively revised. The resulting ERSST estimates have more realistic spatio-temporal variations, better representation of high latitude SSTs, and ship SST biases are now calculated relative to more accurate buoy measurements, while the global long-term trend remains about the same. </blockquote>A lot of people have asked about including ARGO data, but it may be less significant than it seems. ARGO floats only come to the surface once every ten days, while the more numerous drifter buoys are returning data all the time. There was a clamor for the biases to be calculated relative to the more accurate buoys, but as I frequently argued, as a matter of simple arithmetic it makes absolutely no difference to the anomaly result. 
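That arithmetic is easy to verify with made-up numbers: shifting every reading by a constant shifts the normal by the same constant, so the anomalies are identical.

```python
# Hypothetical SST readings and a constant bias adjustment
readings = [14.2, 14.5, 14.9, 15.1]
offset = 0.077                      # any constant shift

def anomalies(vals):
    # anomaly = value minus the mean over the base period
    normal = sum(vals) / len(vals)
    return [v - normal for v in vals]

a1 = anomalies(readings)
a2 = anomalies([t - offset for t in readings])
# a1 and a2 agree to rounding error: the offset cancels
```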
And sure enough, they report that it just reduces all readings by 0.077°C. That can't affect trends, spatial patterns etc. <br /><br />The new data was not used for the June NOAA global index, nor for any other indices that I know of. But I'm sure it will be soon. So I have downloaded it and tried it out in TempLS. I have incorporated it in place of the old V3b. So how much difference does it make? The abstract says<br /><blockquote>Furthermore, high latitude SSTs are decreased by 0.1°–0.2°C by using sea-ice concentration from HadISST2 over HadISST1. Changes arising from remaining innovations are mostly important at small space and time scales, primarily having an impact where and when input observations are sparse. Cross-validations and verifications with independent modern observations show that the updates incorporated in ERSSTv5 have improved the representation of spatial variability over the global oceans, the magnitude of El Niño and La Niña events, and the decadal nature of SST changes over 1930s–40s when observation instruments changed rapidly. Both long (1900–2015) and short (2000–2015) term SST trends in ERSSTv5 remain significant as in ERSSTv4. </blockquote>The sea ice difference may matter most - this is a long standing problem area in incorporating SST in global measures. On the <a href="https://www.ncdc.noaa.gov/data-access/marineocean-data/extended-reconstructed-sea-surface-temperature-ersst-v5">NOAA page</a>, they show a comparison graph: <br /><br /><img src="https://www.ncdc.noaa.gov/sites/default/files/styles/716px_width/public/Globally_annually_avg_SSTA_0.jpg?itok=c855stMq" /><br /><br />There are no obvious systematic trend differences. The most noticeable change is around WWII, which is a bit of a black spot for SST data. A marked and often suspected peak around 1944 has diminished, with a deeper dip around 1942. <br /><br />TempLS would be expected to reflect this, since most of its data is SST. 
Here is the corresponding series for TempLS mesh plotted: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/07/v4v5.png" /><br /><br />Global trends (in °C/century) are barely affected. Reduced slightly in recent decades, increased slightly since 1900:<br /><br /><table><tbody><tr><th>Start year</th><th>End year</th><th>TempLS with V4</th><th>TempLS with V5</th></tr><tr><td>1900</td><td>2016</td><td>0.769</td><td>0.791 </td></tr><tr><td>1940</td><td>2016</td><td>0.978</td><td>0.974 </td></tr><tr><td>1960</td><td>2016</td><td>1.489</td><td>1.465 </td></tr><tr><td>1980</td><td>2016</td><td>1.631</td><td>1.607 </td></tr></tbody></table><br />Almost identical behaviour is seen with TempLS grid. <br /><br /><h4>NOAA global surface temperature down just 0.01°C (2017-07-20)</h4>Down from 0.83°C in May to 0.82°C in June (report <a href="https://www.ncdc.noaa.gov/sotc/global/201706">here</a>). I don't normally post separately about NOAA, but here I think the striking difference from GISS/TempLS mesh is significant. GISS <a href="https://moyhu.blogspot.com.au/2017/07/giss-june-down-019-from-may.html">went down</a> 0.19°C, and TempLS mesh by 0.12°C. But TempLS grid actually rose, very slightly. I have <a href="http://www.moyhu.blogspot.com.au/2014/08/templs-and-noaa-are-converging.html">often noted</a> the close correspondence between NOAA and TempLS grid (and the looser one between TempLS mesh and GISS) and attributed the difference to the better polar coverage of GISS etc. <br /><br />This month, the cause of that difference is clear, as is the relative coolness of June in GISS.
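The trends in the table above are ordinary least-squares slopes scaled to °C/century. A sketch of that calculation, using a made-up series rather than the actual TempLS output:

```python
import numpy as np

def trend_per_century(years, anomalies):
    """OLS slope of an annual anomaly series, scaled to °C per century."""
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return 100.0 * slope_per_year

# An illustrative series warming at exactly 0.8 °C/century:
years = np.arange(1900, 2017)
anoms = 0.008 * (years - 1900)
print(round(trend_per_century(years, anoms), 3))  # 0.8
```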
With <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#mesh">TempLS reports</a>, I post a breakdown of the regional contributions. These are actual contributions, not just average temperature. So in the following: <br /><br /><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/07/break.png" /><br /><br />you see that the total dropped by about 0.12°C, while Antarctica dropped from contributing 0.07°C to -0.07°C, a difference that slightly exceeded the global total drop of 0.12°C. <br /><br />That doesn't mean that, but for Antarctica, there would have been no cooling. May had been held up by the relative Antarctic warmth. But it is a further illustration of the difference between the interpolative procedures of GISS and TempLS and the cruder grid-based processes of NOAA and TempLS grid. I would probably have abandoned TempLS grid, or at least replaced it with a more interpolative version (post coming soon), if it were not for the correspondence with NOAA and HADCRUT. <br /><br /><span style="color: red;"><b>Update:</b> I see that the <a href="http://journals.ametsoc.org/doi/10.1175/JCLI-D-16-0836.1">paper for ERSST V5</a> has just been published in J Climate. I'll post about that very soon, and also, maybe separately, give an analysis of its effect in TempLS. I see also that NOAA was still using V4 for June; I assume they will use V5 for July, as I expect I will. The NOAA ERSST V5 page is <a href="https://www.ncdc.noaa.gov/data-access/marineocean-data/extended-reconstructed-sea-surface-temperature-ersst-v5">here</a>. </span><br /><br />Here is the NOAA map for the month.
You can see how the poles are missing.<br /><br /><img border="0" src="https://www.ncdc.noaa.gov/sotc/service/global/map-blended-mntp/201706.gif" width="700" /><br /><br /><h4>GISS June down 0.19°C from May (2017-07-15)</h4>GISS was down from 0.88°C in May to 0.69°C in June. The GISS report is <a href="https://data.giss.nasa.gov/gistemp/news/20170714/">here</a>; they say it was the fourth warmest June on record. The drop was somewhat more than the <a href="http://moyhu.blogspot.com/2017/07/june-global-surface-temperature-down-012.html">0.12°C in TempLS</a>. The most recent month that was cooler than that was November 2014. <br><br>The overall pattern was similar to that in TempLS. The big feature was cold in Antarctica, to which both GISS and TempLS mesh are sensitive, more so than HADCRUT or NOAA. Otherwise, as with TempLS, it was warm in Europe, extending through Africa and the Middle East, and also through the Americas. Apart from Antarctica, the main cold spot was NW Russia. <br><br>So far, July is <a href="https://moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#NCAR">also cold</a>, although with some signs of warming a little from June. As usual, I will compare the GISS and previous TempLS plots below the jump. <br><a name='more'></a><br>Here is GISS<br><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/07/GISSjun.jpg"><br><br>And here is the TempLS spherical harmonics plot <br><br><img src="https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/07/map.png">
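The regional breakdown mentioned earlier reports contributions rather than plain averages: each region contributes its area fraction times its regional mean anomaly, so the contributions sum exactly to the global figure. A minimal sketch with made-up numbers (the area fractions and anomalies below are hypothetical, not TempLS values):

```python
# Each region's contribution = (area fraction) * (regional mean anomaly),
# so the contributions sum to the global mean anomaly.
regions = {
    "Antarctica": (0.035, -2.0),   # (area fraction, mean anomaly in °C)
    "Arctic":     (0.030,  1.0),
    "Rest":       (0.935,  0.4),
}

contrib = {name: frac * anom for name, (frac, anom) in regions.items()}
global_mean = sum(contrib.values())
# A small-area region with a large anomaly (Antarctica here, contributing
# -0.07) can move the global total noticeably.
print({name: round(c, 3) for name, c in contrib.items()})
print(round(global_mean, 3))
```

This is why a strong Antarctic swing can account for most of a month-to-month change in the global mean, even though Antarctica's area fraction is small.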