Saturday, November 23, 2019

A good-faith article by a recovering sceptic, but it needs care with sources.

There is an interesting article in Reason by Ron Bailey, titled "What Climate Science Tells Us About Temperature Trends" (h/t Judith Curry). It is lukewarmish, but, as its author notes, that is movement from a more sceptical view. It covers a range of issues.

His history shows up, though, in the prominence given to sceptic sources that are not necessarily arguing in such good faith. True, he seems to reach a balance, but he needs to be more sceptical of scepticism. An example is this:

"A recent example is the June 2019 claim by geologist Tony Heller, who runs the contrarian website Real Climate Science, that he had identified "yet another round of spectacular data tampering by NASA and NOAA. Cooling the past and warming the present." Heller focused particularly on the adjustments made to NASA Goddard Institute for Space Studies (GISS) global land surface temperature trends. "

He concludes that "Adjustments that overall reduce the amount of warming seen in the past suggest that climatologists are not fiddling with temperature data in order to create or exaggerate global warming", so he wasn't convinced by Heller's case. But as I noted here, that source should be rejected outright. It compares one dataset (a land average) in 2017 with something different (Ts, a land/ocean average based on land data) in 2019 and attributes the difference to "tampering". Although I raised that at the source, no correction or retraction was ever made, and so it still pollutes the discourse.

A different kind of example is the undue prominence accorded to Christy, and to Michaels and Rossiter. He does give the counter-arguments, and seems to favor those counters. Since the weighting he gives to those sources probably reflects the orientation of his audience, that may in the end be a good thing, but I hope we will get to a state where these recede to their rightful place.

His conclusion is:
"Continued economic growth and technological progress would surely help future generations to handle many—even most—of the problems caused by climate change. At the same time, the speed and severity at which the earth now appears to be warming makes the wait-and-see approach increasingly risky. Will climate change be apocalyptic? Probably not, but the possibility is not zero. So just how lucky do you feel? Frankly, after reviewing recent scientific evidence, I'm not feeling nearly as lucky as I once did."

Some might see that as still overrating his luck, but it is an article worth reading.


  1. Hi Nick, slightly off-topic but the latest posting by Willis on WUWT strikes me as being very interesting. While I have a basic understanding of the chem/phys processes used in the models, I don't really know how they actually function. My interest is in your opinion re the effect of this on the model ensemble predictions. Intuitively it seems to me that this would lead to models running hotter than reality. Your thoughts.....
    Rgds Terry

    1. Terry,
      One thing to remember is that they are climate models, not weather models. They don't predict the weather that actually happens, because there is no synchronisation with reality. But they do generate plenty of weather that won't happen at the specified time, and it is that weather which determines the average behaviour that leads to climate. The same is true of various other properties. Willis looks at a paper that matches observed and model values of a particular variable, direct sunlight at the surface, and draws negative conclusions that the paper itself, correctly, does not.

      What counts with insolation is not direct sunlight as such, but total heat arriving. This is made up of SW (actual sunlight) and LW (IR). Some sunlight is intercepted by clouds, but is not lost; much is converted to LW and still reaches the surface. It is the total that affects the climate. If you look at the figures for CMIP in the paper, the SW is quite variable, but the total of SW and LW arriving is much less so. If a model has too much cloud there, SW is converted to LW, but not lost.
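A toy numerical sketch of the compensation described above (all numbers invented for illustration, not actual CMIP output): direct SW at the surface can vary considerably between hypothetical models, while the SW + LW total varies much less, because cloudier models trade SW for downwelling LW.

```python
# Hypothetical surface fluxes for three invented models, W/m^2.
# Cloudier models deliver less direct SW but more downwelling LW,
# so the total heat arriving at the surface is far less variable.
models = {
    "model_A": (185.0, 345.0),  # (surface SW, surface LW)
    "model_B": (170.0, 358.0),  # cloudier: less SW, more LW
    "model_C": (195.0, 336.0),  # clearer: more SW, less LW
}

sw = [fluxes[0] for fluxes in models.values()]
total = [fluxes[0] + fluxes[1] for fluxes in models.values()]

def spread(xs):
    """Range (max - min) across the ensemble."""
    return max(xs) - min(xs)

print("SW spread:   ", spread(sw))     # 25.0 W/m^2
print("Total spread:", spread(total))  # 3.0 W/m^2
```

The SW values alone span 25 W/m^2, but the SW + LW totals span only 3 W/m^2, which is the point being made: a model with too much cloud converts SW to LW without losing the heat.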

      Another aspect relevant to the weather/climate issue for GCMs is location. It is quite likely that a GCM will predict something happening, but get the place slightly wrong. For example, a site might be near a mountain range, and the GCM will say something about orographic cloud formation. It may get the amount of cloud about right, but not the speed of formation, so the cloud isn't in quite the right place. Again this doesn't matter much for climate prediction, but will create a discrepancy with observation at specific points. Getting the seasonal cycle of cloud out of kilter would be somewhat similar.

      Plus, of course, observations aren't perfect. However, the difference between models is not affected by that.

    2. Wonderin Willis is quite ignorant about stochastic math. If one doesn't know the higher-order statistics of the distribution but does know the average, then a maximum entropy estimate is appropriate. This will generate a large "uncertainty" about the value which reasonably matches the 50% level shown on that chart.
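As a sketch of the maximum-entropy idea invoked above (illustrative only, not tied to the chart in question): for a positive quantity where only the mean is known, the maximum-entropy distribution is the exponential with that mean, and its standard deviation equals the mean, so the estimate carries a large built-in "uncertainty".

```python
import math
import random

# Maximum-entropy estimate for a positive quantity with known mean:
# the exponential distribution. Its std equals its mean, so knowing
# only the average forces a wide spread around it.
mean = 10.0
rng = random.Random(42)
samples = [rng.expovariate(1.0 / mean) for _ in range(100_000)]

est_mean = sum(samples) / len(samples)
est_std = math.sqrt(sum((x - est_mean) ** 2 for x in samples) / len(samples))

print(round(est_mean, 1))  # close to 10.0
print(round(est_std, 1))   # also close to 10.0 -- spread as large as the mean
```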

      Willis probably shouldn't be posting on stuff he was never educated on, but I guess that's why they call him Wonderin.

  2. Thanks for that. I understand the SW/LW etc stuff (PhD in Phys/Chem spectroscopy and kinetics but heading to retirement having spent career more in engineering and boundary layer dispersion with modest meteorology). I have the paper now and will have a good read of it. I hope you don't mind if I come back to you with more clarification requests. PS I am more at the Curry/Spencer/Christy end of the skeptic scale. Rgds

  3. Everyone knows that climate models are imperfect. But there are wildly different views on what those imperfections mean.

    From my point of view, there are a few takeaways:
    1) climate models should be viewed as "alternate Earth" simulations, where slightly different physics as well as much coarser resolution (100 km vs. Planck scale) lead to different behaviors.
    2) the level to which climate models produce earth-like weather patterns (currents, hadley cells, etc.) is actually amazingly impressive.
    3) some of the model errors in some ways underline just how impressively the models are doing - e.g., the double-ITCZ problem demonstrates that climate models aren't just some glorified fitting routine, but really are figuring these things out from the ground up.
    4) it isn't necessary to have a perfect ground state reproduction to get useful information about perturbations.
    5) In many cases where models differ from each other, it is important to remember that observations aren't perfect. Similarly, some observations are sensitive to chaotic behavior, such that they would differ from even a "perfect" model (and different runs of that "perfect" model would differ from each other). A good model-observation intercomparison should discuss both those sources of potential mismatches, and compare model-observation error to "climate noise" error and observational uncertainty.
    6) An analogy I like: if you turn up the heat dial for a pot of water on a stove, there are a number of unknowns: the mass of the water, the current heat-rate of the stove-top, the conduction between the stove-top and the pot, turbulent characteristics of the water, what the settings on the heat dial actually mean... so you have a lot of different researchers attempt to estimate all those factors, "ground-truthed" by a thermometer in the pot, and then they each use their individual models to project the future warming of the water in the pot. A number of different physical solutions can reproduce the observed quantities to within observational error, and each will produce a slightly different future projection, but looking at the range of those futures will give a lot of insight into what the real system will do.
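The pot analogy above can be sketched numerically (every parameter and number here is invented): each "researcher" keeps only the parameter combinations that hindcast the observed warming rate to within thermometer error, then projects the response to turning the dial up. The projections cluster tightly even though the accepted masses and efficiencies differ widely.

```python
import random

# Toy "ensemble of pots": sample (mass, heat-transfer efficiency) pairs,
# accept only those that reproduce the observed warming rate within
# measurement error, then project the response to a higher dial setting.
rng = random.Random(0)
OBS_RATE = 2.0   # observed warming rate, deg/min, at dial power 10
OBS_ERR = 0.1    # thermometer uncertainty

def warming_rate(dial_power, mass, efficiency):
    """Simple energy balance: rate proportional to delivered power per mass."""
    return dial_power * efficiency / mass

ensemble = []
while len(ensemble) < 20:
    mass = rng.uniform(0.5, 2.0)
    eff = rng.uniform(0.3, 1.0)
    if abs(warming_rate(10.0, mass, eff) - OBS_RATE) < OBS_ERR:
        ensemble.append((mass, eff))

# Projection: dial power increased from 10 to 15 units.
projections = [warming_rate(15.0, m, e) for m, e in ensemble]
print(min(projections), max(projections))  # all within about 5% of 3.0 deg/min
```

The accepted (mass, efficiency) pairs span the whole sampled range, yet every projection lands near 3.0 deg/min, because the hindcast constraint pins down the ratio that matters for the response.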

    So when Willis says, "One thing is obvious. Since they can all hindcast quite well, this means that they must have counteracting errors that are canceling each other out": well, sort of. First, he hasn't actually demonstrated that this particular kind of error will necessarily matter for response of global average surface temperature to forcing (which is the main factor for both hindcasts & projections). But second, the whole point of having an ensemble is that even if you don't know all the physics and parameters, you can see what responses are robust to your different assumptions - and the thing is that pretty much EVERY model that does okay at representing current observed climate shows substantial warming when forced by increasing GHG concentrations.


    ps. Btw, I might think through your "models running hotter than reality" intuition... to use the stove analogy, if everyone thinks that the stove-top is at 60 degrees C when it is really only 50 degrees C, and turning up the dial adds 5 degrees in either case... I think the pot will warm less in response to the dial turning up for the "hot" models. But, as Nick explained above, it isn't clear that the models with high surface sunlight are actually getting any more net energy than the other models, so we'd need a more complex analogy to actually develop an intuition on this.