Thursday, April 29, 2021

No, global warming is not 50% of what CMIP6 models predict.

The title is a reference to a post by Dr Roy Spencer titled
"An Earth Day Reminder: “Global Warming” is Only ~50% of What Models Predict"
It was reposted at WUWT here. I commented at both places.

The title is somewhat misleading; the post treats a subset of SST (sea surface temperature), not global warming, and deals not with the failure of a prediction but with a possible discrepancy in SST over the years 1979-2020, as modelled in the latest round (CMIP6). Here is Dr Roy's main plot:

It isn't true, and the reason is that only a small subset of CMIP6 results has yet appeared. Roy (and I) use the results posted at the KNMI Climate Explorer. The two key data problems are:
  • Roy is using near-surface air temperature (TAS) restricted to ocean. The CMIP version of SST is TOS, which is not yet posted for CMIP6. Both were posted for CMIP5 (see below), although TOS came later. I'll look at the CMIP5 results, and show how TOS and TAS differ considerably. CMIP5 TOS is quite close to observation in 1979-2020.
  • Although 68 simulations have been published for TAS, as used by Roy, 50 of those come from one model, CanESM5. This weights the results toward high warming in that period (and others); if CanESM5 is removed, the CMIP6 results are fairly consistent with CMIP5.
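To see why one heavily sampled model matters, here is a toy calculation (with made-up trend numbers, not the actual KNMI data) comparing a plain run-average with a one-vote-per-model average:

```python
# Toy illustration (hypothetical trends in degC/decade, not real data): with
# 50 of the runs coming from one warm model, a plain average over runs is
# pulled toward it. Averaging within each model first gives each model one vote.
runs = {
    "CanESM5": [0.28] * 50,           # 50 runs of one warm model (illustrative)
    "ModelA":  [0.18, 0.20, 0.19],
    "ModelB":  [0.16, 0.17],
}
all_runs = [t for trends in runs.values() for t in trends]
run_mean = sum(all_runs) / len(all_runs)              # run-weighted mean
per_model = [sum(v) / len(v) for v in runs.values()]
model_mean = sum(per_model) / len(per_model)          # one vote per model
print(f"run-weighted {run_mean:.3f}, model-weighted {model_mean:.3f}")
# → run-weighted 0.271, model-weighted 0.212
```

The run-weighted mean sits close to the over-represented model; giving each model equal weight pulls it back toward the rest of the ensemble.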

CMIP5 results for TOS and TAS, compared with observations

I have used the CMIP5 results for RCP4.5. The scenario should not matter for historic data, where the forcings are known. They have some influence in the years since about 2011.

Here is a graph of results:

I have given both the 60S-60N and 90S-90N results from CMIP5; Roy uses the former, which does avoid sea-ice complications. Like Roy, I have normalised so that the 1979-1983 averages of all series are zero. For the trends, I have used an AR(1) model, since the residuals are clearly highly autocorrelated. HADSST3 is from here and NOAA SST from here. The multi-model means are as supplied by KNMI. There are a couple of things to unpack here:
  • The correct CMIP5 set to compare is TOS 90S-90N, and that is quite close to HADSST3; the trend is 0.14°C/decade, vs 0.135 for HADSST3, with the difference much smaller than the σ's. NOAA SST is lower, but still well within the CI range.
  • TAS, as used by Roy for CMIP6, is substantially higher at 0.197°C/dec. However, Roy used the latitude range 60S-60N, for which TOS and TAS are closer.
So CMIP5 matched observed SST very well in this time range.
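For anyone wanting to reproduce the processing, here is a minimal sketch (function names are mine; the real inputs are the annual anomaly series from KNMI and HADSST3) of the baselining and of an OLS trend with the AR(1) adjustment to its uncertainty:

```python
# Sketch of the steps described above: zero the 1979-1983 mean, fit an OLS
# trend, and inflate the trend standard error for AR(1)-autocorrelated
# residuals via the usual (1+r1)/(1-r1) factor. Assumes annual data with
# non-zero residuals.
def baseline(series, years, base=(1979, 1983)):
    """Shift a series so its mean over the base period is zero."""
    b = [v for y, v in zip(years, series) if base[0] <= y <= base[1]]
    m = sum(b) / len(b)
    return [v - m for v in series]

def ols_trend(years, vals):
    """Return (slope per year, lag-1 residual autocorrelation, SE inflation)."""
    n = len(years)
    xbar, ybar = sum(years) / n, sum(vals) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, vals)) / sxx
    resid = [y - ybar - slope * (x - xbar) for x, y in zip(years, vals)]
    r1 = (sum(a * b for a, b in zip(resid, resid[1:]))
          / sum(a * a for a in resid))          # lag-1 autocorrelation
    infl = ((1 + r1) / (1 - r1)) ** 0.5         # AR(1) inflation of trend SE
    return slope, r1, infl
```

The slope comes out in °C/year; multiply by 10 for the °C/decade figures quoted above. The inflation factor widens the ordinary-least-squares trend standard error to allow for lag-1 autocorrelated residuals.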

CMIP6 TAS data limitations

As noted, posting of CMIP6 results is at a very early stage. For each scenario there are 68 TAS simulations, but 50 are from the model CanESM5. In the following plot, analogous to Roy's above, I have shown the ssp585 CanESM5 results in faint blue, with other CMIP6 TAS (ocean, 60S-60N) results in red. I have shown HADSST3 and NOAA SST for comparison.

Removing CanESM5 does not change the profile very much, but does radically change the multi-model mean (not shown). But again, these are TAS results, and in CMIP5 there was a 0.022°C/decade trend differential between TAS and TOS. If that is subtracted here, we get:

It is now clear that the observed results are well within the model range, being only a little below average.
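The subtraction just applied can be sketched as follows (the 0.022°C/decade figure is from the CMIP5 comparison above; the series values and function name are illustrative):

```python
# Sketch: remove the CMIP5-derived TAS-TOS trend differential from a TAS
# anomaly series, as a proxy for the not-yet-published CMIP6 TOS.
TAS_TOS_DIFF = 0.022  # degC/decade, CMIP5 TAS trend minus TOS trend

def tas_to_tos_proxy(years, tas, ref_year=1979):
    """Subtract a linear trend of TAS_TOS_DIFF per decade, zero at ref_year."""
    return [v - TAS_TOS_DIFF * (y - ref_year) / 10.0 for y, v in zip(years, tas)]

print(tas_to_tos_proxy([1979, 1999, 2019], [0.0, 0.3, 0.6]))
```

This changes only the trend of the series, not its interannual shape, which is why the profile in the plot is unchanged while the multi-decade warming is reduced.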

Conclusion

The difference between modelled and observed warming is far less than Roy Spencer claims. For CMIP5, using the right model variable, TOS, there is very little difference. And after adjusting for the TOS-TAS difference in CMIP6, and allowing for the excessive weight given to one model, the observed SST is well within the model range.

8 comments:

  1. Thank you for a very interesting comment regarding Roy Spencer's post!
    As far as I can see, you both agree that the observed SST decadal trend is about 0.12-0.13 C. (ref Fig. 2 in Roy's post http://www.drroyspencer.com/2021/04/an-earth-day-reminder-global-warming-is-only-50-of-what-models-predict/ ).
    Hence, your disagreement regards what the models have predicted, and some of that relates to whether 90-90 is a better area to evaluate compared to 60-60.
    But further, you state that you have used the RCP 4.5 scenario, while it is not clear which scenario underlies Spencer's calculations. Up until 2015-2020, you may perhaps say that RCP 6.0 (or even 8.5) is as close to what has actually taken place as 4.5. Does the choice of RCP matter in this regard?
    T. Klemsdal, Oslo

    ReplyDelete
    Replies
    1. Thanks, Tor
      On the scenario, the choice shouldn't matter much, since most of the data is historic, with known forcings. CMIP5 would be relying on predicted scenario from about 2006 onwards, but scenario differences are small in the early years. There is even less effect with CMIP6.

      On 60-60 vs 90-90, there is some merit in using 60-60 to avoid issues handling ice. However, if you want to compare with HADSST etc, really 90-90 should be used. I think Roy may have calculated ERSST for 60-60, but his source isn't clear.

      Gavin has pointed to a new source of CMIP6 data which has TOS, so I'll soon be able to complete the analysis there.

      Delete
    2. Just to put some numbers on scenario choice, the RCP database shows less than 0.1W/m2 difference in total 2020 forcing, relative to pre-industrial, between RCP8.5 and 4.5. Relative to 1979 that amounts to about 5% difference. All of that difference occurs after 2005 so, given the long equilibrium response of the Earth system, we can be sure the difference in warming trend since 1979 will be less than 5%. The one-model-per-member CMIP5 means at Climate Explorer show a 3% difference, though there is a small discrepancy in the set of contributing models to each scenario mean.

      Delete
  2. It still shows all models projecting more warming than observations. All you've done is cherry pick model runs that best fit the observations.

    ReplyDelete
    Replies
    1. "All you've done is cherry pick model runs that best fit the observations."

      Nick discussed, at length, and repeatedly, his rationale. Do you have any technical disputation of it?

      Delete
  3. You just validated his point. Nick Stokes simply rationalized his cherry-picking to get a 'best fit' for the models, to make the models *appear* to be less of a failure compared to observations than they really were.
    Nick Stokes displayed what's known as "confirmation bias".
    The scientific method is used to try to disprove a hypothesis. Confirmation bias is used to try to validate the hypothesis -- and it's anti-science.

    ReplyDelete
    Replies
    1. You should read the article before posting. Nick has several good technical reasons for his choices. Do you have any technical rebuttal(s)?

      Delete
    2. I recommend you read his subsequent post, and the comments. If WUWT were as assiduous in avoiding cherry-picking, most of their most frequent posters would fall away.

      Delete