Friday, September 15, 2023

GISS August global temperature up by 0.05°C from July.

The GISS V4 land/ocean temperature anomaly was 1.24°C in August 2023, up from 1.19°C in July. This 0.05°C rise is nearly the same as the 0.056°C rise reported for TempLS.

As with TempLS, August was by a large margin the warmest August in the record - the next warmest was 1.02°C in 2016. As GISS emphasises, three record months in a row make it by far the warmest summer in the record.

As usual here, I will compare the GISS and earlier TempLS plots below the jump.

Here is the GISS V4 plot:


And here is the TempLS V4 FEM-based plot:


This post is part of a series that has now run for twelve years. The GISS data completes the month cycle, and is compared with the TempLS result and map. GISS lists its reports here, and I post the monthly averages here.
The TempLS mesh data is reported here, and the recent history of monthly readings is here. Unadjusted GHCN is normally used, but if you click the TempLS button there, it will also show adjusted data, and results from different integration methods. There is an interactive graph using the 1981-2010 base period here, which you can use to show different periods or to compare with other indices. There is a general guide to TempLS here.

The reporting cycle starts with the TempLS report, usually about the 8th of the month. Then when the GISS result comes out, usually about the 15th, I discuss it and compare with TempLS. The TempLS graph now comes from a high resolution regular grid on the sphere; the residuals are displayed more directly using a triangular grid in a WebGL plot here.
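As an aside on what "a high resolution regular grid on the sphere" involves: when averaging a gridded anomaly field globally, cells must be weighted by area, which on a regular latitude/longitude grid scales with the cosine of latitude. This toy sketch is not TempLS's actual integration method (TempLS uses more sophisticated schemes); the 2° grid and uniform test field are illustrative assumptions only:

```python
import numpy as np

# Hypothetical anomaly field on a regular 2° lat x 2° lon grid.
lats = np.arange(-89.0, 90.0, 2.0)
lons = np.arange(0.0, 360.0, 2.0)
anom = np.full((lats.size, lons.size), 1.24)  # uniform field, for checking

# Cell area on a sphere scales with cos(latitude); weight each row.
w = np.cos(np.deg2rad(lats))
weights = np.repeat(w[:, None], lons.size, axis=1)

global_mean = (anom * weights).sum() / weights.sum()
print(f"global mean: {global_mean:.2f} °C")  # uniform field -> 1.24
```

With a uniform field the weighted mean simply returns the field value, which is a quick sanity check that the weights normalise correctly; an unweighted mean of a realistic field would overweight the polar rows.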

A list of earlier monthly reports of each series in date order is here:

  1. NCEP/NCAR Reanalysis report
  2. TempLS report
  3. GISS report and comparison with TempLS



5 comments:

  1. The whole Scafetta issue is... ridiculous. When evaluating models vs. reality, an important thought experiment is to see how well the evaluation would work if you picked an arbitrary model with an ensemble of starting-condition options to be "truth", where one of those ensemble members is considered to be "reality". Then the evaluation should, on average, not exclude more than a small fraction of the other ensemble members. If we applied Scafetta's method to this thought experiment, and assumed perfect information about temperature (which we have, because it is a model!), then his method would exclude _all_ the other ensemble members. Which is a fail.
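    [To make this thought experiment concrete, here is a toy numerical sketch. The ensemble size, noise level, and the crude z-style exclusion test are all illustrative assumptions, not Scafetta's actual method: members share a forced trend and differ only in internal variability, and a calibrated test should exclude roughly its nominal error rate of them, not all of them.]

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical ensemble: 20 members, 40 years of annual anomalies,
    # a shared forced trend plus independent internal variability.
    n_members, n_years = 20, 40
    trend = 0.02 * np.arange(n_years)               # common warming, degC/yr
    noise = rng.normal(0.0, 0.1, (n_members, n_years))
    ensemble = trend + noise

    def excludes(truth, member, z=2.0):
        """Crude test: is the mean difference large relative to its
        standard error? A sound test should rarely reject a member
        drawn from the same distribution as 'truth'."""
        diff = truth - member
        se = diff.std(ddof=1) / np.sqrt(len(diff))
        return abs(diff.mean()) / se > z

    # Treat member 0 as "reality" and test all the others against it.
    truth = ensemble[0]
    n_excluded = sum(excludes(truth, m) for m in ensemble[1:])
    print(f"excluded {n_excluded} of {n_members - 1} members")
    ```

    [Only a handful of the 19 members should fail this test; a method that rejects all 19 is rejecting internal variability itself, which is MMM's point.]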

    -MMM

    ReplyDelete
    Replies
    1. Yes, I agree. I was trying to say something like that with my planet A, planet B comment (first one).

      Delete
  2. And JC keeps hosting junk... the Christofides paper _almost_ does the right experiment, and then ruins it at the last minute. They apply their approach to the Earth, and then apply it to climate models - which is basically what I recommended in my prior comment. The problem is this sentence: "Here we used the mean (CMIP6 mean) of the output series of the Coupled Model Intercomparison Project (CMIP6) averaged over the globe". Using the mean of the CMIP6 models totally breaks the comparison! The Earth is one instantiation of a possible Earth, not a mean of all possible Earths. Similarly, they need to apply their approach to a single model run at a time, not to the mean of all model runs. Sigh...
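    [The statistical point here can be illustrated numerically. In this toy sketch (the run count, trend, and noise level are illustrative assumptions, not CMIP6 values), each run is a common forced signal plus independent internal variability; averaging runs shrinks the internal variability roughly as 1/sqrt(N), so the ensemble mean is a much smoother series than any single realization - including the real Earth.]

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical toy ensemble: each run = common forced signal plus
    # independent internal variability (ENSO-like noise).
    n_runs, n_years = 30, 50
    signal = 0.02 * np.arange(n_years)              # forced trend
    runs = signal + rng.normal(0.0, 0.15, (n_runs, n_years))

    ensemble_mean = runs.mean(axis=0)

    # Detrended variability: a single run keeps its internal noise,
    # while the ensemble mean averages most of it away.
    single_var = (runs[0] - signal).std()
    mean_var = (ensemble_mean - signal).std()
    print(f"single-run noise std:    {single_var:.3f}")
    print(f"ensemble-mean noise std: {mean_var:.3f}")
    ```

    [So a method tuned to the noisy Earth series will behave very differently when fed the smooth ensemble mean - the comparison is apples to oranges, as the comment argues.]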

    -MMM

    ReplyDelete
    Replies
    1. To be clear, it is obvious to me that Christofides et al. are falling into the Murry Salby trap: the ENSO cycle/CO2 correlation swamps the more gradual, accumulating influence of CO2 radiative forcing. Therefore, my prior is that if Christofides et al. applied their approach to a single model realization, they would find no influence of CO2 on temperature, just as they found with the historical data. And since that is inconsistent with our understanding of the model, in which we know that CO2 does have an influence, that would be proof that their approach is incapable of actually detecting an influence of CO2 on climate.

      -MMM

      Delete
    2. Turns out I was wrong - the real problem with their model comparison is that they were comparing against concentration-driven climate models!

      Delete