Saturday, November 8, 2014

GCMs are models

I'd like to bring together some things I expound from time to time about GCMs and predictions. It's a response to questions like: why didn't GCMs predict the pause? Or why can't they get the temperature right in Alice Springs?

GCMs are actually models. Suppose you were designing the Titanic. You might make a scale model which, with suitably scaled dimensions (Reynolds number etc.), could be a very good model indeed. It would respond to various forcings (propeller thrust, wind, wave motion) just like the real boat. You would test it with various scenarios: hurricanes, maybe listing, maybe even icebergs. It can tell you many useful things. But it won't tell you whether the Titanic will hit an iceberg. It just doesn't have that sort of information.

So it is with GCMs. They too will tell you how the Earth's climate will respond to forcings. You can subject them to scenarios. But they won't predict weather. They aren't initialized to do that. And, famously, weather is chaotic. You can't actually predict it for very long from initial conditions. If models are doing their job, they will be chaotic too. You can't use them to solve an initial value problem.
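To see what that means in practice, here is a minimal sketch in Python, using the textbook Lorenz-63 system as a stand-in for a GCM (a toy, not climate code; the parameters are the standard textbook values). Two runs differing in the tenth decimal place of one variable soon bear no resemblance to each other:

import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # One explicit Euler step of the Lorenz-63 equations.
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])        # reference "weather"
b = a + np.array([1e-10, 0.0, 0.0])  # same state, perturbed in the 10th decimal
for step in range(1, 8001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 2000 == 0:
        print(step, np.linalg.norm(a - b))   # separation grows exponentially

The initial information is soon gone; what remains is the attractor, which is the analogue of the climate.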


How GCMs are used

So I'd better say a bit more about how GCMs are traditionally used - then I'll get onto a recent alternative, which is part of the motive for this post. Because GCMs can't reliably work from initial conditions, they are usually started well back in time from the period of interest. This is also common in CFD (computational fluid dynamics). Flows in CFD are also chaotic, and not usually solved as initial value problems. As with GCMs, a compelling reason is that an initial state just isn't known.

So people "wind back". The idea is to start with conditions that aren't expected to correspond to a measured state. rather, they strive for physical consistency. If you are modelling a near-incompressible fluid (explicitly), you'll want to make sure that the density is what it should be, and the velocity is divergence-free. Otherwise, explosion.

In fact, even that won't be right, but the artificial initial effects will wash out over time, and the flow will come into some sort of balance with the forcings. Then you can look at variations.
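Here is the same toy system used to sketch that idea (it reuses lorenz_step() from the snippet above; a real spin-up is vastly more involved). Two very different starting states, once the transient is discarded, yield nearly the same long-run statistics:

def climate_mean(start, n_spinup=20000, n_sample=200000):
    # Run the model, discard the spin-up, then average a statistic.
    state = np.array(start, dtype=float)
    for _ in range(n_spinup):       # artificial initial effects wash out here
        state = lorenz_step(state)
    total = 0.0
    for _ in range(n_sample):       # now in balance with the "forcings"
        state = lorenz_step(state)
        total += state[2]
    return total / n_sample

print(climate_mean([1.0, 1.0, 1.0]))     # two very different starts...
print(climate_mean([-20.0, 30.0, 5.0]))  # ...nearly the same "climate" mean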

So that is why a GCM, in its normal running, can't predict an El Nino. Good ones do El Ninos well, but with no synchronization with events on Earth. A good illustration is this video from GFDL:

[Embedded video: GFDL El Nino simulation]

Note that the months are numbered, but no year is given. It's a depiction of an El Nino, but not for any particular time. The months are significant, though - El Nino is weakly coupled to the annual cycle.

Here is another SST movie from GFDL:

[Embedded video: GFDL SST simulation]

Again it shows all sorts of interesting motions. These are not predictions of actual events, and they are not the consequence of any initial conditions. They are the results of forcings, fluid properties and topography. But it actually tells you a lot about real-world currents (and SST).

Decadal forecasting

This is the recent alternative that I mentioned. People are trying to get useful information for years in advance from initial conditions. Meehl wrote a paper in 2009, basically a prospectus, and a review earlier this year, subtitled "an update from the trenches".
Are we there yet? No, and I think success is still uncertain. For details, I can only suggest reading the paper, but here is a summary quote:
"It remains an important question as to whether or not decadal climate predictions will end up providing useful information to a wide group of stakeholders. Indications now are that temperature, with a greater signal-to-noise ratio, shows the most promise, with precipitation being more challenging. These two quantities are typically the ones that have been addressed so far in the literature. Since sources of skill are time dependent, it is important to emphasize that for the first 5 or so years of a decadal prediction, skill could come from the initial state, and after that skill arises because of the external forcing, with some regions having potentially greater skill than others. Further quantification with other variables needs to be done and applied in reliability studies, which are just now beginning, in order to demonstrate usefulness of decadal climate predictions."

So it sounds like five years max, at the moment. We'll see.

23 comments:

  1. The underlying oscillations of ENSO are likely highly predictable. The quasi-biennial oscillation (QBO) of stratospheric winds is very predictable, and it's apparently the case that it is the main driving force for the Southern Oscillation. Put that together with the other forcing functions due to angular momentum changes and variations in solar insolation, and a predictive model for ENSO is possible.

    http://arxiv.org/abs/1411.0815

    Someone has to convince me that GCMs are even necessary any longer to capture the first-order effects. The variations due to smaller-scale turbulence are minuscule in comparison to the huge impact that ENSO has on the earth's average climate from year to year. Toss out the chaos and red-noise arguments and concentrate on the determinism, because it is there.

    So what this is accomplishing is turning the problem from an initial-conditions situation to a boundary-condition scenario. The periodic forcing functions are synchronizing the ENSO to a predictable state -- these are the guide rails that are bounding the excursions and coaxing the oscillations to maintain a certain phase. The SOI only looks chaotic because the response function is not ideal and the additional cyclic forcing conditions are creating mixed beat frequencies.

    Solve the sloshing differential equations and check them against the data. An alignment and fit this good does not come about by chance.
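    A rough sketch of that kind of calculation, with purely illustrative numbers - the ~4.25 yr characteristic period is the Clarke et al. value cited later in this thread, but the damping and forcing amplitudes here are invented, not fitted to the SOI:

    import numpy as np

    dt = 0.01                          # time step, years
    t = np.arange(0.0, 100.0, dt)
    w0 = 2 * np.pi / 4.25              # characteristic frequency (~4.25 yr)
    gamma = 0.2                        # damping rate (illustrative)

    # Sum of periodic forcings at QBO-like, wobble-like and annual periods.
    forcing = (np.sin(2 * np.pi * t / 2.33)
               + 0.5 * np.sin(2 * np.pi * t / 6.4)
               + 0.3 * np.sin(2 * np.pi * t / 1.0))

    x, v = 0.0, 0.0
    response = np.empty_like(t)
    for i in range(len(t)):            # semi-implicit Euler integration
        v += (-w0**2 * x - gamma * v + forcing[i]) * dt
        x += v * dt
        response[i] = x
    # The fully deterministic response mixes beat frequencies and looks erratic.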


  2. Interesting post. I've wondered if the desire to make decadal predictions is actually motivated by a genuine scientific interest, or is really just a response to the criticism that GCMs didn't predict the pause. As you mention, they're really being used to try to understand how our climate responds to changes in forcings, rather than as genuine predictors. On the other hand, they are the same basic models as are used for weather prediction, so it's not as if using them for actual predictions is unprecedented.

    Replies
    1. Meehl's earlier paper, which could be seen as the start, was in 2009, before there was very much pause talk. Both then and now, he is enthusiastic about how useful skillful decadal forecasting would be.

      I've been a bit sceptical. The chaos issue, which limits NWP, is hard to get past. They seem to be looking for patterns on a decadal timescale - like ENSO, with some success - but whether they will fill in enough of the picture is a question.

    2. They will never be entirely successful in prediction unless they can predict volcanic disturbances and the long-term modulation in temperature. The latter curiously correlates with changes in the earth's angular momentum [1]. Since we can predict ENSO and CO2, if it turns out that we can also predict volcanoes and when future leap seconds will occur, then we may be able to predict temperature.

      I find the pause question interesting. Why don't people complain that we can't figure out when all future leap seconds will occur? We haven't had a leap second since June 30, 2012, so will we ever have another one? The deniers may deny further leap seconds. Why don't they complain when a pause in volcanic activity occurs? Do they think that volcanoes will cease to occur after a pause in activity?

      The whole attitude that the deniers display is nuts. The only thing nuttier is that scientists are giving their best shot at doing these kinds of predictions.


      [1] Dickey, Jean O, Steven L Marcus, and Olivier de Viron. “Air Temperature and Anthropogenic Forcing: Insights from the Solid Earth.” Journal of Climate 24, no. 2 (2011): 569–74.

    3. My problem with all observed correlations of this type is the difficulty of figuring out when they should be considered really significant. In all these cases it's impossible to do proper statistical testing based on objective criteria of significance, because the effective number of degrees of freedom consumed in selecting the explanatory variables from among a very large number of possible variables cannot be determined. Due to the strong but poorly understood autocorrelations, the effective number of degrees of freedom in the data is also difficult or impossible to estimate. When we are considering long-term variability, we are very close to the point where the most plausible conclusion is that observing a good fit proves almost nothing.

      We have many examples of how good fits have been obtained using sets of explanatory variables that are virtually certain to be unconnected, meaning that the correlations are almost certainly spurious. When that's the case, it's very difficult to give much weight even to correlations that can be supported by more plausible, but still highly vague, arguments about connecting mechanisms.

      When strong correlations are observed, it's certainly reasonable to spend some effort trying to figure out whether we can really propose a causal connection, and then perform further tests to confirm or weaken the idea. We should, however, not declare that something highly significant has been found when the evidence is too weak to confirm it, or when we cannot present a near-objective quantitative measure that supports claims of statistical significance.
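      The degrees-of-freedom point is easy to demonstrate with a small Monte Carlo sketch (the AR(1) coefficient and the |r| threshold are arbitrary choices for illustration): two unrelated but strongly autocorrelated red-noise series produce "good" correlations far more often than the nominal sample size would suggest.

      import numpy as np

      rng = np.random.default_rng(0)

      def ar1(n, phi=0.95):
          # Red-noise (AR(1)) series with lag-1 autocorrelation phi.
          x = np.zeros(n)
          for i in range(1, n):
              x[i] = phi * x[i - 1] + rng.standard_normal()
          return x

      n, trials, hits = 200, 2000, 0
      for _ in range(trials):
          r = np.corrcoef(ar1(n), ar1(n))[0, 1]
          if abs(r) > 0.3:   # |r| = 0.3 would be "significant" for 200 iid points
              hits += 1
      print(hits / trials)   # a sizable fraction, with zero real connection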

    4. Pekka,
      Could what you say above be a reason why modelers sometimes write that they feel that their models are more reliable than observations? Models could certainly be tidier, with the auto-correlation problems avoided. But then comes the question of whether their simplicity is actually instructive, given that they cannot be tied to observations for the reasons you suggest above.

      This is very, very difficult.

    5. ATTP: I've wondered if the desire to make decadal predictions is actually motivated by a genuine scientific interest, or is really just a response to the criticism that GCMs didn't predict the pause. As you mention, they're really being used to try to understand how our climate responds to changes in forcings, rather than as genuine predictors.

      Decadal predictions are a natural extension of seasonal prediction. Both produce economic advantages: they help to better plan production and logistics. Even a quite poor prediction can already yield large economic benefits. Planning would also include the prioritisation of adaptation measures.

      For adaptation, climate change impact studies are needed. A problem in impact studies for the shorter term is that there can be quite a difference between the historical climate data and the free (uninitialised) climate model run used for future projections, if only because of the chaos mentioned. It is not obvious how such an artificial jump should be handled; decadal prediction runs, which do not have this problem, would thus be an elegant way to avoid such jumps.

      I do not think that it is a big consideration, but yes, the current slowdown in the global surface temperature rise did make studying natural variability more interesting. Framing this as a near-term predictability issue may make it easier to find funding for this type of research. That the mitigation sceptics find the slowdown interesting is not a criterion for funding agencies; a research proposal should be grounded in science.

      ATTP: On the other hand, they are the same basic models as are used for weather prediction, so it's not as if using them for actual predictions is unprecedented.

      For weather prediction, the predictability comes from the state of the atmosphere and the current (fixed) sea surface temperature. In seasonal and decadal prediction, the predictability comes from the full ocean, cryosphere (ice), vegetation and hydrosphere (soil moisture). In that respect, the problem is quite different: methods to initialise the other systems are necessary, and we need to understand which of these parts of the climate system is important for (which parts of) decadal prediction.

      Volcanoes cannot be predicted, but there are people working on estimating whether, after an eruption, we would need to rerun the decadal prediction, or whether the eruption was too small and a new prediction is not necessary.

    6. I can never figure out what Pekka is talking about. I am assuming he is referring to the GCMs, which are these huge computational beasts that are thousands of lines of code, populated with more adjustable parameters than you can shake a stick at.

      He obviously can't be referring to my model, which is a simplification of a GCM applying first-order hydrodynamics.

      Plus Pekka does not even seem to understand what auto-correlation means. If I force an electrical circuit with a current and the output resonates as a sine wave, of course that signal is auto-correlated. You don't want to remove that auto-correlation -- that is what you are interested in understanding!


    7. WHT,

      Your models are a prime example of what I cannot take nearly as seriously as your statements about them seem to imply. You show fits that look superficially quite good, but I have no way of deciding whether that supports your claims at all.

      The paper of Dickey, Marcus and de Viron has this problem as well. It's impossible to say how much evidence an agreement as good as their Fig. 1 and their other results really conveys. They present significance levels from Student's t-test, but it's impossible to satisfy all the requirements of a fully proper statistical test. For a test to be formally valid, nothing in the data used in the statistical test is allowed to have influenced the model or hypothesis being tested. It's really common to think that insisting on this is nitpicking, but it isn't. It's really amazing how many totally spurious results can be shown to be statistically highly significant when these principles are not followed. As I wrote in my previous comment, we have seen really many examples of that in connection with climate too; the most certainly spurious surely come from some "skeptics", but the problem is a serious difficulty for proper climate science as well.

      I agree that the signal that your analyses and many other analyses try to extract contributes to autocorrelations, but trying to extract the signal from the noise requires making assumptions about the noise model, including the autocorrelations of the noise. In applications where the amount of data is not an issue (like much of what's done with electrical circuits), more refined methods can be used; but when we are discussing decadal or even multidecadal variability in data that spans less than 200 years, with the most accurate data spanning much less, very little can be extracted without making rather strong explicit or implicit assumptions. Finding a good fit under such conditions often proves almost nothing.

    8. Pekka, do you even follow what anyone is trying to say, or are you one of those typical internet commenters who think the speed of your response is equivalent to quality?

      You said "In applications, where the amount of data is not an issue (like in much that's done with electrical circuits) more refined methods can be used, but when we are discussing decadal or even multidecadal variability in data that spans less than 200 years, and the most accurate data much less, then very little can be extracted without making rather strong explicit or implicit assumptions. Finding a good fit under such conditions proves in many cases almost nothing."

      There obviously is some noise in an ENSO measure such as the SOI, but the fact that they refer to it as an oscillation indicates that it is composed of a real signal, not noise. Oscillations come about as the result of a resonance condition. Even in the extreme of approaching randomness, a red-noise signal is best modeled as a random walker that is weakly oscillating within a potential well (see the sketch below).

      So until you come up with some valid arguments to support your claim that ENSO is more noise than signal, you really aren't providing any value. Everyone seems to realize that ENSO is definitely more deterministic than stochastic, and modeling it that way is the obvious way forward.
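      That picture is easy to write down as an Ornstein-Uhlenbeck-type process (the restoring constant and noise amplitude here are arbitrary, purely for illustration):

      import numpy as np

      rng = np.random.default_rng(1)
      dt, n, k = 0.1, 5000, 0.05     # k is a weak restoring force (arbitrary)
      x = np.zeros(n)
      for i in range(1, n):
          # Random-walk steps plus a weak pull back toward the well centre.
          x[i] = x[i - 1] - k * x[i - 1] * dt + np.sqrt(dt) * rng.standard_normal()
      # Short stretches look like a random walk; the restoring term keeps the
      # series bounded, giving a red spectrum with a weak oscillation.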

    9. "Pekka,
      Could what you say above be a reason why modelers sometimes write that they feel that their models are more reliable than observations?"


      They are better observed. And you can dig into them and find out why things are happening. Otherwise, for ships or climate, you wouldn't bother with them.

      They won't reliably follow the actual trajectory of Earth's climate. But for the future, you don't have observations either. GCMs won't predict weather, but the understanding they bring is a big help.

  3. WHT,

    Where have I claimed that ENSO is more noise than signal? I do believe that ENSO is basically a quasiperiodic oscillation; that seems obvious enough.

    I have no complaint about doing analyses of the kind you have done. It's quite possible that some insight is obtained through that, but such analyses are exploratory and must be supported by further and much stronger analysis before real conclusions can be drawn.

    What I do not agree with are the blanket statements you have often made. You have said that ENSO is highly predictable. You have also made statements claiming that there's very little noise in the time series. These statements deviate strongly from what climate scientists who have studied the same phenomena for a long time have made (and I'm not referring to any of the "skeptics" among scientists). When you claim that you have reached results that are far stronger than any scientist has reached, you must justify that well. A manuscript like the one you have linked to recently is not nearly at the required level, in my view. (If PRL refers to Physical Review Letters, I will be really surprised if the paper is accepted.)

  4. Pekka, you should look up the word pedantic.

    Yes, I submitted to Physical Review Letters. If it doesn't get accepted, no skin off my nose. The point I am trying to make is that very constrained forcings and characteristics are required to fit the seemingly erratic SOI time series. If you don't use the precise characteristic frequency that Allan Clarke established for the hydrodynamic response [1], the exact frequency for the QBO, the exact frequency for the Chandler wobble, and the TSI anomaly for forcing, then you won't be able to fit a model to the SOI.

    All these factors have previously been asserted as important in establishing the Southern Oscillation, but apparently no one ever tried solving the rather simple equation for the response. I am simply filling in the big hole that other scientists have left for us as they went ahead and created ever more elaborate GCMs.

    So if the paper doesn't get accepted, somebody else will eventually experiment with these factors and find the same result. It really is no different than if someone gave you R, L, and C values for a resonant electrical circuit and an input waveform, and then you plugged in the values and came up with an agreement with the experimental measurements (a sketch of this follows below). In that case, it is not publishable material, but you would get an A grade for your work.

    [1] Clarke, Allan J, Stephen Van Gorder, and Giuseppe Colantuono. “Wind Stress Curl and ENSO Discharge/recharge in the Equatorial Pacific.” Journal of Physical Oceanography 37, no. 4 (2007): 1077–91.
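    To make the analogy concrete, here is a sketch with invented R, L, C values and an invented input waveform (not numbers from any paper): given the components and the input, the response follows from the circuit equation alone, with nothing tuned.

    import numpy as np

    R, L, C = 1.0, 1.0, 0.04         # illustrative component values
    w_res = 1.0 / np.sqrt(L * C)     # resonant frequency, 5 rad/s here

    dt = 0.001
    t = np.arange(0.0, 60.0, dt)
    v_in = np.sin(w_res * t) + 0.4 * np.sin(0.3 * w_res * t)  # input waveform

    q, i = 0.0, 0.0                  # capacitor charge and loop current
    v_out = np.empty_like(t)
    for k in range(len(t)):
        # Series RLC: L di/dt + R i + q/C = v_in
        i += (v_in[k] - R * i - q / C) / L * dt
        q += i * dt
        v_out[k] = q / C             # response read across the capacitor
    # Near resonance the matching input component is amplified; the "fit" to a
    # measured output is then fixed entirely by R, L and C.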

  5. WHT,

    One more example. In an earlier thread here I wrote that the interaction between the atmosphere and the oceans is not one-way. You responded, writing:

    >> You don't like to face the evidence, eh? Calling on first-order physics such as a wave equation driven by the exact QBO time-series as a forcing is incontrovertible as an approach.

    I have since looked a little at the literature, finding nothing on the influence of the QBO on ENSO (I don't claim that no such papers exist, only that I didn't find one). What I did find is a recent paper that discusses how ENSO affects the QBO.

    http://onlinelibrary.wiley.com/doi/10.1002/qj.2247/abstract

    I also found other papers that discussed previously observed clear correlations between the QBO and some other phenomena, correlations which disappeared in later analyses.

    The paper of Clarke et al. looks at the mechanisms of ENSO and draws conclusions at the level their research can support, referring to approximately 2π/(4.25 yr) as a reasonable interannual frequency, and stating also that "The above idealized model errs in several respects". They present justification for their approach, but consider many details only illustrative. A very different style from what I have seen from you.

    Replies
    1. Nothing on the influence of the QBO on ENSO? The renowned hurricane authority (and climate change denier) William Gray was one of the first to spot the relationship:
      http://scholar.google.com/scholar?q="william+gray"+ENSO+QBO

      Since then there have been many other articles mentioning the QBO effect on ENSO. The forcing is always described as a wind shear as the stratospheric QBO makes its way down to the ocean surface. The reason that researchers haven't been able to find a direct correlation, in the sense of matching time series one-for-one, is that it is the response that matters, and that may not be in sync with the input forcing. It obviously passes through the ocean's transfer function before emerging as a response. That is the key. The researchers who study sloshing response functions seem to understand this.

      The causation is clearly not that ENSO impacts the QBO, except in a trivial way. The QBO is very periodic compared to ENSO, and the causality arrow will never produce a periodic time series from a quasi-periodic time series. That is an absurd notion.

  6. WHT,

    I don't feel that we should continue this argument here. My main point is rather general, and should be understandable from what I have written above. We continue to disagree as much as before. If you wish to argue about these things more, you may contact me by email (finding my email address should be easy).

    Replies
    1. Interesting how Pekka can pick and choose. As a tastemaker, he can deem what is most appropriate.

      He said:
      "Excluding complexity, when it’s essential leads to an erroneous model, but including complexity, when it’s not possible to do it correctly, is likely to make the model worse.
      Most robust and reliable results are typically obtained, when the model is the simplest one that agrees with the known relevant facts essentially as well as the more complex ones do, and that’s built to avoid overfitting."


      See the way this works? Kind of like the story of Goldilocks: not too hot, not too cold. The model has to be just right, according to Pekka's subjective evaluation.

  7. The mark of a good model is that it can expose other characteristics. In this case the sloshing model clearly reveals the regime shift that occurred around 1980.

    I didn't even mention that a machine-learning algorithm that I applied to the data essentially "discovered" the same formulation I independently derived from the wave equation.

    The point is that there are many modes of analysis that haven't yet been applied by climate scientists. Consider the sea-surface height (SSH) data taken from tidal records, which also shows excellent correlation with the SOI. That is an analysis that has barely been tapped.

  8. WHT,
    Without going again into my points of disagreement, which remain, I can add that I agree at some level with two central features of your model:
    - ENSO is very strongly correlated with many other observed measures
    - ENSO has some similarity with sloshing.

    These features are surely present in all circulation models that have been used to study ENSO. It's also likely that most, if not all, models that describe the history even roughly see the same break, when it's clear enough.

    Replies
    1. What the sloshing model is doing is taking the salient characteristics of the GCM, in the form of the shallow-water wave equation, and bringing those to the surface (so to speak) as a first-order formulation. One can see a hint of this in Allan Clarke's paper. One thing I learned in all the physics courses I have taken is that there is almost always an approximation one can make to simplify the complexities of a phenomenon. The breakthrough here comes via the sloshing approximations that recent research has revealed [1][2][3]. The last paper shows the generality of the approach to various configurations and how the root wave equation manifests itself.

      As the final step, the key is to intuit the correct forcing functions that can then guide the boundary conditions of the system. That is then a simplified model of a GCM, demonstrating how this limited region of the world -- the Southern Oscillation of the equatorial Pacific ocean -- can have far-reaching consequences for the climate. In other words, everything else is second-order. (A sketch of such a reduced equation follows the references.)


      [1] J. B. Frandsen, “Sloshing motions in excited tanks,” Journal of Computational Physics, vol. 196, no. 1, pp. 53–87, 2004.
      [2] O. M. Faltinsen and A. N. Timokha, Sloshing. Cambridge University Press, 2009.
      [3] F. Dubois and D. Stoliaroff, “Coupling Linear Sloshing with Six Degrees of Freedom Rigid Body Dynamics,” arXiv preprint arXiv:1407.1829, 2014.
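      As a sketch of what such a reduced equation can look like - a single wave-equation mode with a Mathieu-type periodically modulated restoring term, in the spirit of [1]-[3]; every number here is illustrative, not a fitted value:

      import numpy as np

      dt = 0.01
      t = np.arange(0.0, 100.0, dt)
      w0 = 2 * np.pi / 4.25            # characteristic frequency, ~4.25 yr
      eps, wf = 0.2, 2 * np.pi / 2.33  # modulation depth, QBO-like frequency
      gamma = 0.01                     # light damping (illustrative)

      x, v = 1.0, 0.0
      out = np.empty_like(t)
      for i in range(len(t)):
          # Wave-equation mode with a periodically modulated restoring term.
          a = -(w0**2) * (1.0 + eps * np.cos(wf * t[i])) * x - gamma * v
          v += a * dt
          x += v * dt
          out[i] = x
      # The trace is quasi-periodic and ENSO-like even with no stochastic input.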

    2. WHT,

      I have not criticized your calculations, only the way you have presented your conclusions.

      One interesting recent review is by Allan Clarke. For some reason the 2007 Clarke et al. paper gets little weight in this review. Unfortunately, the review paper is not openly accessible, as far as I can see.

    3. "...only the way you have presented your conclusions."

      Not really my concern in a blog commenting section. WUWT, anyone?

  9. Pekka confuses "sufficient for the purpose" with "statistically proven". WHT, if he submitted to PRL, picked the wrong journal. Excellent post, Nick - up to your best.
