Sunday, November 19, 2017

Pat Frank and Error Propagation in GCMs

This post reviews the recent controversy over some strange theories of Pat Frank. It covers his WUWT posts, the blog discussion, some obvious errors, and, more substantively, how error propagation really works in the numerical solution of partial differential equations (pde's, which is what GCM climate models solve), why it is important, and why it is well understood.


The story

There has been a long-running soap opera about attempts by Pat Frank to publish a paper which claims that climate modellers are ignoring something he calls "propagation of errors", which he claims would yield extraordinarily large error bars and invalidate GCM results. These attempts have been unsuccessful, and he has become increasingly strident about it. He says the six journals and many referees that have rejected his papers, often with scathing reviews, are motivated only by conflict of interest, desire to preserve their funding, etc. Publication of Pat Frank's papers would bring that all down. The most recent outburst at WUWT was here, where a seventh journal promptly rejected it. James Annan and Julie H were involved.

What he means by "propagation of errors" is just the process of forming an expression combining variables with associated error, and expressing this in terms of a total derivative, with the error magnitudes then combining in quadrature (they are assumed independent). That is certainly not something scientists are ignorant of, but it doesn't express what is happening here. There is a lot more involved in differential equation solution.
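As a concrete sketch of that quadrature rule (my own toy example, not Frank's code): for a function of independent uncertain inputs, the first-order propagated error combines the partial-derivative terms in quadrature.

```python
import math

def propagate_quadrature(partials, sigmas):
    """First-order error propagation for independent inputs:
    sigma_f = sqrt( sum( (df/dx_i * sigma_i)**2 ) )."""
    return math.sqrt(sum((p * s) ** 2 for p, s in zip(partials, sigmas)))

# f = x + y with sigma_x = 3, sigma_y = 4; both partials are 1,
# so sigma_f = sqrt(3**2 + 4**2) = 5, not 3 + 4 = 7.
print(propagate_quadrature([1.0, 1.0], [3.0, 4.0]))  # 5.0
```

That formula is fine as far as it goes; the issue is that it doesn't describe what happens inside a differential equation solution.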


Far too much time has been taken up with this nuttiness. First of course by the journals, who have to deal not only with normal reviewing, but with a stream of corrosive correspondence. ATTP has had, I think, several threads, the latest here. James Annan expands on his cameo here. And Patrick Brown even put together a video. Further back, similar stuff made the rounds of skeptic sites, eg here. The more mathematical folks there thought it was pretty nutty too.

As reference materials, Pat Frank posted his paper and also a listing of the journal reviews and correspondence here at WUWT. His link to the paper is here. It's not a short paper - 13.4Mb. His link to the zipfile of reviews (44 Mb) is here.

ad Absurdum

As you might expect, I have been joining in at WUWT too. My view, which I'll expand on below, is that propagation of error in numerical partial differential equations is very important, necessarily gets much attention, and is nothing like what he describes. But when folks like Eric Worrall at WUWT are solemnly chiding me for not understanding that propagation is a random walk, I can see expounding the correct theory there would be a waste of time. So I instead try reductio ad absurdum on a few rather obvious points. But the audience at WUWT is not sensitive to absurdum.

In the first recent post I picked up on a criticism of a reviewer that Pat Frank had insisted on a change of units when he averaged quantities over 20 years. He said that the units became quantity/year. Others had raised this, so in the text he had doubled down. Surely this qualifies as absurd?

I again tried a somewhat peripheral approach, saying, why years? Why not regard it as a period of 240 months, and say quantity/month. He said, well the quantity would change. 4 W/m2/year would become 1/3 W/m2/month. But the problem there is that he was using data from someone else's paper (Lauer and Hamilton, 2013) who simply said the average was 4 W/m2. No time interval specified.

It actually relates to the objection many referees raised, including James Annan. If you're going to say the error adds like a random walk, what is the time interval? Why per year? The length of the interval between steps obviously affects the result. He gave some strange replies. Here he expounds the difference between a "measurement average" and a "statistical average":

Several measurements of the height of one person: meters. A measurement average.
Average height of people in a room: meters per person. A statistical average.
Middle school math, and a numerical methods PhD [moi] stumbles over it.

So, what if you sample in Europe to see if Dutchmen are taller than Greeks? You get 1.8 m/Dutch and 1.7 m/Greek. Can you compare them if they are in different units? (h/t JA)
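The time-interval objection is easy to make quantitative. Here is a sketch (my own reconstruction of the accumulation scheme, using the numbers from above): if a ±4 W/m2 per-year error accumulates in quadrature over 20 years, the total differs from the "same" ±1/3 W/m2 per-month error accumulated over 240 months.

```python
import math

def accumulated_error(per_step_error, n_steps):
    """Quadrature (random-walk) accumulation: total = per-step error * sqrt(N)."""
    return per_step_error * math.sqrt(n_steps)

# The same 20-year period, under two arbitrary choices of step:
per_year = accumulated_error(4.0, 20)         # ~17.9 W/m2
per_month = accumulated_error(4.0 / 12, 240)  # ~5.16 W/m2
print(per_year, per_month)
```

The answer depends on the arbitrary choice of step length, which is exactly the referees' point: a physical uncertainty cannot depend on whether you choose to count in years or months.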

On the second thread, I picked up on a claim from PF's review of JA's review:

He wrote, “… ~4W/m^2 error in cloud forcing…” except it is ±4 W/m^2 not Dr. Annan’s positive sign +4 W/m^2. Apparently for Dr. Annan, ± = +.
How does it happen that a PhD in mathematics does not understand rms (root-mean-square) and cannot distinguish a “±” from a “+”?

This was a less central point, but it throws down the gauntlet. Either JA or PF doesn't have a clue. So I thought, surely there are some there who are familiar with RMS, or the particular case of standard deviation? Surely they know they are always written as positive numbers, with good reason. But to reinforce it, I challenged them to find somewhere, anywhere, where an RMS quantity was written with a ±. But no-one could. PF did come up with a number of locations, which wrote the number as positive, but used it in an expression like a±σ. I couldn't get across the distinction. Yes, 3±2 is an interval, but 2 is a positive number. It isn't JA who is clueless.

The depressing thing to me was that there were people there who seemed to have some acquaintance with statistics or math, and would never before have thought of adding a ± to express a RMS. But now it was all so obvious to them that you should. The same conversion happened with the average units.

How error really propagates in solving differential equations

It's actually a very important practical issue for anyone who solves large systems of differential equations (de's), as I do. Because you always have error, if only from rounding. And if it can grow without limit, the solution is unstable.

A system of partial de's, as with GCMs, can be first discretised in space, to give a large but finite set of nodal values (and perhaps derivatives, or other quantities) with relations between them. There is a simplification that usually only nearby locations are related, which means the resulting set of ordinary de's (in time) are sparse. If the quantities are A (big vector) and the drivers F, the de could be written linearised as
dA/dt = -S*A + F
where S is the big sparse matrix. The time step will need to be taken short enough so that S varies slowly between steps, and it is enough to take it as constant. It can be expressed in terms of eigenvectors. So the driver, which could include an error term, is then also expressed in terms of the eigenvectors, and you can think of the equation in each direction as just being a scalar equation
da/dt = -s*a + f
The solution of this is a(t) = exp(-s*t) ∫ exp(s*τ) f(τ) dτ
where the range of integration is from an arbitrary t0 to t.
As you go backward in time from t, the multiplier of f dies away exponentially, if s>0. So a "propagates" f to only a limited extent. It's actually an exponentially smoothed version of f. S may not be symmetric, so the eigenvalues could be complex, and the criterion then is real part Re(s)>0.

The same thing persists after you discretise in time, so integration is replaced by summation in discrete steps. This is actually where care is needed to ensure that a stable de converts to a stable recurrence. But that can be done.
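A minimal sketch of that care (my own toy numbers, not any GCM scheme): for da/dt = -s*a, forward Euler gives the recurrence a_{n+1} = (1 - s*dt)*a_n, which decays only if |1 - s*dt| < 1, while backward Euler gives a_{n+1} = a_n/(1 + s*dt), which decays for any dt > 0.

```python
def forward_euler_factor(s, dt):
    # Per-step amplification of the explicit recurrence for da/dt = -s*a
    return 1.0 - s * dt

def backward_euler_factor(s, dt):
    # Per-step amplification of the implicit recurrence
    return 1.0 / (1.0 + s * dt)

s = 10.0  # a fast decay rate (hypothetical)
for dt in (0.05, 0.3):
    print(dt, forward_euler_factor(s, dt), backward_euler_factor(s, dt))
# With dt = 0.3, |1 - 10*0.3| = 2 > 1: the explicit recurrence grows,
# even though the underlying de decays. Backward Euler still damps (0.25).
```

So a stable de only converts to a stable recurrence if the time discretisation respects the decay rates; that is where the analysis effort goes.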

So to prevent instability, it must certainly be true that the eigenvalues have Re(s) ≥ 0. The interesting cases are when Re(s) is small. I'll look at the possible spectrum.


In the recurrences that result from a GCM, there are millions of eigenvalues. The salvation is that most of them have relatively large real part, and correspond to rapidly decaying processes (typically, hours for GCM). They are all the local flows that could be generated on the element scale, which lose kinetic energy rapidly through shearing. That is why the solution process can make some sense, because one can focus on a much smaller eigenspace where decay is either absent or slow on the scale, perhaps decades, of the solution being sought. I'll look at three classes:
  • Waves. Here s has zero (or near) real part, but large imaginary part, so decay is slow and period is comparable to the time step. In classical CFD, these would be sound waves. They are important, because they create action at a distance, propagating the pressure field so the medium behaves like a not very compressible fluid (elasticity). In GCM's, they merge into gravity waves, because a pressure peak can move the air vertically. That actually reduces the propagation velocity. The waves do have some dissipation (eg surface shear) so the ideal s has small positive real part.

    These waves are important, because to the extent they aren't well resolved in time after time discretisation, s can have negative real part, and if it outweighs dissipation, the wave will grow and the solution fail. That is why GCMs have a maximum time step of about half an hour, which demands powerful computers.
  • Secular motion and slow decay. The secular part is the desired solution responding to the driver F; I've bundled in the slow decay processes because they are hard to separate. The usual way is to wind back, so that only the slowest remain from the initial perturbation. But some does remain, and that is why GCMs tend to have apparently similar solutions which may differ by a degree or two in temperature, say. There are two remedies for this
    • use of anomalies, which subtracts out the very slow decay, leaving the secular variation
    • Increased care with initial conditions. The smaller the component of these modes that is present initially, the smaller the later problem.
  • Conserved quantities - energy and mass (of each component). These totalled correspond to zero eigenvalues; they shouldn't change. But in this case small changes do indeed accumulate, and that is a problem. GCMs include energy and mass "fixers" (see CAM 3 for examples). The totals are monitored, discrepancies noted, and then corrected by adding in distributed amounts as required.

    I remember a post at WUWT where Willis Eschenbach discovered this and cried malpractice. But it is perfectly legitimate. GCMs are models which incorporate all the math that the solution should conform to, and conservation is one such. It is true that as an extra constraint it appears to make more equations than variables. But large systems of equations are always ill-conditioned in some way; there aren't as many effective equations as it seems. That is why the conservations were able to erode, and the fixers just restore that. "Fixing" creates errors in the first class of eigenvalues, because while care should be taken to do it without introducing perturbations in their space, this can't be done perfectly. But again the salvation is that such errors rapidly decay.
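A toy version of such a fixer (my own sketch; see the CAM 3 documentation for the real algorithms): monitor the total of a conserved quantity, and spread any drift back across the field.

```python
def apply_fixer(field, target_total):
    """Restore a conserved total by distributing the discrepancy
    uniformly over the field (toy global 'fixer')."""
    discrepancy = target_total - sum(field)
    per_cell = discrepancy / len(field)
    return [x + per_cell for x in field]

# A field whose total should be conserved at 100.0, after numerical drift:
field = [1.0 + 1e-6 * i for i in range(100)]   # small per-cell drift
fixed = apply_fixer(field, 100.0)
print(abs(sum(fixed) - 100.0) < 1e-9)  # True
```

Real fixers are weighted rather than uniform, but the principle is the same: the correction is tiny per cell, and any perturbation it introduces in the fast-decaying modes washes out.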


So what do we have with error propagation, and why is Pat Frank so wrong?
  • Inhomogeneous equation systems are driven by a mix of intended drivers and error.
  • The effect of each is generally subject to exponential decay. The resulting solution is not a growing function of past errors, but in effect, like an exponential smooth of them. This may increase by a steady factor, but will also attenuate noisy error.
  • Propagation is not something that is ignored, as Pat Frank claims, but is a central and much studied part of numerical pde.


  1. Thanks Nick.

    I did a Google search on ... error propagation uncertainty numerical model ...

    There is definitely much prior art, so much so, that I am but a peon, and I only have one lifetime to boot.

    I mentioned this over at James Annan's site (for a 1D random walk) ...

    y = a*sqrt(t) and y' = a/(2*sqrt(t)) (where y = standard deviation and y forms a normal distribution for N trials >> 1)

    Of course y' = infinity at t = 0 (largest propagation of error occurs for small t).

    1. Everett,
      I'm trying to get away from random walk ideas here - emphasising that differential equations characteristically have exponential decay, which has an exponential smoothing effect. Within that, noise is attenuated, and sqrt(t) comes into play with slow decay modes.
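The contrast is easy to see numerically (a sketch of mine, with arbitrary noise and smoothing parameters): feed the same white noise into a pure accumulator (random walk) and into an exponentially smoothed response; the first wanders off like sqrt(N), the second stays bounded.

```python
import random

random.seed(1)
N = 10000
alpha = 0.1  # smoothing rate, i.e. the decay per step (hypothetical)
walk = smooth = 0.0
walk_max = smooth_max = 0.0
for _ in range(N):
    e = random.gauss(0, 1)
    walk += e                       # random-walk accumulation of error
    smooth += alpha * (e - smooth)  # exponentially smoothed (decaying) response
    walk_max = max(walk_max, abs(walk))
    smooth_max = max(smooth_max, abs(smooth))
print(walk_max, smooth_max)  # walk reaches O(sqrt(N)); smooth stays O(1)
```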

    2. Nick,

      Some concrete examples would seem to be in order. I've read what you wrote, but most of it is really beyond my comprehension (or not, I've coded and run FE and FD models, there are always stability criteria (dt or dx/dy/dz) and I've run both types into unstable ranges).

      I'd be interested in the uncertainty structure of a solution to PDE's that show the basic shapes of known analytical with numerical solutions.

      I think, given the time, I'll look into this elsewhere (elementary (or not) texts and classic (or not) papers. If you have some references to share then please do so. Thanks.

  2. Nick, Your post here is good as far as it goes, but seems to me to focus on the trees rather than the forest.

    Error control of time dependent numerical simulations is completely dependent on the underlying system being well-posed. For any chaotic system, the initial value problem is ill-posed and so errors in the initial conditions increase exponentially in the pointwise sense. Further, as a consequence the adjoint of such a system diverges and so traditional numerical error control is not really possible. So why don’t these simulations “blow up?” The answer is that in many cases the attractor is bounded so the numerical solutions are also bounded. But what do these simulations mean? It’s not clear. Traditional ways of measuring “convergence” like cutting the time step or increasing grid size may yield confusing results and there are some very recent results showing lack of grid convergence for Large Eddy Simulations. I believe that in the past, no-one really looked at this very carefully.

    The problem here is that we need a theoretical breakthrough on ways to characterize the attractor(s) and their properties especially their stability and quantifying their attractiveness (mathematically speaking). There is some promising work by Wang at MIT on shadowing but that is at least 30 years off as a tool for real systems.

    It has been one of my great disappointments over the last 20 years as I began to investigate CFD codes and formulations and really compare results in detail, that the community is so unwilling to acknowledge these limitations and start the process of more risky fundamental research. I continue to see reams of “colorful fluid dynamics” often presented as if it is accurate and validated and can be used in ensuring public safety. Academics at top Universities have over the last 30 years set a terrible example and are among the worst. This is due to the new environment of entrepreneurial scholarship (an oxymoron). Academics are seeking a constant stream of soft money to constantly grow their salaries and their hordes of graduate students. It is easy to succumb to the seeming necessity of producing glowing and misleading presentations to outsiders often devoid of the qualifier that we really don’t know much about what these simulations mean.

    In the last couple of years, there was a NASA CFD 2030 vision effort that reluctantly embraced some limitations in order to make the case for more research. But it's always cloaked in very limited terms and in my view is shallow and not fully accurate.

    If I’ve missed something along these lines, I’d appreciate knowing about it.

    1. David,
      "trees rather than the forest"
      I'm looking at one particular tree, which is linear stability analysis. Essentially von Neumann analysis. I should have mentioned that at WUWT; they like von Neumann because he draws elephants. I think it's a long way from the tree that you see, which is the difficulty associated with phenomena like flow separation and trailing edges.

      "so errors in the initial conditions increase exponentially in the pointwise sense"
      They separate from each other exponentially. But when expressed in the eigenspaces of S, most of them decay (dissipate). That is why they are bounded, and why it pays to go way back in time, where you know less about the initial state, but most of the effects of your ignorance will wash away (provided that you create no explosions).

      I'm seeing more and more situations in de solution where you have a fast decaying process that you don't want to spend time resolving, because all you care about is that it does decay. Stiff equations. So you replace the true process with something that doesn't get it right, but goes to zero fast enough. Backward Euler is the simplest, but the idea is pervasive. We use it with explicit particle methods (SPH). You have to resolve sound waves, which is a pain for a low Mach number flow. So you reduce the speed of sound, and solve a problem with higher Mach number, but still low enough. You care about transmitting pressure, but don't mind it being a bit more sluggish. I now tend to think of turbulence modelling in the same way. You bring the speed of diffusing momentum closer to the scale of solutions you are interested in.

      So I'm not nearly as gloomy as you. I see all sorts of flows that we deal with, and GCMs are similar, where there just aren't hard problems like flow separation to deal with, and you can progress fairly smoothly to deal with the (weather) time scale that you are interested in.

    2. Nick, This is just hand waving though, isn't it? You mention Rossby waves which involve small pressure gradients. But they are only a small part of climate. Tropical convection is much less well modeled. One would think that if Rossby waves are "right" regional climate would be better modeled.

      And that's the heart of my point, we have here a lot of qualitative "colorful fluid dynamics" and heuristic arguments about eigenvalues. We don't really have much to back it up. What would you say if the simulations didn't converge as the grid is refined?

    3. "Tropical convection is much less well modeled"
      Does that refer to the missing (weak) tropospheric hotspot?

      (I believe that there are other explanations, i e the Pacific trade winds have been stronger and tropical SST cooler than in the models)

    4. Well, that's one issue. Isaac Held has a nice post on this from a couple of years back showing some restricted-domain convection modeling. There is a strong dependency on the size of the domain. So there is computational evidence for what is really settled science, viz., this is a classical ill-posed problem. It is a testament to the denial in the "colorful fluid dynamics" ether that it is necessary to rehash such classical results.

    5. David,
      "You mention Rossby waves which involve small pressure gradients."
      I'm not talking about Rossby waves. Just ordinary atmospheric gravity waves, which normally don't get much attention because they don't do much, unless there is a breaking phenomenon or some such. In an acoustic wave, energy oscillates between elastic, when a converging flow increases pressure, and kinetic, when the resulting pressure gradient accelerates fluid. For long wavelengths (order horiz grid length) in the atmosphere, converging horizontal flow also creates potential energy as the air rises. The waves may not have much climate effect but they are important computationally, because they have to be resolved adequately in time, else they will not conserve energy and may grow. The need to resolve them bounds the maximum timestep (Courant). Because the air can respond to converging flow by rising as well as compressing, it isn't as stiff, so the wavespeed is less than acoustic.
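The Courant bound can be sketched in one line (my own rough illustration, with hypothetical GCM-like numbers, not any model's actual settings): for a wave of speed c on horizontal grid spacing dx, an explicit scheme needs dt ≤ dx/c.

```python
def courant_dt(dx, wave_speed, cfl=1.0):
    """Maximum stable explicit time step under the Courant condition:
    dt <= CFL * dx / c."""
    return cfl * dx / wave_speed

dx = 100e3  # 100 km horizontal grid spacing (hypothetical)
c = 280.0   # m/s, roughly sqrt(g*H) for an external gravity wave
print(courant_dt(dx, c) / 60.0)  # maximum step in minutes, ~6
```

Semi-implicit treatment of the wave terms is what lets real models stretch the step beyond this raw explicit limit.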

    6. For a behavior like ENSO, there is absolutely no dependence on initial conditions. The behavior is completely forced by lunisolar cycles applied to Laplace's tidal equations.

      "this is a classical ill-posed problem."

      No it's not. So much for David Young's continual harping on the difficulty of modeling climate because the math is too difficult. I have often noticed over the years that people will use the "ill-posed problem" canard when they assume since they themselves can't solve a particular problem, then nobody can.

  3. DY -- the problems that scientific modelers encounter have nothing to do with any of the issues that you bring up. Go back to your drawing board.

  4. The argument I like the best is your "but the answer would be different if you used units other than years" as it doesn't require examining anything about the system other than the problem as formulated by Frank, but here is how I thought about this:

    The intuition is whether a baseline error compounds or stays constant. Here are 2 very simple physical examples of each:
    1) I want to estimate the height of a lego tower that is being constructed on a table. I know the height of a single brick, but I don't know how high the table is. It is obvious that the error of the table height stays constant regardless of how many bricks I add.
    2) I want to estimate the location of a car. I know how much the car speeds up or slows down, but I don't know its initial velocity. Here, the initial uncertainty (in m/s) is multiplied by the number of seconds at which I make my calculation in order to provide uncertainty in car location at that time.

    Pat Frank thinks he is living in the 2nd example. To me, it seems fairly clear that we are living in a world more like the 1st example. And the proof in the pudding is that we can construct different models, with different assumptions about table height, and yet they calculate very similar lego tower heights. If we were in world 2, then different models with different assumptions about car velocity would, by definition, get different answers about car location.

    Now, an initial error in W/m2 in long wave cloud forcing is a bit more complicated because for the model to approximate an earth-like climate while getting cloud forcing wrong, it must also be getting other parameters wrong to produce compensating errors, and some of these errors likely have an influence on climate sensitivity to forcing. So, unlike the table example, uncertainty in long-wave cloud forcing does not produce zero uncertainty in lego tower height... but, again, an obvious way to measure the impact of that uncertainty is to examine a group of models with forcings that span the uncertainty range. I'd argue that the uncertainty propagated as a result of long-wave cloud error and its accompanying compensating errors is almost certainly smaller than the range of model results - I have trouble imagining a scenario where it could be larger. It certainly doesn't blow up to a 40 degree error after 100 years.
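The two examples can be simulated directly (a sketch of this thought experiment, with made-up numbers):

```python
# Example 1: lego tower on a table of unknown height.
# The table-height error is a constant offset, independent of brick count.
brick_height = 0.01   # m per brick (hypothetical)
table_err = 0.05      # m, unknown table-height error
n_bricks = 1000
tower_err = table_err  # still 0.05 m after any number of bricks

# Example 2: car position with unknown initial velocity.
# The velocity error multiplies elapsed time.
v_err = 0.5             # m/s, unknown initial-velocity error
t = 100                 # seconds elapsed
position_err = v_err * t  # 50 m, growing linearly with t
print(tower_err, position_err)
```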


  5. Nick, I can't seem to get the reply button to work so I'll put this at the end. I'm not sure that gravity waves are an important issue here. I'm assuming they can be filtered out just as acoustic waves are.

    I'm more focused on how to really control numerical error when the adjoint diverges. Classical methods will fail and results can be very inconsistent. You can of course apply gross approximations like error per unit step for time step size control and use implicit methods that are LINEARLY stable, but linearly is the key word there.

    How do you propose to actually control error?

    1. DY said:

      "I'm more focused on how to really control numerical error when the adjoint diverges. Classical methods will fail and results can be very inconsistent. You can of course apply gross approximations like error per unit step for time step size control and use implicit methods that are LINEARLY stable, but linearly is the key word there.

      How do you propose to actually control error?"

      Why is this AGW denier so interested in working weather simulations, when the climate simulations are much more important?

      ENSO is one of the primary behaviors that controls natural variation in climate, yet can be straightforwardly modeled without having to resort to overtly complex models. All that is required is a near equivalent tidal forcing model, which is akin to working a conventional tidal analysis.

  6. Nick, Thanks for providing an account of this strange business with a far higher signal to noise ratio than can be found on WUWT. I did come across the post attacking Annan on one of my rare visits to WUWT, but lost interest over some weird argument as to why Annan should not be taken seriously as a scientist because he expressed an uncertainty as a magnitude without a preceding "+/-". Having worked in metrology myself I know it is standard to drop the "+/-" term when talking about uncertainty.

    As for the weird temperature/year units for averaging annual temperatures isn't the obvious refutation that taking an average involves summing a series of temperatures, units celsius, giving a result also with units of celsius, then dividing by the (dimensionless) number of years to give an average also with units of years?

    1. Bill
      "then dividing by the (dimensionless) number of years"
      Yes. Pat insists that instead you divide by years (or Greeks) and the dimensions change accordingly. That leads to ridiculous results; for example, standard deviation would have units m/sqrt(Greek).

  7. oops, the last word in my previous post should be "celsius", not "years"

  8. (Reading Frank's paper)

    It looks like Frank uses a linearized connection between temperature and forcing. E.g., this is equation #6, on page 15.

    T_i = T0 + SUM( alpha*F_i )

    where alpha is some constant I'm using to simplify his equation.

    Certainly if you have a linear iterative equation, and the input forcing at a timestep, F_i, is subject to a Gaussian variation at each timestep, then the end result will be an ever-expanding range of possibilities for the temperature. This is his equation #8, on page 31:

    T_i +/- (temp uncertainty) = T0 + alpha*(F_i +/- (forcing uncertainty) )

    Note that he dropped the integration here (seriously, wtf), but let's pretend he didn't.

    I don't think the point about changing units is a fundamental issue. You can resolve that. If integrating properly, a forcing over a given period of time becomes heat retained, then that heat retained becomes temperature via the heat capacity. All of this ends up being encapsulated in that constant. The forcing is already time-sensitive (in units of energy/area/second), so the size of the timestep is generally irrelevant for the ultimate result. You get an ever-expanding uncertainty of the temperature.

    *If* you use a linear-integrative model for the forcing-temperature relationship, then yes, an uncertainty in the forcing propagates through as he claims.

    ...So... the problem is really that the models don't use iterative linear relationships between the forcing and temperature.
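That claim can be checked numerically (my own sketch, with hypothetical alpha and sigma): iterating the linear sum with an independent Gaussian error in each F_i gives a temperature spread that grows like sqrt(N), the ever-expanding range described above. The growth is a property of the linear sum model, not of the de systems GCMs actually solve.

```python
import math, random

random.seed(0)
alpha, sigma_F, N, trials = 0.5, 1.0, 100, 2000  # all hypothetical
finals = []
for _ in range(trials):
    T = 0.0
    for _ in range(N):
        T += alpha * random.gauss(0, sigma_F)  # forcing error at each step
    finals.append(T)
mean = sum(finals) / trials
sd = math.sqrt(sum((x - mean) ** 2 for x in finals) / trials)
# Theory for a pure accumulator: spread = alpha * sigma_F * sqrt(N) = 5
print(sd, alpha * sigma_F * math.sqrt(N))
```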

    1. I wrote a response to this and published it here, but apparently it got swallowed up by the computer. (Yours or mine, I don't know).

      Reading the paper a bit more thoroughly, I have to recant my original statement. If you actually look at the equations, there's more wrong with the paper than what I originally said.

      His "time-stepping" equation #6 doesn't actually deal with any kind of real time; it assumes equilibrium at each step. When you reach a new forcing, you automatically reach the new equilibrium temperature for that. While, yes, there is an "integral" in this step, it's just integrating over past forcing changes to get the current instantaneous forcing value. It's not integrating over past time to get the heat retained.

      Then, in equation #8, he adds the concept of an uncertainty in the forcing. Which is still fine, but then he tries to integrate over that uncertainty *as a function of time*, as if this was-always-at-equilibrium model suddenly has a real time aspect to it.

      Basically, he's mixing two different kinds of models; one that's at equilibrium at every step, and one that's not. From that, he gets mixed up about whether or not you can integrate over the uncertainty with respect to time. In a real time-sensitive model you can; in his linear always-at-equilibrium model, you can't.

  9. Nick, I really think you are not being fully scientific in this post. Frank is surely wrong and unfortunately quite persistent. But the more fundamental question I asked that is raised by Frank's work remains unanswered. I've spent a lot of time and talked to a lot of very smart people about it and I believe there is no convincing answer given the limitations of our current knowledge and numerical methods.

    1. Google ... control numerical error when the adjoint diverges ...

      David Young portends the death of numerical methods and modelling? I think not.

      Enough with the Debbie Downer and Negative Nancy (and Pat Frank) POV already.

      How do you propose to actually define error?

      The AOGCM's/ESM's only have to be good enough, they were never meant to be perfect, setting the bar too high, because you think so, only means that you are fated to fail.

      David, your reply here is good as far as it goes, but seems to me to focus on the trees rather than the forest.

    2. Everett, I have already investigated the issue. There is the shadowing method of Wang from MIT, but it is computationally infeasible for at least a couple of decades. I do not portend the "death of numerical methods." That's an ignorant misrepresentation. There are very well developed numerical methods for well-posed problems that really work. Steady state CFD is very useful. However, for time accurate chaotic calculations, there is really nothing that gives much confidence. You are smart enough to do some work. Start with Wang's paper and then move on to the new LES paper on "alarming lack of grid convergence."

    3. These guys are a bit nutty when it comes to understanding the physics. These are not initial condition problems but boundary value (i.e. forced response) problems. Any accumulation of error is compensated by an over-riding forcing or driving stimulus that will get the response back on track.

      For modeling, the real issue is not a butterfly effect but a hawkmoth effect, whereby most of the effort involved is in determining the structural parameters and forcing that produce the observed result.

      Hawkmoth problems: ENSO, QBO

    4. Paul, Your comment is just a talking point. It is a totally unsupported assertion about the strength of the attractor. In the case of non-stationary systems such as the climate, such assertions are not really scientific. They are instances of "everytime I run the code, I get something reasonable." Every time I've investigated such claims, they prove to be false or due to selection bias. This is true even for steady state turbulent flows.

      I'm not going to spend time on your ENSO models until you do the work to validate them including a careful sensitivity study of all parameters. You should have no trouble publishing it in a good journal if you have done the work correctly. There are thousands of such theories out there. It is preferable to spend time on real turbulence models where the scientific basis (while not rigorous) is at least grounded in a hundred years of careful experiments for boundary layers.

    5. Not at all a talking point, as it has everything to do with applying realistic geophysics to solving a problem. Over the centuries, no one ever needed to apply the concept of an "attractor", when it was straightforward to apply Newton's laws of gravitation to figure out the dynamics of a behavior such as the ocean tides. And this has turned out to be a stationary system. The same idea applies to ENSO, as the longer period tides impact the thermocline. There are no parameters to adjust as tidal periods are fixed, and as Lindzen pointed out, periods that match tidal periods must be due to tidal forcing.

      Hundreds of years of applying the lunar gravitational forcing to tidal analysis, yet Lindzen never thought to apply it to ENSO. And of course, a turbulence modeler stuck in the trees isn't going to see the forest either.

    6. Well Paul, I would encourage you to publish it somewhere so we don't have to take just your word for it. It is odd you say that no one needs the concept of an attractor, yet chaotic systems all have at least one strange attractor and it's the basis for them not "blowing up." In any case, you are talking about something that is off topic in this thread. Nick was talking about numerical error in time accurate chaotic systems.

    7. It is going to be published. Am presenting at the AGU next month in NOLA, and the details will be in a book titled Mathematical GeoEnergy to be published by John Wiley late next year.

      I guess you have never done any geophysics models, as the topic I brought up is completely relevant to this thread. I am integrating Laplace's tidal equations with a lunisolar forcing to fit the ENSO behavior. Numerical error would of course be a concern if I were using a brain-dead chaotic model such as a Lorenz formulation, but I'm not. The perturbations from a linear system are mainly in the application of a slight Mathieu modulation and of seasonal delay differentials. Like I said, the numerical error does not accumulate, since the strong forcing will constantly compensate any natural response that is completely dependent on initial conditions. A good example of this is in a conventional tidal prediction -- a tsunami could come along and wreak havoc via a natural response for a few days, but after that the forced response would continue as if the tsunami never happened. There is no numerical error accumulation here.

      So it's your own problem if you think "numerical error in time accurate chaotic systems" is an issue, because it is an issue in your own mind and not in the important class of problems in climate variability that are controlled by ENSO and other oscillating dipoles.
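      The tsunami analogy above is essentially the behavior of any damped, driven linear system: the free response set by the initial state decays away, while the forced response persists. A minimal sketch of that point follows; the oscillator, its parameter values, and the function name are invented for illustration and are not anyone's actual ENSO model.

```python
import numpy as np

def driven_oscillator(x0, v0, t_end=200.0, dt=0.01):
    """Damped driven oscillator x'' + c x' + k x = F cos(w t),
    integrated with semi-implicit Euler; returns the final displacement."""
    c, k, F, w = 0.5, 1.0, 1.0, 0.8   # illustrative parameter choices
    x, v, t = x0, v0, 0.0
    for _ in range(int(round(t_end / dt))):
        v += dt * (F * np.cos(w * t) - c * v - k * x)
        x += dt * v
        t += dt
    return x

# Start from rest, and from a violently different state (the "tsunami"):
xa = driven_oscillator(0.0, 0.0)
xb = driven_oscillator(5.0, -3.0)
print(xa, xb)   # transients have long since decayed; both track the forcing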

    8. Part of the problem that you have DY is that you are focusing your concern on limitations of the math instead of solving the geophysical problems. Case in point is the physics of QBO. If you look at the reams of math that Lindzen published on the topic and then step back and try to make any sense of it, you can't; it's likely all gibberish. But the basic physics behind the cause of the QBO monopole is pretty simple. It's just the draconic tidal forcing coupling to the seasonal cycle which produces the oscillations. Fitting a short data interval to a tidal model will reproduce the rest of the time series better than all the math that Lindzen has produced.

      There is no compounded numerical error here and any discrepancies from the model that are observed, such as the recent anomaly in QBO, will eventually get corrected. Just like ENSO, it's a forced response system.

      So to put it simply, you and Lindzen and the rest of you AGW deniers are essentially trying to solve the wrong problem! Your entire premise is invalid.

    9. Web, great news on the publication. I guess it helps when you have a well defined forcing like the Tides, rather than needing to conjure up an unknown forcing that you then need to label Force Zeta, or suchlike.

    10. Thanks Bill.

      Yes, the WUWT'ers today seem to be infatuated with the unknown forces that lead to the 50% success rate of dowsing for water, LOL. I wonder if that is Force Zeta?

    11. Maybe the success rate for dowsing follows an 11 year oscillation? Definitely a force out there to rival the four fundamental forces of nature.

  10. Paul Pukite, This is going to be my last response because you continue to hurl insults. A talk at a conference is usually not peer reviewed in the same way that a journal paper is. Likewise, it's easy to get a book published or slip a paper into a compilation of conference proceedings. Your history of publication is all grey literature, i.e., stuff that most scientists would ignore. The problem here is that real scientists have far too much to do in investigating ideas that are peer reviewed and pass a threshold of believability to delve into grey literature. My experience is that 95% of grey literature is self-serving marketing and has negligible scientific value.

    The fact that you continue to push this on blogs without bothering to submit it to peer review shows something about you that is not very flattering.

    1. "Your history of publication is all grey literature, i.e., stuff that most scientists would ignore"

      The reality:
      Most climatologists are still using conventional methods to analyse their data—but that is changing. “If you go to the major modelling centres and ask them how they work, the answer won’t be machine learning,” says Collins. “But it will get there.”

      DY, How are your reactionary political leanings going to deal with this eventuality?

    2. Steady on, David. Blog science is where it's at nowadays, now that "conventional" science has been utterly discredited through Consensus Science, Pal Review and Data Torture. Look at all those brilliant exposés of these various nails in the coffin from that lady Jan Nivu, or is it Jen Nevo: oh dear, it's all too much for my poor old brain.

    3. Sarcasm aside, peer review is not perfect in that a lot of marginal papers get by. But it is at least a way to weed out obviously flawed ideas and methods. If I had a nickel for every strong willed person who thinks he has a "breakthrough" in some field, I'd be very wealthy.

    4. Yup, that JoNova blog is a perennial award-winner when it comes to science blogs. Holding these Bloggie competitions is a form of approval that is much better than peer-review LOL

    5. I have gotten some feedback from credentialled climate scientists and a couple have indicated that it would be at odds with the foundational theory. One guy said "If these are truly systematic, then the whole wave forcing theory would be wrong, along with all models."

      That's really an over-the-top claim to make, because all I am adding is a forcing stimulus to the conventional wave-equation models. Most scientists don't even bother to look into this stuff that carefully and thus would rather provide knee-jerk reactions, much like DY. They aren't getting paid to do it, so that's OK with me.

    6. David Young said:
      "Your history of publication is all grey literature, i.e., stuff that most scientists would ignore"

      First of all, it's laughable that you denier dudes can't understand the "tricks" that mathematicians and physicists use to simplify data analysis. Michael Mann is a condensed matter physicist by training and so knows many of the tricks of the trade. One of my own biggest citations in the non-"grey literature" that you seem to have missed is an application of a part of the Ewald trick, which Nick mentioned in a post last month. The trick is essentially that the convolution of two functions is the multiplication of their Fourier transforms in the frequency domain.

      This trick of mine is used in two on-line algorithms by the long-running X-Ray Diffuse Scattering server maintained by Stepanov at Argonne National Labs. Click on that link and you can see the citation to a highly-cited paper of mine from years ago.

      DY, Make no mistake, I do understand why you claim that I don't publish outside of the "grey literature", because that's the way that Trump and Fox News fake science works nowadays. You plant lies and keep repeating these, hoping they will persuade some gullible chump.
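      The convolution theorem invoked earlier in this comment is easy to check numerically. The sketch below has nothing to do with the X-ray scattering application itself; it just demonstrates the identity for circular convolution, with arbitrary random test data.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

# Direct circular convolution, O(N^2)
direct = np.array([sum(f[m] * g[(n - m) % 64] for m in range(64))
                   for n in range(64)])

# Convolution theorem: pointwise product in the frequency domain, O(N log N)
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.max(np.abs(direct - via_fft)))   # agreement to floating-point roundoff
```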

  11. Nick, I've just heard about the "rainbomb" hitting Melbourne. I hope you're OK

    1. Thanks, Bill,
      Fortunately, we're not in a likely flood area. So far, the rain has been mainly north of Melbourne. It's thunderstorm type rain, so can come down anywhere any time.

  12. Here's more ridiculous stuff from the frowning Dave Young

    His stridency is revealing.

  13. First you should define what you mean by error. Error in numerical simulations is defined as the difference between the value at a grid-cell point and the "true" value that would have been obtained if you solved the PDE perfectly, given the same initial conditions and boundary conditions.

    If error remains bounded and does not grow, this implies you can use GCMs to do prediction over the course of 1 year, and each grid cell should not diverge from the actual Earth's temperature over that grid cell, assuming you have the correct initial conditions. If you actually do this you will find it does not work; in fact it doesn't even come close to working with current GCMs. Which is why GCMs don't predict anything over 1 year. In fact the same GCM will diverge from itself given slightly different initial conditions.
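    The self-divergence under slightly different initial conditions is easy to reproduce with a toy chaotic system. This is the standard Lorenz-63 sketch, not a GCM; the step size, integration length, and perturbation size are chosen purely for illustration.

```python
import numpy as np

def step(p, dt=0.002):
    """One forward-Euler step of Lorenz-63 (sigma=10, rho=28, beta=8/3)."""
    x, y, z = p
    return np.array([x + dt * 10.0 * (y - x),
                     y + dt * (x * (28.0 - z) - y),
                     z + dt * (x * y - (8.0 / 3.0) * z)])

u = np.array([1.0, 1.0, 1.0])
v = u + np.array([0.0, 0.0, 1e-9])   # perturb z by one part in a billion
max_sep = 0.0
for _ in range(20000):               # integrate to t = 40
    u, v = step(u), step(v)
    max_sep = max(max_sep, float(np.linalg.norm(u - v)))
print(max_sep)   # the 1e-9 perturbation has grown to the size of the attractor
```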

    1. The way to do this is to refine the numerical computation until it matches the analytical closed-form solution to a Navier-Stokes formulation. Fortunately, there is a solution to N-S along the equator that appears infinitely chaotic yet has a closed-form analytical expression.

      If you can't get the numerical solution to converge to the closed-form expression then you are doing the integration incorrectly. That is a common test for integration schemes -- compare to a known solution.
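      Verifying an integrator against a known closed-form solution, as described, can be sketched on the simplest possible ODE. The decay equation and step sizes here are illustrative only, not anything from a GCM or the N-S solution mentioned above; the point is that the error should shrink at the scheme's theoretical rate as the step is refined.

```python
import numpy as np

def euler_decay(dt, t_end=1.0):
    """Forward Euler for y' = -y, y(0) = 1; the exact answer is exp(-t_end)."""
    y = 1.0
    for _ in range(int(round(t_end / dt))):
        y += dt * (-y)
    return y

exact = np.exp(-1.0)
e1 = abs(euler_decay(0.01) - exact)
e2 = abs(euler_decay(0.005) - exact)
print(e1 / e2)   # ~2: halving dt halves the error, as expected for a first-order scheme
```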

    2. "Error in numerical simulations is defined as the difference ..."
      Error of what? You talk about a difference in pointwise values, but these are never quoted as the outcome of a CFD or GCM simulation.

      "First you should define what you mean by error"
      Error is basically non-uniqueness. I often quantify it as the range of outcomes you might have got by redoing the procedure with other reasonable choices along the way. This applies to measurement or calculation.

      Choosing different initial conditions is an obvious candidate. And it makes a big difference to pointwise values. But it does not make a big difference to the results that are usually quoted; global average temperatures on a multi-decadal scale etc. That is just as well, because the initial conditions are not very well known.
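      The contrast between pointwise values and averaged quantities can be seen even in a toy chaotic system. The sketch below uses forward Euler on Lorenz-63 purely for illustration: endpoint states from perturbed initial conditions decorrelate completely, yet the long-time mean of z is nearly unchanged, which is the analogue of a robust "climate" statistic.

```python
def lorenz_mean_z(z0, dt=0.002, steps=100000):
    """Long-time average of z for Lorenz-63 (forward Euler; a sketch only)."""
    x, y, z = 1.0, 1.0, z0
    total = 0.0
    for _ in range(steps):
        x, y, z = (x + dt * 10.0 * (y - x),
                   y + dt * (x * (28.0 - z) - y),
                   z + dt * (x * y - (8.0 / 3.0) * z))
        total += z
    return total / steps

# Perturbed initial conditions: the instantaneous states decorrelate,
# but the averaged statistic barely moves.
m1 = lorenz_mean_z(1.0)
m2 = lorenz_mean_z(1.001)
print(m1, m2)   # both near the long-time mean of z on the attractor
```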

    3. It all boils down to this: there are so many ways to get it wrong and only one way to get it right.

      Yes, one real issue in comparing to real data is with respect to the initial conditions. It doesn't matter what the error propagation is if the initial conditions are wrong. And even if the initial conditions are right, there is something called the Hawkmoth effect, analogous to the Butterfly effect, in which sensitivity to the structure of the model quickly leads to divergence of the solution.

      So we have these ways to get it wrong:
      1. Error propagation due to imprecise numerical solution
      2. Divergence due to initial conditions, à la the Butterfly effect
      3. Divergence due to structural model uncertainty, à la the Hawkmoth effect
      4. Error due to over-fitting with respect to signal vs noise and #DOF
      5. Differences between two big numbers
      6. Counting uncertainty due to insufficient statistics

      All of these problems are reduced or eliminated if we can isolate a phenomenon that is the result of a forced response. There are nearly always echoes of the forced input in the forced response, and that gives one a guide to align the results.

      Just about every interval of processed climate data has some form of forced response removed, yet many do not even realize this. That is #5 in the case of the yearly and daily signals with respect to any measure.
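      Item 5 in the list above, differences between two big numbers, is the classic catastrophic-cancellation problem. A minimal sketch in single precision (the 288 K baseline and 0.01 K "anomaly" are invented values chosen only to show the mechanism):

```python
import numpy as np

# Two "big numbers" that agree in their leading digits, stored in
# single precision, as much gridded output is.
baseline = np.float32(288.15)            # e.g. an absolute temperature in K
signal = 0.01                            # the small difference we care about
perturbed = np.float32(288.15 + signal)

anomaly = float(perturbed - baseline)    # the leading digits cancel
print(anomaly)   # 0.010009765625: float32 roundoff of ~1e-5 leaks into a 1e-2 signal
```

The subtraction itself is exact; the damage was done when the two large values were rounded to single precision, leaving only a few significant digits for the difference.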

      As per #6, many might assume that the multiple solutions provided by GCMs are meant to reduce the counting-statistics noise. Yet that is not at all true when one considers that the main constituent of variability is due to ENSO, which is an enormous singleton response and not culled from an ensemble of individual responses.

      There is just so much to discuss on just this topic.

  14. Nick, Pat Frank got his first citation for this paper late last month. I read the context to it in that paper, and don't see how it jibes. But FYI, I am an MS, Oklahoma-registered petroleum engineer who has been using statistical methods for much of my career. I have some understanding and expertise, but not nearly that of you and these authors. I know you can find these papers easily, so will only link upon request. Can you explain the relevance of this citation to that newer paper? Feel free to comment at the level you ordinarily use in watts up, and I'll do my best to follow. Thanks.

    1. Bob,
      Sorry about the delay in moderation. I have moderation on only for old articles, so I don't check that often.

      I couldn't find the citation. For some reason, Google Scholar is malfunctioning for me; it gives a long list of papers, which obviously aren't in fact citing Pat Frank.

    2. Bob.
      I have found it now here. It's a technical chemistry paper, nothing to do with climate. It's in a dodgy journal, but seems like a genuine paper. The citation has no relevance - I imagine one of the authors is a mate of PF through chemistry.

  15. " The citation has no relevance"

    Thanks so much for taking the time. As an interested amateur, I also could not find any relevance, but wanted to hear from you. I follow you just for your bend-over-backwards fairness and (seems to me) technically correct, laconic responses. I search for your comments regularly, in watts up, andthentheresphysics, and TRY to follow your moyhu tech commentary. I get a kick out of how many watts up google references you get, where you haven't commented. Those google references come from other commenters who seem haunted by your past, poorly refuted, comments...

    1. Bob,
      Thanks, and sorry again for the further excursion through the spam folder.