Monday, October 31, 2016

Climate and the Lorenz attractor, 3D interactive model.

In my previous post, I described some recent blog discussion of chaos and climate models (GCMs), and gave my views on the relation to Navier-Stokes solution and CFD. People who write about chaos tend to focus on the trajectories and their sensitive dependence on the starting point, claiming this undermines GCMs. They should instead focus on the attractors, which do not depend on the starting point, and are the analogue of climate. And I contend that attractors have a manageable relation (see Appendix for math) to parameters that may vary - forcings for climate, or coefficients in a chaotic differential equation.

In this post, I'll focus on the Lorenz differential equations.
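For reference, here is a minimal Javascript sketch of the system's right-hand side (the equations are the standard Lorenz system; the function and variable names are mine, not the gadget's actual code):

```javascript
// The Lorenz equations:
//   dx/dt = σ(y - x),  dy/dt = x(ρ - z) - y,  dz/dt = xy - βz
// p = [x, y, z]; returns the derivative vector at p.
function lorenz(sigma, beta, rho, p) {
  return [
    sigma * (p[1] - p[0]),       // dx/dt
    p[0] * (rho - p[2]) - p[1],  // dy/dt
    p[0] * p[1] - beta * p[2]    // dz/dt
  ];
}
```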

These have, for various parameter values, chaotic solutions with interesting trajectory paths, often shown:

A trajectory for the standard Lorenz parameters σ=10, β=8/3, ρ=28; often displayed without mentioning that specific parameters are required. An attractor (due to Anders Sandberg, Oxford); parameters not specified, but apparently close to standard.

Below, this post provides an interactive Javascript display of the Lorenz system. You can choose parameters and start points. It is built on the WebGL of my standard Earth view, so you can rotate it with the mouse as if it were a trackball, and magnify or reduce by dragging vertically with the right button. There is also provision for viewing separate trajectories, and for running an animation of their evolution.

The general idea is that you can compare the effect of changing start points, with red/blue comparison trajectories, and also see the very great range of different attractors that result when the parameters are changed. However, the changes are continuous. What I'd like to get to eventually (future post) is a possible relation between an average of trajectories and the attractor. That would help in understanding how GCM runs can be averaged to get a climate evolution.

So here is the interactive WebGL gadget, starting with the standard parameter values:

The plot works as a trackball; you can drag a virtual enclosing sphere with the mouse left button down; a little set of unit-length colored axes (x-red, y-green, z-blue) in the centre shows what is happening. By dragging vertically with the right button, you can change the scale.

Data and controls are on the right. You can enter parameter values or initial (x,y,z) values. Click Go to show a new plot with values you have entered. By default up to two trajectories will show, one red, one blue. If you enter more, the last two will show. You can remove either using the red/blue X buttons.

The solver works by fourth-order Runge-Kutta from the supplied initial point; N, which you can enter, is the number of steps. Each trajectory starts out faintly colored, then becomes heavier toward the end.
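A minimal sketch of such a stepper (the classical RK4 scheme; my own function names, not the gadget's actual code):

```javascript
// One classical fourth-order Runge-Kutta step of size h.
// deriv(p) returns the derivative vector at the point p.
function rk4Step(deriv, p, h) {
  // componentwise a + s*b
  function add(a, b, s) { return a.map(function (ai, i) { return ai + s * b[i]; }); }
  var k1 = deriv(p);
  var k2 = deriv(add(p, k1, h / 2));
  var k3 = deriv(add(p, k2, h / 2));
  var k4 = deriv(add(p, k3, h));
  return p.map(function (pi, i) {
    return pi + (h / 6) * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]);
  });
}

// N steps from the initial point p0 gives the plotted trajectory.
function trajectory(deriv, p0, h, N) {
  var pts = [p0];
  for (var i = 0; i < N; i++) pts.push(rk4Step(deriv, pts[i], h));
  return pts;
}
```

For the Lorenz system, deriv would be the Lorenz right-hand side with the chosen σ, β, ρ.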

You can click the Orient button at any time to restore the y-axis to vertical. The bottom button with a ↻ cycles between x-y, y-z and z-x views.

You can start an animation by pressing the "Track" button. The trajectories on show will be traced out. The time interval per frame, in milliseconds, is in the text box dt; nf is the total number of frames for the trajectory.

So do experiment. I'd suggest starting by varying the initial point to compare trajectories, then varying the parameters gradually to change the attractor shape. Next in the series, if I can work out a way, I will show in magnification how trajectories cluster around an attractor. I'd like to explore whether the attractor does emerge from averaging trajectories. That would help with the question of averaging GCM ensemble results to derive climate.

As always, the Javascript for this gadget, which does all the calculation, is available by displaying the HTML (Ctrl-U) and following links. But most of it is here.

Update: There is some fairly simple maths that picks out a few points on the shapes. The centres of the spirals are where the derivatives are zero, and this comes out to a cubic in x:
y = x, z = x²/β, x(ρ - x²/β - 1) = 0
The solutions are x = y = z = 0 and z = ρ-1, x = ±√(βz), y = x.
For standard parameters, z = 27, x = y = ±√72 mark the centres. The other root, (0,0,0), is a fixed point but not an attractor point.
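A quick numerical check (a sketch, with names of my own choosing) confirms that the derivatives vanish at both non-zero roots:

```javascript
// Lorenz right-hand side at (x, y, z).
function lorenzRHS(sigma, beta, rho, x, y, z) {
  return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z];
}

// For σ=10, β=8/3, ρ=28: z = ρ-1 = 27, x = y = ±√(βz) = ±√72.
var beta = 8 / 3, z = 27;
[1, -1].forEach(function (sign) {
  var x = sign * Math.sqrt(beta * z);
  var d = lorenzRHS(10, beta, 28, x, x, z);
  // Each derivative should be zero, up to rounding error.
  console.log(d.every(function (v) { return Math.abs(v) < 1e-9; }));
});
```

Running this logs true for both roots.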

Update: Large values of N (100000, say) make pretty pictures and don't take too long to compute. But they can drain resources. Usually they will show initially, but dragging the picture may cause it to vanish, and animating may do similar. The problems seem to accumulate. In Javascript, you have to rely on the system garbage collection, which can be slow to clean up. If this starts happening, it's best to just refresh the page and start again.

Appendix - attractors

There has been much study of the nature of Lorenz attractors and their dependence on parameters. Guckenheimer and Williams (1979) created a geometric analogue for which the dependence could be shown. Steve Smale included proving the equivalence of this analogue in his list of interesting problems for the next century; it was solved with due promptness by Warwick Tucker in 2002. There is a more complete description here of the attractors and their behaviour in the various parameter regions.


  1. Another way of thinking about this issue is that while the trajectories are not closed, neither do they fill the phase space. Thus, to circle back, neither the weather nor the climate is random.

    1. This statement is significant. The recent ground-breaking paper by Astudillo et al demonstrated by applying Takens embedding theorem that all events in ENSO can be reconstructed from prior events dating back to 1880:

      Astudillo, H. F., R. Abarca-del-Río, and F. A. Borotto. "Long-term potential nonlinear predictability of El Niño–La Niña events." Climate Dynamics (2016): 1-11.

      This indicates that the seemingly chaotic ENSO could repeat at relatively short intervals. I would think not much more than the longest tidal repeat period.

      So much for the ENSO=chaos assertions of the AGW denier Anastasios Tsonis, whom Judith Curry has cheerleaded as evidence for her grand uncertainty monster.

  2. Nick, I think caution is required in extrapolating this simple system and its results to the Navier-Stokes equations. We don't know the dimension of the attractor, but the only proven bounds are very large. A complex attractor can make following it very difficult, and if it's not sufficiently attractive, impossible. It's an area of deep ignorance, I would say.

    1. Young has no idea what he is talking about, which means he is the one that is deeply ignorant about how climate varies, a la ENSO.

      Does Young's airplane simulation model take into consideration the idea of sloshing of fluids?

      Does he even understand what behaviors that these simple models are trying to mimic? Is it the ocean? Is it the winds in the stratosphere?

      In reality, it's probably the case that Young just doesn't care about an actual physical model. All he cares about is being a towel-boy for Judith Curry and so goes on these trolling expeditions where he points out the grand uncertainty of everything.

    2. David,
      My emphasis is on the role of attractors and trajectories. I agree that identifying a whole Navier-Stokes attractor is daunting, but it is basically unnecessary. People just want averaged properties, like lift and drag. My point is really that these are properties derived from the attractor, not of trajectories. Which they have to be, because in any real-world situation, trajectories are unknown - all that links reality to analysis, whether CFD or wind tunnel, is the attractor.

    3. WHUT,
      "Young has no idea what he is talking about"
      I do not like seeing commenters being shouted down. People do have views that should be respected, and not everyone needs to be as focussed on ENSO as you are. In fact, he does know about Navier-Stokes solution, and has a point about the complexity of attractors.

    4. Nick, My advice is don't try to normalize Internet trolls like Young. We got Trump because of that attitude.

      Oh well, bye. This Google Blogger comment format is horrible anyways. I went over to Wordpress long ago.

    5. And don't think that they will become your friends if you make everything fair and balanced. You give an inch and they will take a mile, all the while sniggering behind your back concerning your naivete. Don't believe me? This from this morning:

      "Mentioning the obvious (and widely understood) correlation between ‘young’ and ‘naive’ is exactly the kind of thing that provokes mindless PC charges of ‘ageism’, ‘micro aggression’, and the rest of the claptrap. When someone is born, they are 100% naive. Some decades later, maybe they are not at all naive, though there seem some who remain hopelessly naive their whole lives. To suggest there is no connection between youth and naiveté, or to deem mention of that connection ‘politically unacceptable’ isn’t just ridiculous, it is a willful rejection of reality…. part and parcel of the ‘PC’ culture which dominates ‘progressive’ politics.
      BTW, I would be surprised if Nick Stokes were not at least in his 60’s, and perhaps 70’s."

      That's from the other dickweed troll by the name of Steve Fitzpatrick.

    6. Yes, Nick, your point about the attractor is a good one and does at least give us a chance to do something computationally with these systems. My point is that we don't really understand much about the parameters and uncertainty of these calculations. I am certain, however, that the sources of error in time-accurate calculations are legion, particularly if the attractor is only slightly stable. It seems to me we need some breakthroughs in these areas.

      There is some new work that is still unpublished showing that wall bounded LES can show a disturbing lack of grid convergence. We simply don't know very much about these things yet.

    7. Nick,
      Tried to reply to WebHub's 'dickweed' comment... apparently to no avail. You need a better blog program.

      Steve Fitzpatrick

    8. SteveF,
      I think David Young worked this one out. Apparently you need to enable cookies to comment.

    9. Nick & Steve---I think you also need to have a registered account, and be logged in.

    10. Carrick,
      I think that helps, but the form does say it allows anonymous comments. I don't think David uses an account.

      BTW, it's hard for me to change from Google Blogger. Wordpress is the main alternative, but it doesn't allow authors to use Javascript, which would rather cramp my style.

  3. I agree with Nick, but I do not trust DY at all. I do read his links, and I spend time trying to find anybody who agrees with him... not easy. Last night I read a paper by a scientist who is testing in a climate model one of the things DY mentioned recently in a comment. I don't think it goes anywhere. By 2020 to 2030, among intelligent life, there will be no skeptics of AGW. Whatever it is that DY wants, it could barely get wind under its wings by then, and nature will have spoken very loudly. It's speaking with a pretty loud voice right now.

    1. JCH, Which paper is that? I'd like to read it.

      It's not hard to find people who agree with me if you delve a little beneath the surface; many turbulence modelers do agree with me. I've mentioned this before, but here's something from a review article in The Aeronautical Journal, in 2001 I think:

      This review has provided evidence that eddy-viscosity models are fundamentally flawed and often perform poorly in flows featuring separation, strong shock–boundary-layer interaction and 3D vortical structures. More seriously, perhaps, the models do not display a consistent behaviour across a wide range of conditions. In relatively simple flows, which develop slowly and in which a single shear stress (expressed in wall-oriented coordinates) is wholly dominant, eddy-viscosity models can be crafted to give the correct level of this stress and thus yield adequate solutions. This applies to near-wall flows, thin wakes and even separated flows in which the separated region is long and thin and hugs the wall. Another type of flow in which eddy-viscosity models are adequate is one in which inviscid features (pressure gradient, advection) dictate the mean flow, so that the Reynolds stresses are largely immaterial, however wrong they may be. The fact that many flows are an amalgam of shear layers and regions in which turbulence is dynamically uninfluential explains, in part, the moderate level of success of eddy-viscosity models. Among two-equation eddy-viscosity models, SST forms perform fairly well (at least in 2D flow), due to the limiter which prevents the shear stress from responding to the strain to the extent dictated by the stress-strain relationship. The length-scale equation is a key area of uncertainty and its precise form greatly affects model performance. There is some evidence that models using the turbulent vorticity as a length-scale variable near the wall perform (marginally) better than models based on the dissipation-rate equation, although it must be stressed that performance depends greatly on the nature of viscosity-related damping functions and the numerical constants in the length-scale equation.

  4. ...if you delve a little beneath the surface many turbulence modelers do agree with me. ...

    It needs to be here in the form where climate models are part of the discussion:

    1. JCH, Another one is Spalart's recent paper "Philosophies and Fallacies in Turbulence Modeling." I'm sure it's available at Google Scholar. With due respect to climate modelers, they are probably consumers of the models, and as such perhaps have not kept up with the literature on the subject, or, like a lot of CFD practitioners in other fields (or like me 15 years ago), simply don't understand the limitations of the things they are using. That's OK of course, not everyone can know everything, but there is the idea that the experts in turbulence modeling should be part of any discussion of the usage of their models.

    2. David,
      Yes I read Spalart's paper (link). It describes some sticky points in making further progress with turbulence modelling. But I am unconvinced that GCMs are rendered unreliable by turbulence inadequacy, or that there is a crisis in CFD. I also looked at Jameson's SciTech presentation from last year here. He indicates that CFD continues to be a major force in aircraft design, and mentions no problems with Navier-Stokes solution. Now you may reply, well he would say that, wouldn't he? But in terms of actual current use made of CFD, I'm sure it's true.

      And that is why I think it is a pointless attack on GCMs. CFD is still major engineering, and seems to feel no pressing problem. GCMs are not using those critical parts of CFD. They run at low Mach Number. There are really no issues of boundary layer separation. As far as intricacies of turbulence modelling are concerned, this is all about the diffusivity of momentum. But atmospheric momentum transport on a GCM grid scale is primarily advective, even with turbulent enhancement. The surface boundary layer has simple geometry (apart from roughness etc) and we live in it; it has been thoroughly studied. There is really no reason why GCMs should not use the CFD that has served Boeing and others so well for 40+ years.

  5. Nick, you say "CFD is still major engineering, and seems to feel no pressing problem." Yet that is untrue according to both Drikakis (Aeronautical Journal) and Jameson, who on page 26 of his presentation says that "The majority of current CFD methods are not adequate for vortex dominated and transitional flows."
    Of course the atmospheric circulation is largely vortex dominated. Among Jameson's conclusions is that "CFD has been on a plateau for the last 15 years with 2nd-order accurate FV methods for the RANS equations almost universally used in both commercial and government codes which can treat complex configurations. These methods cannot reliably predict complex separated, unsteady and vortex dominated flows."

    I believe your characterization of Jameson's presentation is inaccurate. I would point out that these realizations about the issues with CFD have been largely recent in origin, but they do seem to be spreading in the literature. That is the fruit of myself and others being persistent about the problems. I still contend that the vast majority of the literature suffers from positive results bias though. I would recommend some of our recent papers on this subject where we show a wider variety of results and cross compare codes and methods. We have a new one I can send you if you want where we look at simple 2D airfoil flows with 9 different methods and find wide variations in results.

    My experience is that the atmosphere is very turbulent as well. Wind speed fluctuations are often 100% of the mean value. This is true at altitude as well as at the surface. Yet, as I understand it, there is no modeling of turbulence outside the planetary boundary layer.

    Have you looked at the recent paper on GCM tuning? It's I think pretty interesting. We will see what the next few years brings in terms of coming to terms with the tremendous complexity of atmospheric modeling.

    1. David,
      The vortices in GCMs are essentially 2D, and I don't think they are the source of any numerical difficulties on the grid scale. As to CFD being on a plateau, I think that means that people are finding obstacles to progress with high order methods and adaptive gridding. GCMs don't use anything like that; it isn't needed. That plateau has been mostly adequate for the planes we currently fly in, as Jameson expounds (none are being recalled), and GCMs operate well within those limitations.

      I'd be happy to read your paper. But my main contention is that GCMs, in solving N-S equations, are doing no more than regular CFD has been doing for 40 years (this is in answer to people who say it can't be done at all). And I think the wrinkles showing up in the highly demanding situations of high-speed flight are really a very different matter from GCMs.

      Wind speed fluctuations... that is very much sub-grid. On the grid scale, inertia is very important, although at fixed points velocities change due to vortices moving etc. But there is no apparent numerical problem there.

      I've been reading recent papers on tuning. But I think they need to be read carefully. I find myself constantly in discussion with people who think that GCMs are programs where complete station weather data are fed in and some sort of curve fitting done. In fact they are physics-based solvers which impose a huge number of constraints on the variables, but occasionally don't have a match. A classic example is where TOA flux is known on physical grounds to be near balanced, but isn't, and cloud albedo, poorly known, is a factor. So it is very reasonable to modify albedo (within its range) to balance TOA. The fact is that TOA flux is an indirect observation of cloud albedo that is more reliable than direct observation.

    2. Nick, There are a number of misperceptions in your statements here, I believe, related to airplanes, CFD, and even GCM’s.

      1. You say: “As to CFD being on a plateau, I think that means that people are finding obstacles to progress with high order methods and adaptive gridding.” What this means is that CFD never really worked very well except in attached, essentially 2D flows or very mildly separated ones where the calibration of the turbulence models is done. A lot of the 3D results people have shown are not robust to parameter changes. We have tried, and there is a replication crisis in CFD, as evidenced by the long series of Drag Prediction Workshops. Mavriplis’ paper I referenced on a previous post here is a prime example. The real problem here is that just as in other fields of science, people are trying to keep the funding stream alive and thus try to put their work in a positive light. This leads to ignoring “bad” results and indeed to honest but biased rationalizations that any “bad” result is due to fixable problems or ignorance that will be addressed soon. Here “soon” has an infinitely flexible meaning.

      2. You say “That plateau has been mostly adequate for the planes we currently fly in, as Jameson expounds (none are being recalled)…” This is largely untrue but is easy to come to believe based on a superficial reading of the propaganda emitted by CFD advocates. Virtually ALL of the heavy lifting of designing and certifying a commercial airplane as safe is based on wind tunnel testing and flight tests. The flight envelope is very large and many of these flow fields are inadequately predicted by current CFD. We are right now working on changing that. But propagandistic statements only hinder real progress by making outsiders think CFD is a solved problem. I don’t blame you personally, as you are simply looking at the implications of what some advocates say. Where there has been good progress is in designing the airplane for improved fuel burn. CFD has played a large role in that, but it’s based on modeling a small class of flows with mild pressure gradients and mostly streamwise effects. The investment required to do this has been very large. Jameson’s view of his own contribution to this is biased and overstated.

      3. You say: “…high order methods and adaptive gridding. GCMs don't use anything like that; it isn't needed.” Be careful here, GCM’s do use higher order methods, essentially spectral methods with rather heavy-handed filtering to make them stable. Jameson has a recent paper on these methods that I’ve referenced, I think on the earlier thread, and it’s not a glowing testimonial. There is growing evidence that NS RANS simulations without grid adaptivity and residual convergence can never materially reduce uncertainty. The Drag Prediction Workshop results are a cautionary tale here.

      4. You say: “Wind speed fluctuations .. that is very much sub-grid.” Yes, but modeling this turbulence is essential to getting the resolved scales right. This is easily proved by looking at a simple airfoil with significant pressure gradients in inviscid flow vs. viscous modeling. The viscous forces, despite being 10^7 times smaller than the overall level of momentum, make an O(1) difference in global forces. These things cannot just be neglected a priori.

      5. You say twice that “there is no apparent numerical problem there.” Well, if any method employs sufficient dissipation, nothing blows up, so there is never an apparent problem. A more scientific approach is to do grid refinement studies, parameter studies, or to compare lots of simulations with truly different methods and parameterizations, particularly of turbulence. Of course there is always the question of accuracy in addition to stability. And that’s the real rub. I would argue we don’t have very good methods for assessing “accuracy” of our sub-grid models if we still have very large errors due to gridding, numerical errors, and other sources of uncertainty.

      (to be continued below)

    3. Your comment length limit got me:

      I’m not as sure about some of the other things.

      1. I’ve heard the “essentially 2D” argument from Isaac Held. Of course, in some sense 2D might be harder because of the lack of 3D relief. We have some interesting work on this in 2D. Basically, details of turbulence modeling are critical to how quickly vortices are damped out, and that is of course a critical element of predicting the atmosphere. I want to do some more work on this before publishing anything, but it’s not too surprising to turbulence modelers that it’s an issue.

      2. I know you are frustrated by some outside skeptics who don’t understand anything about CFD or GCM’s. However, I think a strong case can be made that GCM’s, for climate anyway, are really little better than far simpler models. If you want to predict GMST, simple tuned energy balance models with feedbacks can do very well. GCM’s seem to have little skill at regional climate. So then, what are they good at, other than consuming public resources that could be better spent on fundamental research or better data? I myself think intermediate or even low complexity models offer the advantages of more transparent parameter tuning and hugely smaller computing times, making extensive experiments possible on realistic time frames.

    4. I believe it was September 2008 when Pat Frank wrote an article for Skeptic Magazine in which he claimed climate models are unreliable, and that nobody knows why it is warming.

      Since then, OHC has shot up like a rocket. The rate of SLR has been around 4.5mm per year. The GISS L&O anomaly trend is 0.042℃ per year. Closing in on a decade, there could be a decadal average that is twice as high as the 0.2℃ per decade the IPCC predicted would occur.

      Because climate scientists have the AMO wrong, I suspect better models would result in higher predictions of future temperature and SLR and economic damage.

    5. Frank was wrong, in that we do know why it's warming. It's because of greenhouse gases.

    6. David,
      " What this means is that CFD never really worked very well except in attached essentially 2D flows or very mildly separated ones where the calibration of the turbulence models is done."
      That's really what we have in GCMs. Certainly attached, and yes, 2D in terms of vortex behaviour. I'll expand on that below.

      "CFD has played a large role in that, but its based on modeling a small class of flows with mild pressure gradients and mostly streeamwise effects."
      Again, that describes the atmosphere.

      "Be careful here, GCM’s do use higher order methods, essentially spectral methods with rather heavy handed filtering to make them stable."
      They use spectral methods in the dynamical core for speed. Here is an early paper (1982) which is actually quite good on GCM maths generally. I don't think they are done seeking higher order accuracy; they use higher order when they can, but if it is a problem, they can cut back. Here is what another GFDL history paper said:
      "Spectral methods are an alternative to finite difference schemes, the method used by all of the first-generation primitive-equation GCMs. They express the horizontal variation of dynamic model fields in terms of orthogonal spherical harmonics. The technique simplifies the solution of many of the nonlinear partial differential equations used in general circulation modeling. Its utility had been explored as early as 1954 (Platzman, 1960; Silberman, 1954).

      Heavy calculational demands made spectral methods unsuitable for use in early GCMs. Faster computers, and improvements in algorithms for spectral methods which reduced their calculational intensity, led to their adoption in GCMs around 1970 (Bourke, 1974; Eliasen et al., 1970; Orszag, 1970; Robert, 1969)."

      "The viscous forces, despite being 10^7 smaller than the overall level of momentum, make an O(1) differences in global forces. These things cannot just neglected a priori."
      That's true for a wing. The reason is the importance of flow separation, which makes the O(1) difference. Planes create a lot of force on the air, and the torque component creates a lot of angular momentum. So what happens to that is important. In the atmosphere, that isn't true. Most of the torque that produces vortex motion comes from the fact that away from the equator, the surface is rotating under the air (Coriolis). That is why angular momentum is signed by hemisphere. The surface drag depletes the wind of energy, but the effect of shed vorticity is minor.
      I hit the 4096 barrier too. First time :(

    7. Continued...
      "Of course, in some sense 2D might be harder because of the lack of 3D relief."
      Actually, there is 3D relief, in that the grid fits the topography. The real reason that it is 2D is that a resolved vortex radius must be at least 50 km, while its height is a few km at most. So it can't roll up, etc. And while a horizontal axis is possible, as in Hadley cells, say, the motion then is highly anisotropic about the axis. And still not a numerical difficulty.
      " Basically details of turbulence modeling are critical to how quickly vortices are damped out"
      Yes, you have to get damping right. But damping is very slow compared to the time step. And it happens at a fairly uniform much studied surface boundary layer.

      "If you want to predict GMST simple tuned energy balance models with feedbacks can do very well."
      That is a limited aim. And if that were all we had, people would complain about all the things left out. But I think GCMs do tell a lot more about the response to GHGs. It's hard to couple an EBM to a LBL GHG model. You say they have little skill with regional climate, but I think that is a matter of time frame. Globally they are fairly tightly constrained by energy balance, but regionally there are processes that allow imbalance to be sustained (traded) for much longer.

    8. I have written a new post on the spectral methods in GCM's here, along with some eccentric ideas on CFD.

    9. Nick, I do appreciate the forum you provide here. You seem to me to be basically saying that vortex dynamics in the atmosphere is very easy compared to aeronautical flows. I don't believe that's the case and will write more this evening. Basically, however, there is strong evidence even in 2D that the evolution of vortices is an ill-posed problem, with the balance between dissipation and turbulence being critical. I would expect the result in climate models using coarse grids to be determined almost entirely by numerical dissipation. Recall there is no eddy viscosity in the field. That might explain why regional climate skill is poor. I would expect vortices to dissipate too rapidly in such a scenario.

  6. And I would say, DY, that PF made a pretty huge mistake on that one. Could be, he's mistake prone.

    On regional prediction skill, I'm not at all convinced this is a major issue. Improvement would be great; it also may never happen.