## Monday, November 28, 2016

### Spectral methods in GCMs - and some thoughts on CFD.

There has been a lot of discussion recently on the maths of GCMs. I have summarised some in a series on chaos and the Lorenz equations (here, here, and here). David Young has commented substantially on this thread, and raised the issue of the use of spectral methods in GCMs. These are used in the dynamical core, which relates pressure and velocity via the Navier-Stokes equations. The core is time critical, because it requires resolving sound waves, and so its speed fixes that of the code as a whole. Spectral methods are used because they are fast. But some mystery is made of them, which I would like to try to dispel. And I'd like to do this in the context of some simplifying observations about CFD.

#### Discretisation and derivative

Actually this applies to continuum PDEs (partial differential equations) in general. To make them amenable to computation, they must be discretised. Fields are represented by discrete values, often at nodal points. I have worked with many schemes for that - finite difference, finite elements and, more recently, meshless methods - SPH, RBF. And there's finite volume, and spectral methods. Each is presented with its own bundle of theory, for Navier-Stokes or whatever. But I have found one simplifying feature. The only thing these methods need to do is to provide an operator which gives a discretised first (spatial) derivative from discrete values. From there on, the linear (or non-linear) analysis proceeds in the same way, regardless of the discretisation. I should add a caveat. If the derivative operator maps onto a different space, as with FEM, say, then you need the mapping operator for that - called a mass matrix in FEM. This operator is usually quick to apply or even invert, and should cause no numerical difficulty.
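To make the "all you need is a derivative operator" point concrete, here is a minimal sketch for the simplest case: a 1-D periodic centred-difference gradient, built once as a sparse matrix G so that G @ v approximates dv/dx at the nodes. This is illustrative only; the function name and grid are my own, not from any GCM.

```python
import numpy as np
from scipy.sparse import diags

def gradient_matrix(n, dx):
    """Periodic centred-difference first-derivative operator on n nodes."""
    main = np.zeros(n)
    upper = np.full(n - 1, 1.0)
    lower = np.full(n - 1, -1.0)
    G = diags([lower, main, upper], [-1, 0, 1], format="lil")
    G[0, -1] = -1.0   # periodic wrap-around
    G[-1, 0] = 1.0
    return (G / (2 * dx)).tocsr()

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
G = gradient_matrix(n, dx)
# second-order accurate: the discrete derivative of sin is close to cos
err = np.max(np.abs(G @ np.sin(x) - np.cos(x)))
```

Once G (and, where needed, a mass matrix) is stored, the downstream analysis never needs to know which discretisation produced it.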

By derivative, I generally mean grad, but the divergence is basically the negative transpose of that. There is an important difference in boundary treatment, which I could go into. The derivative will generally be a large sparse array, which I denote G_jab. j is the index for the space dimension, and a, b for the respective nodes. Again, they may not be space nodes, and a and b may be in different spaces. For finite elements,
G_jab V_b = ∫ T_a ∂_j U_b dE · V_b
where the T's are the test functions and the U's the basis functions (usually the same set). And the mass matrix is
M_ab = ∫ T_a U_b dE
I use the summation convention, whereby repeated indices are summed over.

How to relate these to PDEs? Simple. Just replace grad operators by (M_ab)⁻¹ G_jab, and div by the transpose, and replace undifferentiated variables by their discretised values. That of course assumes M is invertible, which it may not be if it isn't square. If not, a Moore-Penrose pseudoinverse is needed. Generally the equations will be solved in the range-G space, so the outer multiplication by (M_ab)⁻¹ is omitted.
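A small sketch of the FEM version of this: assemble the mass matrix M_ab = ∫ T_a U_b dE and the gradient matrix G_ab = ∫ T_a U_b' dE for 1-D piecewise-linear elements on a uniform mesh, then recover a discrete derivative as M⁻¹ G v. Everything here (names, mesh, test function v = x) is my illustrative choice.

```python
import numpy as np

def assemble_1d_linear(n_nodes, h):
    """Assemble mass and gradient matrices for linear hat functions."""
    M = np.zeros((n_nodes, n_nodes))
    G = np.zeros((n_nodes, n_nodes))
    # element matrices on an interval of length h:
    # ∫ φi φj dx and ∫ φi φj' dx for the two local hat functions
    Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    Ge = 0.5 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    for e in range(n_nodes - 1):
        idx = [e, e + 1]
        M[np.ix_(idx, idx)] += Me
        G[np.ix_(idx, idx)] += Ge
    return M, G

n = 11
x = np.linspace(0.0, 1.0, n)
M, G = assemble_1d_linear(n, x[1] - x[0])
w = np.linalg.solve(M, G @ x)   # discrete derivative of v(x) = x
# w is 1 everywhere: G·x and M·1 both integrate each test function once
```

Here M⁻¹G plays exactly the role of the grad operator described above; for v = x both sides integrate each test function against 1, so the discrete derivative is exact.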

#### Solving

There's a whole rigmarole of solution methods. But I think they should be reduced to just one - inexact Newton-Raphson. The reason is that this is totally systematic. You just write down the equations, as discretised above, as in general F(V)=0. Then you substitute a trial solution V0, which isn't right. So you do your best to make it right by linearising:
B (V1 − V0) + F(V0) = 0
Ideally, you'd use for B the derivative (Jacobian) of F. But that is often very hard to invert, and maybe even to calculate. So B is a compromise that is reasonably close, and such that you can solve for V1. Then you can iterate, although in most CFD you'd have a good starting V0 from the previous step, so you would then go on to the next step.
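Here is a minimal sketch of inexact Newton-Raphson on a toy system: solve F(V) = 0 by repeatedly solving B (V1 − V0) = −F(V0), where B is only an approximation to the true Jacobian. In this sketch B is the Jacobian evaluated once at the start (a "frozen" Jacobian) - one of many possible compromises, and the system itself is an invented two-variable example, not a fluids problem.

```python
import numpy as np

def F(v):
    # toy nonlinear system with a solution at (1, 1)
    return np.array([v[0]**2 + v[1] - 2.0,
                     v[0] + v[1]**2 - 2.0])

def jacobian(v):
    return np.array([[2 * v[0], 1.0],
                     [1.0, 2 * v[1]]])

v = np.array([1.2, 0.8])        # starting guess (e.g. previous timestep)
B = jacobian(v)                  # frozen approximate Jacobian, never updated
for _ in range(50):
    dv = np.linalg.solve(B, -F(v))
    v = v + dv
    if np.linalg.norm(F(v)) < 1e-12:
        break
# converges to (1, 1) even though B is only approximate
```

The point is the separation of concerns: F says what you want, B only has to be good enough that the iterations converge, and checking F(v) at each step tells you whether you have got it.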

For Navier-Stokes, forming B often involves a block LU-factorisation of the N-S equations in v and P. That leaves an awkward Schur complement (as in the Uzawa method), which is still hopefully symmetric positive definite (depending on how you treat advection). The solution task is to approximate that. In Uzawa itself, you invert it by some iterative process like conjugate gradient. That just delegates the approximating task to the preconditioner for CG. In augmented Lagrangian, for example, the Schur complement would be approximated with a multiple of the unit matrix. And yes, there is upwinding and all that. You can embed that in this framework.
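A rough sketch of the Uzawa idea on a toy saddle-point system [[A, Bᵀ], [B, 0]] [u; p] = [f; g]: eliminate u, leaving the Schur complement S = B A⁻¹ Bᵀ acting on p, which is then solved by conjugate gradients. A here is a random SPD stand-in for the velocity block and B a random constraint block - illustrative names only, nothing from an actual GCM, and the explicit inverse of A is only tolerable at toy size.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)          # SPD stand-in for the velocity block
B = rng.standard_normal((m, n))      # full-row-rank "divergence" block
f = rng.standard_normal(n)
g = rng.standard_normal(m)

Ainv = np.linalg.inv(A)              # iterative in practice; fine at toy size
S = B @ Ainv @ B.T                   # Schur complement (SPD here)
rhs = B @ (Ainv @ f) - g

def cg(S, b, tol=1e-13, maxiter=100):
    """Plain (unpreconditioned) conjugate gradients for an SPD matrix."""
    x = np.zeros_like(b)
    r = b - S @ x
    d = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Sd = S @ d
        alpha = rs / (d @ Sd)
        x += alpha * d
        r -= alpha * Sd
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x

p = cg(S, rhs)                       # pressure from the Schur system
u = Ainv @ (f - B.T @ p)             # back-substitute for velocity
res_mom = np.linalg.norm(A @ u + B.T @ p - f)   # momentum residual
res_div = np.linalg.norm(B @ u - g)             # constraint residual
```

In a real solver the CG here would be preconditioned, and that preconditioner is exactly where the approximating judgment discussed above gets delegated.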

I like inexact N-R conceptually, because it separates the task of deciding what you want (F(V)=0) and of how to make it happen. And if you iterate, you are always checking if what you want has been satisfied. And there are many ways you can form B. You can diagonalise mass matrices. You can drop terms you think don't matter. All that counts for accuracy is that B is good enough that the iterations converge. Of course, you may have a preference for speed too.

#### Spectral methods on the sphere.

As used in GCMs. They have been around for a very long time. I quoted this GFDL history paper:

"Spectral methods are an alternative to finite difference schemes, the method used by all of the first-generation primitive-equation GCMs. They express the horizontal variation of dynamic model fields in terms of orthogonal spherical harmonics. The technique simplifies the solution of many of the nonlinear partial differential equations used in general circulation modeling. Its utility had been explored as early as 1954 (Platzman, 1960; Silberman, 1954).

Heavy calculational demands made spectral methods unsuitable for use in early GCMs. Faster computers, and improvements in algorithms for spectral methods which reduced their calculational intensity, led to their adoption in GCMs around 1970 (Bourke, 1974; Eliasen et al., 1970; Orszag, 1970; Robert, 1969)."

An often quoted early GFDL paper on implementation, which is also a good intro to GCM maths, is here. An expansive text, which explains in detail the reasons for using spherical harmonics (SH), is here, especially starting about sec 18.10. As Boyd points out, the key issue which makes something like SH necessary is the pole problem. The shrinking of lat-lon elements is not only a nuisance (unwanted high resolution) but interferes with the Courant condition. Personally, I would first use a projected cube to avoid the pole problem, but the spectral decomposition has the virtue of mapping into a much smaller space.

#### Spherical Harmonics

I've written quite a lot about spherical harmonics, eg here. I use them in TempLS to map anomalies onto a sphere with, I think, optimal smoothing, and more ambitiously, to integrate on a sphere for TempLS itself. It's one of the best performing methods for that - comparable to an irregular mesh, and with similar results. GCMs, though, use them for differentiating.

The spectral transform from lat/lon values to coefficients is done just as you would in Fourier analysis (the SH are orthogonal). You integrate the product of the function with the SH on the grid. That is a 2D integration which would be costly, but there is a key saver. Each SH is just the product of a trig function of longitude with an associated Legendre function in latitude. So you can do an inner loop over longitude, using the fast Fourier transform (FFT). The outer loop over latitude doesn't have this blessing, but that matters much less for overall speed.
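The two-stage analysis can be sketched as follows: an FFT along each latitude circle, then a Gauss-Legendre quadrature over latitude against the associated Legendre function. The test field is a single harmonic P_3^2(cos θ) cos(2λ), so the analysis should return (1/2)∫[P_3^2]² dμ = 120/7 for that (l, m), and essentially zero elsewhere by orthogonality. Grid sizes and names are my illustrative choices.

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

l, m = 3, 2
nlat, nlon = 8, 16
mu, w = np.polynomial.legendre.leggauss(nlat)   # Gaussian latitudes, weights
lam = 2 * np.pi * np.arange(nlon) / nlon        # equally spaced longitudes

# test field: one spherical harmonic, split as Legendre(lat) × trig(lon)
field = lpmv(m, l, mu)[:, None] * np.cos(m * lam)[None, :]

# stage 1: FFT along each latitude circle (the cheap, O(n log n) loop)
zonal = np.fft.rfft(field, axis=1) / nlon       # zonal coefficients per latitude

# stage 2: Legendre transform in latitude for wavenumber m (plain quadrature)
a_lm = np.sum(w * zonal[:, m].real * lpmv(m, l, mu))

exact = 0.5 * (2.0 / (2 * l + 1)) * factorial(l + m) / factorial(l - m)
# a_lm matches the analytic value 120/7 ≈ 17.1429
```

Only the latitude loop lacks a fast transform, and it is the smaller of the two.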

The key reason why this is faster overall is that you don't need that many SH functions. The series can be truncated, with loss of only high-frequency noise. There is fuss about aliasing etc, but that has been sorted out. The result is that the mass matrix M_ab has many fewer rows than columns, so you solve in a lower-dimensional space. A downside is loss of sparsity. But sparse methods do not parallelise as well as dense, so it seems the trade-off works.
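A small illustration of why truncation is tolerable: for a smooth field, the spectral coefficients decay very fast, so discarding the high wavenumbers changes the field negligibly. This uses a plain 1-D Fourier series as a stand-in for the spherical-harmonic case; the function and cutoff are arbitrary illustrative choices.

```python
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.exp(np.sin(x))             # smooth periodic test field

c = np.fft.rfft(f)
trunc = c.copy()
trunc[21:] = 0.0                  # keep only wavenumbers 0..20 of 128
f_trunc = np.fft.irfft(trunc, n)
err = np.max(np.abs(f - f_trunc))
# err is near machine precision: the coefficients of a smooth function
# decay faster than geometrically
```

For rough fields the discarded tail matters more, which is where the aliasing fuss mentioned above comes in.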

They don't usually formulate it as I would, with a grad operator G_jab. I think that formulation is useful, because the derivatives in a lat/lon space carry metric coefficients to make them correspond to the real 3D grad, and it is convenient to fold these into the G operator. It also clarifies the trade-off: G is more compact, but with a corresponding loss of sparsity.

David Young was concerned that the use of spectral methods might bring the problems of high-order spectral methods in CFD. I don't think that is right here. It isn't high order in the sense of a spectral element method, say (eg hp). It seems to just transform and truncate. So there is no special effort to represent high derivatives, at least not in the form described by CCM3.

1. Also worth noting that not all GCMs use spectral. Hadley don't, for example.

2. > the key issue which makes something like SH necessary is the pole problem

Which the Hadley folk solve by spectral filtering near the poles.

1. William,
Thanks. It can be quite hard to get details of gridding and filtering. It has puzzled me that quite complicated techniques like spectral transform and filtering were used for the pole problem before trying other styles of gridding, like cubed sphere, which GFDL now uses.

3. Thanks for mentioning inexact Newton Raphson. We pioneered these methods in the 1980's. It was one of Forester Johnson's big insights. GGNS is currently the only code to reliably implement it for RANS. That is where new insights will come from.

The problem of course with spectral methods is that you must filter the result to remove oscillations you don't like. First of course you must decide based on some criteria what is noise and what is signal. Generally for 60 years people have found upwind or stabilized FEM to be better and to be independent of "expert judgment" on a case by case basis. In short the method is the method, not a mechanism for rationalizing bias.

4. Nick, you say you "would first use a projected cube to avoid the pole problem". Why not use a triangular/icosahedron grid for both weather and climate global modeling? I'm guessing it makes the math more complicated and thus slows down getting results? But as computers get faster and faster, maybe that's no longer an issue.

1. Bryan, I think the origin of these methods is deep in the ancient past when FFT's were the fastest solver methods available. There is some good work in the 1970's on fast direct methods using FFT for PDE's. Paul Swartztrauber at NCAR and Brian Sweet were two of those to look for if you are interested. These methods only work on essentially uniform grids in some coordinate system. Thus the grids used in GCMs that are essentially uniform in polar coordinates. That's a decision based on 1960's technology and should probably be revisited. I suspect that the problem here is that a lot of the other "tunable sub grid models" were also optimized for these grids, so there is probably a ripple effect throughout a massive code. In cases like this, starting over is expensive but allows one to really test the idea that more modern methods have advantages.

Of course there has been huge progress in inexact Newton Krylov methods since the 1970's that are now in fact vastly better than FFT based methods because they are easily parallelized using domain decomposition and work well for a wide variety of grids and grinding types. Keyes and Knoll have a good survey article on these methods in Journal of Computational Physics about 2001 I believe.

2. Sorry, "Grinding types" should be "gridding types." Darn spell correction.

3. Bryan,
"Why not use a triangular/icosahedron grid"
Yes, you can do even better. But I looked closely at the cubed sphere here; you can tweak it to fairly equal areas. There is a 2007 paper describing GFDL's adoption of cubed sphere. It completely overcomes the pole singularity issue. Triangles aren't really more difficult, but I suspect would have been a bigger change to their code structure. The good thing about GCM grids is that you should only have to deal with them once, to create matrices that you then store. That's the point of my claim that discretisation shouldn't matter once you have a first derivative, and maybe a mass, array. In timestepping, that is what you use.

Regular longitude is useful for the spectral transform FFT. But if you aren't doing that, I'm not sure where FFT would come in.

5. I haven't seen any discussion of the ocean in this series of posts. Coupling of the ocean and the atmosphere is one of the major advantages of GCMs vs other types of modeling. A huge engineering advance vs 40 years ago. This is a water planet after all.

Chubbs

6. Thanks David and Nick for your insight. I am an old dog still trying to learn new tricks. My first experience in weather modeling was in undergraduate college met lab where we graphically produced barotropic 500 mb forecasts using hand plotted and analyzed CONUS maps via light tables in about 1974. In graduate school I made a simple baroclinic forecast model using FORTRAN IV in 1977, using punch cards to load and run the program on a mainframe computer that was likely much less powerful than cell phones today. I also had a graduate course in FFT, which I can't honestly say I mastered very well but managed to pass it somehow. Since that time I have not been involved in weather modeling other than to frequently use the output for weather and air quality forecasting. So I am much more familiar with the output and its limitations than the detailed workings of the models. Even though I retired last year, I still look at the GFS output every day and marvel at how well it predicts for a few days compared to what we had in the 1970's.

I suspect it would be worthwhile to start over and build a new weather/climate forecast system from scratch to optimize for modern capabilities like parallel processing. I hope that some well funded group can do that before too long. I can imagine it might take years to develop and implement. The triangular/icosahedron grid is appealing to me since the grid cells are uniform in area, although I realize that the earth is not a perfect sphere and there are also issues related to fixing the surface of the model relative to the actual surface of the earth. I remember seeing a short YouTube video posted in comments at WUWT on determining MSL and I was surprised how complicated that is.

7. Bryan - have you looked at any of the information about various models available on NASA websites? Curious as to your impressions.

Obviously, they have models for wings.

1. JCH, thanks for the links. The first link to a NOAA page was helpful for visualizing the cubed sphere grid that Nick has mentioned. I still conceptually prefer a triangular/icosahedron global grid, but I have never tried to implement one.

The second link regarding turbulence brought back old memories for me. I had a graduate class called "turbulence and diffusion" based on "Turbulent Diffusion in the Environment" by G. T. Csanady, published in 1974. Some of my fellow students humorously called it "turbulence and confusion". After finishing my graduate classes, I worked at an environmental consulting firm for 9 years and I was involved in data analysis/QA/QC for atmospheric turbulence measurements at 10 and 60 meters AGL from tall towers and from about 50 to 300 meters AGL from doppler acoustic sounders. I learned quite a bit from reviewing lots of data over those years and even had a few conference presentations published. In those days I was focused on turbulence as it is involved in dispersion of air pollutants near the ground. But I also seem to recall that it is involved in frictional energy dissipation, which could also play a role in weather and climate.

I have not kept up with turbulence modeling since then, other than in reviewing output from air quality models that include turbulence. I suspect that some aspects of air quality modeling may be helpful in improving weather and climate modeling, especially regarding atmospheric particles (which are very difficult to model) that will have some small effects on atmospheric energy budgets and could also be involved in significant effects on snow and ice albedo. The hypothesis that extreme atmospheric dust levels may have contributed to the end of many glacial cycles in our current ice age is one I find very interesting. But I digress.

8. Yes JCH, GFDL has gone with the Woodward-Colella finite volume scheme from the 1980's. Somewhat of an advance.

You are out of your depth on the NASA turbulence modeling site. It's all very simple mostly 2D cases. There is far more interesting data and cases in the literature and even some negative results. I've given some references on an earlier post here that take larger investments of time and are not as easily accessible to outsiders, but that give a fairer and more scientific view.

1. David, my reference to wings was a joke.