Discretisation and derivative

Actually this applies to continuum PDEs (partial differential equations) in general. To make them amenable to computation, they must be discretised. Fields are represented by discrete values, often at nodal points. I have worked with many schemes for that - finite difference, finite elements, and more recently with meshless methods - SPH, RBF. And there are finite volume and spectral methods. Each is presented with its own bundle of theory, for Navier-Stokes or whatever. But I have found one simplifying feature. The only thing these methods need to do is to provide an operator which gives a discretised first (spatial) derivative from discrete values. From there on, the linear (or non-linear) analysis proceeds in the same way, regardless of how the problem was discretised. I should add a caveat. If the derivative operator maps onto a different space, as with FEM, say, then you need the mapping operator for that - called a mass matrix in FEM. This operator is usually quick to apply, or even to invert, and should cause no numerical difficulty.
By derivative, I generally mean grad, but the divergence is basically the negative transpose of that. There is an important difference in boundary treatment, which I could go into. The derivative will generally be a large sparse array, which I denote G_jab: j is the index for the space dimension, and a, b for the respective nodes. Again, they may not be space nodes, and a and b may be in different spaces. For finite elements,
G_jab V_b = ∫ T_a ∇_j U_b dE V_b
where the T's are the test functions and the U's the basis functions (usually the same set). And the mass matrix is

M_ab = ∫ T_a U_b dE
I use the summation convention whereby there is summation over repeated indices.
How to relate these to PDEs? Simple. Just replace grad operators by (M_ab)^-1 G_jab, and div by the transpose, and replace undifferentiated variables by their discretised values. That of course assumes M is invertible, and it may not even be square; if not, a Moore-Penrose pseudoinverse is needed. Generally the equations will be solved in the range-G space, so the outer multiplication by (M_ab)^-1 is omitted.
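To make the G and M idea concrete, here is a minimal 1D sketch with periodic linear ("hat") finite elements on a uniform grid: G_ab = ∫ T_a dU_b/dx dx and M_ab = ∫ T_a U_b dx, with test and basis functions the same set. The grid size and field are illustrative choices, not from any particular code.

```python
import numpy as np

# 1D periodic linear FEM: assemble G (discrete grad) and M (mass matrix).
n = 64
L = 2.0 * np.pi
h = L / n
x = np.arange(n) * h

G = np.zeros((n, n))
M = np.zeros((n, n))
for a in range(n):
    # Standard integrals of hat functions against their neighbours:
    # G[a, a+1] = ∫ φ_a φ'_{a+1} dx = 1/2, G[a, a-1] = -1/2, G[a, a] = 0
    G[a, (a + 1) % n] = 0.5
    G[a, (a - 1) % n] = -0.5
    # M[a, a] = 2h/3, off-diagonals h/6
    M[a, a] = 4.0 * h / 6.0
    M[a, (a + 1) % n] = h / 6.0
    M[a, (a - 1) % n] = h / 6.0

V = np.sin(x)                    # discretised field values V_b
dV = np.linalg.solve(M, G @ V)   # discrete grad: (M^-1 G) V
err = np.max(np.abs(dV - np.cos(x)))
```

The dense arrays here are for clarity only; in practice both G and M would be stored sparse, and M is cheap to solve with as the text says (here it is tridiagonal apart from the periodic wrap).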
Solving

There's a whole rigmarole of solution methods. But I think they should be reduced to just one - inexact Newton-Raphson. The reason is that this is totally systematic. You just write down the equations, as discretised above, in the general form F(V)=0. Then you substitute a trial solution V0, which isn't right. So you do your best to make it right by linearising:

B (V1 - V0) = -F(V0)
Ideally, you'd use for B the derivative (Jacobian) of F. But that is often very hard to invert, and maybe even to calculate. So B is a compromise that is reasonably close, but such that you can solve for V1. Then you can iterate, although in most CFD you'd have a good starting V0 from the previous time step, so you would then go on to the next step.
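The iteration above can be sketched in a few lines. The F and B below are toy stand-ins chosen for illustration: a linear stiffness-like part plus a mild nonlinearity, with B deliberately "inexact" (the nonlinear term's derivative is simply dropped), which is enough for convergence here.

```python
import numpy as np

# Toy nonlinear system F(V) = A V + 0.1 sin(V) - b = 0
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])   # a small stiffness-like matrix
b = np.ones(3)

def F(V):
    return A @ V + 0.1 * np.sin(V) - b

# Inexact choice of B: keep only the linear part, drop the sin term's
# derivative. Close enough to the true Jacobian for the iteration to converge.
B = A

V = np.zeros(3)                       # trial solution V0
for _ in range(50):
    r = F(V)
    if np.max(np.abs(r)) < 1e-12:     # always checking F(V)=0 is satisfied
        break
    V = V - np.linalg.solve(B, r)     # solve B (V1 - V0) = -F(V0)
```

The separation the text describes is visible: F states what you want, B is just a means of getting there, and the residual check tells you when to stop.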
For Navier-Stokes, forming B often involves block LU-factorisation of the N-S equations in v and P. That leaves an awkward Schur complement (as in Uzawa methods), which is still hopefully symmetric positive definite (depending on how you treat advection). The solution task is to approximate that. In Uzawa itself, you invert it by some iterative process like conjugate gradients. That just delegates the approximating task to the preconditioner for CG. In the augmented Lagrangian method, for example, the Schur complement would be approximated by a multiple of the unit matrix. And yes, there is upwinding and all that. You can embed that in this framework.
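The block elimination can be shown on a toy saddle-point system [[A, Bt], [B, 0]] [v; p] = [f; g] standing in for discretised Stokes. The matrices here are small random stand-ins (SPD velocity block, full-rank divergence), not an actual N-S discretisation; a real code would never form A^-1 or S explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
nv, np_ = 6, 3
Q = rng.standard_normal((nv, nv))
A = Q @ Q.T + nv * np.eye(nv)          # SPD velocity block
Bt = rng.standard_normal((nv, np_))    # discrete gradient (full column rank)
B = Bt.T                               # discrete divergence = transpose
f = rng.standard_normal(nv)
g = rng.standard_normal(np_)

# Block LU: eliminate v to get the Schur complement S = B A^-1 Bt,
# symmetric positive definite since A is SPD and Bt has full column rank.
Ainv_Bt = np.linalg.solve(A, Bt)
Ainv_f = np.linalg.solve(A, f)
S = B @ Ainv_Bt
p = np.linalg.solve(S, B @ Ainv_f - g)   # pressure solve (Uzawa would do
                                         # this iteratively, e.g. with CG)
v = np.linalg.solve(A, f - Bt @ p)       # back-substitute for velocity

resid = np.max(np.abs(np.concatenate([A @ v + Bt @ p - f, B @ v - g])))
```

The approximating task the text mentions is deciding what to use in place of S (or as a preconditioner for it) when it is too expensive to form.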
I like inexact N-R conceptually, because it separates the task of deciding what you want (F(V)=0) from that of how to make it happen. And if you iterate, you are always checking whether what you want has been satisfied. And there are many ways you can form B. You can diagonalise mass matrices. You can drop terms you think don't matter. All that counts for accuracy is that B is good enough that the iterations converge. Of course, you may have a preference for speed too.
Spectral methods on the sphere

As used in GCMs, these have been around for a very long time. I quote this GFDL history paper:
"Spectral methods are an alternative to finite difference schemes, the method used by all of the first-generation primitive-equation GCMs. They express the horizontal variation of dynamic model fields in terms of orthogonal spherical harmonics. The technique simplifies the solution of many of the nonlinear partial differential equations used in general circulation modeling. Its utility had been explored as early as 1954 (Platzman, 1960; Silberman, 1954).
Heavy calculational demands made spectral methods unsuitable for use in early GCMs. Faster computers, and improvements in algorithms for spectral methods which reduced their calculational intensity, led to their adoption in GCMs around 1970 (Bourke, 1974; Eliasen et al., 1970; Orszag, 1970; Robert, 1969)."
An often quoted early GFDL paper on implementation, which is also a good intro to GCM maths, is here. An expansive text, which explains in detail the reasons for using spherical harmonics (SH), is here, especially from about sec 18.10. As Boyd points out, the key issue which makes something like SH necessary is the pole problem. The shrinking of lat-lon elements near the poles is not only a nuisance (unwanted high resolution) but also interferes with the Courant condition. Personally, I would first use a projected cube to avoid the pole problem, but the spectral decomposition has the virtue of mapping into a much smaller space.
Spherical Harmonics

I've written quite a lot about spherical harmonics, eg here. I use them in TempLS to map anomalies onto a sphere with, I think, optimal smoothing, and more ambitiously, to integrate on the sphere for TempLS itself. It's one of the best performing methods for that - comparable to an irregular mesh, and with similar results. And GCMs use them for differentiating.
The spectral transform from lat/lon values to coefficients is done just as you would in Fourier analysis (the SH are orthogonal). You integrate the product of the function with each SH over the grid. That is a 2D integration which would be costly, but there is a key saver. Each SH is just the product of trig functions of longitude with an associated Legendre function of cos(lat). So you can do the inner loop over longitude using a fast FFT. The outer loop over latitude doesn't have this blessing, but that matters much less for overall speed.
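The structure of that transform can be sketched as follows. To keep it short, the latitude step uses only ordinary Legendre polynomials, so it recovers just the zonal (m = 0) coefficients; a real GCM would use associated Legendre functions for each zonal wavenumber m. Grid sizes and the test field are illustrative.

```python
import numpy as np

nlon, ndeg = 32, 8
mu, w = np.polynomial.legendre.leggauss(16)   # Gauss nodes mu = sin(lat), weights
lon = np.linspace(0.0, 2.0 * np.pi, nlon, endpoint=False)

# Test field: P_2(mu) plus a wavenumber-3 ripple in longitude
P2 = 0.5 * (3.0 * mu**2 - 1.0)
f = P2[:, None] + 0.3 * np.cos(3.0 * lon)[None, :]

# Inner step: FFT over longitude, one latitude ring at a time (the fast part)
fm = np.fft.rfft(f, axis=1) / nlon            # Fourier coefficients per ring

# Outer step: Legendre projection over latitude by Gauss quadrature
coef = np.zeros(ndeg)
for n in range(ndeg):
    Pn = np.polynomial.legendre.Legendre.basis(n)(mu)
    norm = (2.0 * n + 1.0) / 2.0              # 1 / ∫ P_n^2 dmu
    coef[n] = norm * np.sum(w * Pn * fm[:, 0].real)
# coef[2] picks up the P_2 component; the ripple lives in fm[:, 3] instead
```

The FFT handles the whole longitude direction in one call per ring, while the latitude quadrature is a plain loop, which is exactly the cost split described above.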
The key reason why this is faster overall is that you don't need that many SH functions. The series can be truncated, with loss of only high frequency noise. There is fuss about aliasing etc, but that has been sorted out. The result is that the mass matrix M_ab has many fewer rows than columns, so you solve in a lower dimensional space. A downside is the loss of sparsity. But sparse methods do not parallelise as well as dense, so it seems the trade-off works.
They don't usually formulate it as I would, with a grad operator G_jab. I think that formulation is useful, because the derivatives in a lat/lon space carry metric coefficients to make them correspond to the real 3D grad, and it is useful to fold these into the G operator. It also clarifies the trade-off: G is more compact, but with a corresponding loss of sparsity.
David Young was concerned that the use of spectral methods here might bring the problems of high order spectral methods in CFD. I don't think that is right. It isn't high order in the sense of a spectral element method, say (eg hp). It seems to just transform and truncate. So there is no special effort to represent high derivatives, at least not in the form described by CCM3.