Wednesday, October 20, 2010

A ramble on GCMs and the Navier-Stokes equations.

This post is an outgrowth of a discussion at the Air Vent on GCMs and condensation. It's a kind of response to comment claims that GCMs don't allow for inversion, or volume change during condensation, etc. I found myself saying often that it all comes out in the solution of the Navier-Stokes equations - you don't have to do anything special.

It's often said that the N-S equations are at the heart of GCMs, but if you look at a set of equations actually solved, as in this set from CAM3, it isn't obvious. And it's made less obvious by the funny coordinates they use.

But it's true, and here I'll try to set out the reasons.

The Navier Stokes equations

These go back to Navier in 1822, although his physics was eccentric, based on a stretched version of the then-recent atomic theory. G.G. Stokes in about 1845 gave a definitive derivation and solved most of the special cases that could be done theoretically. In the meantime, Poisson and Saint-Venant also made big contributions. In fact, what I'll be talking about mainly goes back to Euler.

I'll write out a version for slightly compressible fluids. An abstract form is:
ρ Dv/Dt = -∇·τ    Momentum equation
Dρ/Dt = -ρ∇·v    Mass conservation or continuity
ρ - ρ0 = (P - P0)c⁻²    Constitutive relation
Update - I've added a P0 to that last equation (I'm used to thinking of ambient pressure as 0, but of course in the atmosphere that won't do).  At Lucia's suggestion I've used the more orthodox term constitutive relation (not eqn of state). I'll also mention that ρ is density, v velocity, P pressure and t time.
D/Dt represents a total or advective derivative - in a frame moving with the fluid. ∇ is a vector derivative operator (∂/∂x, ∂/∂y, ∂/∂z), and τ is a stress tensor. Don't worry about what that means if it's unfamiliar - it's about to disappear.

I've chosen this "abstract" style to show the structure. The momentum equation is just Newton's second Law -   ma = F . Continuity just says that if mass is moved by advection (being carried with the flow), the density will change. And this constitutive relation just says that to change the density from a standard (local hydrostatic) value, you have to apply a pressure P, and c is the speed of sound. You could have other constitutive relations for different materials.
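To make the constitutive relation concrete, here is a back-of-envelope calculation (my illustrative numbers, not from the post) of the density change produced by a small pressure perturbation in air:

```python
# Density change for a small pressure perturbation, from the constitutive
# relation rho - rho0 = (P - P0)/c^2.  Illustrative numbers only.
c = 340.0          # speed of sound in air, m/s
dP = 100.0         # a 1 hPa pressure perturbation, Pa
d_rho = dP / c**2  # resulting density change, kg/m^3
print(round(d_rho, 6))  # ~0.000865 kg/m3, less than 0.1% of air density
```

So for ordinary pressure fluctuations the compressibility is slight, which is why the "slightly compressible" form is adequate.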

The stress tensor could be quite complex, but for now, let's just say there's a pressure P, gravity g, and a viscous stress (coef μ). Atmospheric flows are always turbulent, and turbulence models can work by modifying the viscosity. But I mainly want to talk about the pressure-velocity interaction, so I'll just keep a viscous term as an example of the other things that could come in.

OK, so now we have:

ρ Dv/Dt = -∇P + ρg + ∇·(μ∇v)    Momentum equation
DP/Dt = -ρc²∇·v    Mass conservation or continuity

Time Scales and numerical solution

Suppose you wanted to solve these numerically. You'll have the space divided up into regions, with the field variables (P, v etc) represented on nodes - maybe the corner points. You'll have linear expressions for the space derivatives in terms of nodal values.

So a naive approach is to say - suppose you calculate the right hand sides at a point in time. Just calculate the rates of change of v and P, and assume they hold for a timestep. Update and proceed. That's called an explicit method (forward Euler).

But it won't work, except for very small time steps. The reason is that, while you've got starting information within your little regions, what's going on in one region can affect others during that step. You're limited by the fastest process that can propagate physical effects. And that is the speed of sound, which is the way pressure is propagated.

The reason is that buried in the N-S equations is the acoustic wave equation:
ρ dv/dt = -∇P + ...,    dP/dt = -ρc²∇·v + ...
c⁻² d²P/dt² = ∇²P + R1
c⁻² d²v/dt² = ∇²v + R2
I've replaced D by d, because only fast processes are retained, and the difference between D and d is an advective term, which is also slow relative to sound speed. R1 and R2 denote the slow terms.

By small time steps, I mean small enough that a sound wave will only move part way across one of the small regions. In GCMs, the vertical divisions can be just a few hundred metres. That's a second or so of sound travel time. That would be ridiculously short for GCM modelling, where steps of about half an hour are common. Incidentally, those steps are constrained by the speed of other "weather-like" processes.
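The arithmetic behind that mismatch is simple enough to check (rough numbers of my own choosing, not from any particular GCM):

```python
# The explicit (sound-limited) timestep for a 300 m vertical layer,
# versus a typical 30-minute GCM timestep.
c = 340.0         # speed of sound, m/s
dz = 300.0        # vertical grid spacing, m
dt_sound = dz / c         # explicit step must stay below this, ~0.9 s
dt_gcm = 1800.0           # a common GCM timestep, s
print(round(dt_sound, 2), round(dt_gcm / dt_sound))  # 0.88 2040
```

So an explicit scheme would need roughly two thousand sound-limited steps for every step the model actually wants to take.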

Coupling and semi-implicit solution

The problem is that with the explicit method, we have to resolve the acoustic oscillations, which are not of interest.

Looking at the pressure wave equation, we can say that, since c is large, only acoustic solutions will have a significant left side. For the solutions we actually want, that can be dropped. Then we get

∇²P = -R1

where again R1 is a collection of slow terms. This is what is called the pressure Poisson equation. Once it is solved, the pressure can be used to get the velocity by going back to the momentum equation.
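Here is a one-dimensional sketch of that idea - mine, not GCM code; the grid, timestep and the backward-Euler treatment are all illustrative assumptions. Treating the pressure-velocity coupling implicitly and eliminating the new velocity gives a Helmholtz (pressure-Poisson-like) equation for the new pressure, and the timestep is then not limited by the speed of sound:

```python
import numpy as np

# Linearised acoustics on a periodic domain:
#   rho dv/dt = -dP/dx,    dP/dt = -rho c^2 dv/dx
# Backward Euler in time, eliminating the new velocity, gives
#   P_new - (c dt)^2 d2P_new/dx2 = P_old - dt rho c^2 dv_old/dx

def ddx(f, dx):
    # Central difference, periodic boundaries
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def implicit_step(v, P, dx, dt, rho=1.2, c=340.0):
    n = len(P)
    # Compact periodic Laplacian as a dense matrix (fine for a sketch;
    # a real solver would use a sparse or iterative method)
    L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = 1.0
    L /= dx**2
    A = np.eye(n) - (c * dt)**2 * L            # Helmholtz operator
    P_new = np.linalg.solve(A, P - dt * rho * c**2 * ddx(v, dx))
    v_new = v - (dt / rho) * ddx(P_new, dx)    # back into the momentum eqn
    return v_new, P_new

# A pressure pulse on a 100 km domain.  dt = 300 s is far beyond the
# explicit sound limit of dx/c ~ 3 s, yet the scheme stays stable
# (and damps the acoustic oscillations, which we don't want anyway).
n, dx, dt = 100, 1000.0, 300.0
x = np.arange(n) * dx
v, P = np.zeros(n), np.exp(-((x - 50e3) / 5e3) ** 2)
for _ in range(10):
    v, P = implicit_step(v, P, dx, dt)
print(bool(np.all(np.isfinite(P)) and np.max(np.abs(P)) < 1.5))
```

The point of the exercise is the solve: each step costs a linear system for P, but the step size is set by the flow you care about, not by sound.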

The essential thing here is taking the pressure and velocity equations together - coupling. Everything else can be held constant during the time step (though you can also do better with staggered time steps etc - there's lots of extra art not relevant here).

So now the time step can be extended until the next fastest process will mean that the computational cell cannot be regarded as (sufficiently) isolated during the timestep. Since that process is generally, unlike sound, one of the processes you do want to resolve, that's basically it.


So that's why the Navier-Stokes equations are the core. They can be seen as an engine that matches pressure and velocity, using sound implicitly, but not requiring sound-limited timesteps. Once you have that sorted out in a time-step, all the other physical processes can be updated in good time.

Heretical afterthought - for practitioners

Actually, implicit solution is not as rosy as it is often painted. You can take long time-steps, but you have to solve a big matrix of equations to do it, instead of just calculating new values at each node. You take advantage of the fact that each node only affects a few neighbors, which gives a sparse matrix. So for big systems you use an iterative solver.

The gloomy fact is that the iterative solver is, in effect, an accelerated version of the original explicit solution. With something like conjugate gradients the acceleration is significant, but not spectacular. Preconditioning helps, as does a process called multigrid. But the fact remains that, as you extend the timestep, the matrix becomes more ill-conditioned (an analyst's curse word), and you end up with a number of iterations related to the number of explicit steps you would have had to perform.
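That growth in iteration count is easy to see on a toy problem (a sketch of my own, not anyone's production solver) - conjugate gradients on the 1-D Poisson matrix, whose condition number worsens as the grid is refined:

```python
import numpy as np

# Unpreconditioned conjugate gradients on the 1-D Poisson matrix
# (tridiagonal 2, -1; symmetric positive definite).  As n grows the
# matrix becomes more ill-conditioned and CG needs more iterations.

def cg_iterations(n, tol=1e-8):
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = np.zeros(n)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(10 * n):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return k + 1            # converged
        p = r + (rs_new / rs) * p
        rs = rs_new
    return 10 * n

print(cg_iterations(50), cg_iterations(200))  # counts grow with n
```

Preconditioning or multigrid attacks exactly this growth, which is why they matter so much in practice.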

I might say more about preconditioning in a future post.

Thursday, October 14, 2010

Wahl & Ammann proxy calculations

I had been quietly plodding through Wahl and Ammann 2007. This is the paper that re-did the MBH98/99 proxy calculations using conventionally centred principal components analysis and a modified number of PCs. It's relevant to recent discussions of the Wegman report.

Some original code and data used in the paper is available at Nychka's site at NCAR. So I downloaded it. It's elegant in R. Just 11 lines of actual calculation to get from the proxy data to a set of proxy eigenvectors, with decentered scaling. And then, of course, R does the regular PCA for comparison in one line.

Then Ron Broberg put up a very helpful post drawing attention to a discussion by Dr Richard Smith of a recent paper by Li, Nychka and Ammann. As Ron says, it's well worth reading (as is Ron's post). I've only just started.

The R code

To get me going, I re-commented Ammann's code, to help my understanding. I'll show it, with the Figures, below the jump.

So the next stage is for me to recheck the things that are commonly said about the various proxies, and how they are reweighted. Later it might be possible to adapt the code for the Mann08 (Tiljander etc) analysis.

Here's Ammann's code for the decentred PCA. My comments have a double ##:
# read in data

## scan a big file of proxy data - 70 proxies, 581 years from 1400:1980
## North America
## tree is the matrix of proxy data over 1:581 years (1400:1980) - 70 proxies

tt<- c(1400:1980)

# MBH- uncentered PCs  GRL Data, Caspar Ammann's code
# =======
## This part rescales tree according to the mean and sd to get xstd
## regress 1902:1980 columns of xstd against years in calibration period
fm<-lm(xstd[503:581,]~c(1:nrow(xstd[503:581,]))) #fit linear model to calibration period
## fm($residuals) is the set (79x70) of residuals
## sdprox(1:70) is the sd for each proxy over the 79 years
sdprox<-sd(fm$residuals)                         #sd of residuals for each proxy
## so now the xstd are rescaled relative to the calib period.
## This creates the famous MBH98 decentering (note center=F)
mxstd<-scale(xstd,center=FALSE,scale=sdprox)     #standardize by residual stddev
## Now do the svd decomposition on mxstd  = u * d * v
w<-svd(mxstd)                                    #SVD
## w$u is the 581x70 matrix of ordered eigenvectors
## w$d are the eivals =  164.3 83.1 58.9 50 45.6 ...
PCs<-w$u[,1:5]                                   #Retain only first 5 PCs for comparison
## At this stage, "retain" is just the number of PC's that we calculate
## Now each PC is rescaled to mean=0, sd=1, again on 1902:1980

## Now just plot - fig1() and fig2() are plot routines which also call princomp().
source("fig1.R") # only need to source these functions once if they
source("fig2.R") # are saved in your work space.

## I changed the output from pdf to jpg for html posting
jpeg("fig1.jpg", width=900, height=800)

jpeg("fig2.jpg", width=900, height=600)

Here's the routine for fig1(). Note the one-line calcs for the regular PCA:

`fig1` <- function() {

par( mfrow=c(3,1), oma=c(6,0,0,0))
par( mar=c(0,5,2,2))

## This does the whole PCA by the covariance method (cor=FALSE is the default)
## It returns the "scores" - ie the first 5 (here) principal components
PCs.3<- princomp(tree)$scores[,1:5]

## This does the PCA by the correlation method
PCs.2<- princomp( tree, cor=TRUE)$scores[,1:5]

## tt is 1400:1980. PCs[,1] is the first PC. The scale() seems redundant
## but isn't, because it was previously scaled to the calib period.
plot( tt,  scale(PCs[,1]), type="l", lty=1, xaxt="n", ylab="Scaled PC1")
title("MBH centering and scaling in instrumental period",adj=0)

plot( tt,  -1*scale(PCs.3[,1]), type="l", lty=1, xaxt="n", ylab="Scaled PC1" )
title("First PC using covariance",adj=0)

plot( tt,  scale(PCs.2[,1]), type="l", lty=1, xaxt="n", ylab="Scaled PC1")
abline( h=0, col="blue")
title("First PC using correlations",adj=0)

par( oma=c(0,0,0,0), mar=c(5,5,2,2))
axis( 1)
mtext( side=1, line=3, text="Years")
}


Here's the resulting plot:

Here's the routine for fig2():

`fig2` <- function() {

par( mfrow=c(2,1), oma=c(5,0,0,0))
par( mar=c(0,5,3,2))

## as for fig1()
PCs.3<- princomp(tree)$scores[,1:5]
PCs.2<- princomp( tree, cor=TRUE)$scores[,1:5]

## Plot the decentred MBH PC1
plot( tt,  scale(PCs[,1]), type="l", lty=1, xaxt="n", ylab="Scaled PC1")
abline( h=0, col="blue")
title("First five covariance PCs projected (green)
  onto MBH centered and scaled PC (black)",adj=0)

## Regress the decentred MBH PC1 on the five covariance PCs
lm( scale(PCs[,1]) ~ PCs.3[,1:5] ) -> out
## Plot the resulting fitted curve
lines( tt, predict(out), col="green", lwd=2)
lines( tt, scale(PCs[,1])) ## redraw the MBH PC1 on top of the fit

## Same, but done for the correlation method
plot( tt,  scale(PCs[,1]), type="l", lty=1, xaxt="n", ylab="Scaled PC1")
abline( h=0, col="blue")
title("First five correlation PCs projected (green)
  onto MBH centered and scaled PC (black)",adj=0)

lm( scale(PCs[,1]) ~ PCs.2[,1:5] ) -> out
lines( tt, predict(out), col="green", lwd=2)
lines( tt, scale(PCs[,1]))

par( oma=c(0,0,0,0), mar=c(5,5,2,2))
axis( 1)
mtext(side=1, line=3, text="Years")
}


Here's the plot:

 I'll comment more on the results in the next post, and also look in more detail at the various PC's.

Wednesday, October 6, 2010

Can downwelling infrared warm the ocean?

Science of Doom has raised this question, which does crop up more frequently than it ought. Doug Hoyt, who should know better, has given it impetus.

It goes like this. Water is opaque to thermal IR, and not a good conductor. IR penetrates only a few microns. Therefore heat from incident IR from above must:
  • be all converted to latent heat, or
  • be reflected, or ...

Land is also opaque to IR, but the idea that IR can't warm it doesn't seem to have the same attraction.

Some thoughts below the jump.

Can the ocean emit thermal IR?

This is an equally good question, and the answer is yes, it certainly does. In fact, that's how satellites measure SST (skin temperature).

A large part of the insolation that penetrates the sea, to a depth of several meters, is later radiated in this way. It's a big part of the thermal balance. Somehow, that heat is conveyed to the surface, and is emitted by the surface layer.

That pathway is a mix of turbulent advection, conduction in the top few mm, and radiation in the top microns. All those mechanisms are bi-directional. If heat can be emitted, it can be absorbed.

Heat balance

On average, the surface loses heat, by evaporation and radiation (and some conduction to air). Absorbed incoming IR does not generally need to be carried downward. It simply offsets some of the emission.

There is some diurnal effect, so that there are depth intervals where the heat flow during the day is downward.

Thermal gradients

I've said that heat can be transported to and across the interface. But of course there is some resistance, and this produces a temperature gradient. The temperature differences are noticeable, but not huge. Dr Piotr Flatau has posted some results on Wikipedia:

Note the nearly logarithmic depth scale. The top 10 microns, in which there is nett daytime emission, shows a drop of a bit less than 1°C. There is another drop of about half a degree in the conductive layer. Then the gradient changes, reflecting presumably the upflux of heat from the more rapidly absorbed solar wavelengths. The gradient then diminishes as the source fades, and the turbulent eddies grow in size, increasing turbulent advection.

At night, the gradient becomes monotonic, with the advected flux uniformly upward. But again the temperature difference is less than a degree:


In a very technical sense, the sea is heated by sunlight rather than downwelling IR, as is the land. That's just trivial arithmetic - the sun is the heat source. And the upward IR flux generally exceeds the downward. But downwelling IR does add joules to the sea just as effectively as SW - there is no blocking thermal resistance.

Saturday, October 2, 2010

An entropy budget for the Earth

This post follows up some ideas in comments to the Science of Doom post on 2LoT. Whenever you calculate a heat flux, and you know the temperature, you can calculate the entropy changes. Tracking entropy is different from tracking energy flux, in that entropy is not conserved, but is created whenever there is irreversible heat transfer. However, it is conserved in reversible processes, and can never decrease globally, although it can be transferred elsewhere. And as with heat, steady state for the Earth requires that entropy that enters or is created must leave.

So I have the idea of doing an entropy budget, analogous to Trenberth's famous energy budget for the Earth. It's a bit more complicated though, so this post will look at a simplified Earth.

Entropy calculations

Whenever heat Q Joules is added to a region at constant temperature T, the entropy increases by Q/T J/K. If Q is negative (heat is lost), then so is entropy (to that region). If T is not constant, you have to integrate, but the idea is the same.

By the Second Law, if the heat flows from one region to another in an otherwise isolated system, the second must be the cooler, and it will gain more entropy than the first lost.

In these examples, we'll be dealing with fluxes, so Q is replaced by F, the flux in J/s/m^2, or W/m^2, and the result, S, is an entropy flux in W/K/m^2.

Entropy significance

The entropy associated with a heat flux is related to its capacity to do work. In fact, a Helmholtz (or Gibbs - the same here, since we're not considering PV work) free energy can be defined:
A = F - TS
remembering that A, F and S are flux densities. A is a measure of the power that could be delivered as mechanical work from the heat flux F if discharged to a heat sink at temperature T.

This can be used as a more rigorous definition of the entropy flux associated with an E/M flux:
S = (F - A)/T
where again, A is the mechanical power that could theoretically be generated by using the flux F with heat sink at T.

As a heat flux passes to cooler regions, it loses some capacity to do further work. However, in that transfer it could have done work if a heat engine mechanism existed, equivalent to the capacity it subsequently lost. So when you see entropy increasing, it means that where that happened, there was a capacity to generate weather effects, etc.
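The free energy idea in numbers - my illustration, using round figures like those later in this post: the work that could ideally be extracted from a 195 W/m2 heat flux passing from a 288K surface to a 225K atmospheric layer is a Carnot-style bound:

```python
# Available power A = F(1 - T_cold/T_hot) for a heat flux F dropping
# from T_hot to T_cold.  Illustrative round numbers.
F, T_hot, T_cold = 195.0, 288.0, 225.0
A = F * (1 - T_cold / T_hot)   # available power, W/m2
print(round(A, 1))  # ~42.7 W/m2 that could in principle drive weather
```

The atmosphere extracts only a small fraction of that bound, but it is the same budget item that shows up below as the entropy gain between ground and shell.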

Snowball Earth

I'll introduce a symbol for a heat/entropy flux. It's an arrow with three numbers. In the middle, in red, is the heat flux. At the tail, in black, is the entropy flux associated with the origin of the flux, and at the head is the entropy transferred to the heat sink.

So here you see the basic calc for average fluxes for snowball Earth, with a clear atmosphere with no GHG. Incoming solar is 342 W/m2, but 107 W/m2 is reflected (assuming Earth's true albedo still applies), so at balance 235 W/m2 is emitted directly to space as IR. The surface temperature required to do that is 255K.

Entropy (820 mW/K/m2) is created when SW is thermalised at the surface. This is an irreversible process. The exchange could have driven a heat engine, but didn't. No further irreversible process occurs, and the entropy is exported to space unchanged.

In this calculation the incoming temperature was assigned a value of 2310K. The apparent emission temperature from the sun is 5700K, so why the reduction?

It is a subtle point, but while incoming SW has the theoretical ability to heat a black body to 5700K (with concentrating mirrors etc), at that temperature all the energy is radiated back to space, and cannot be used to perform work. There is a discussion here. The optimum temperature for conversion of solar energy to work is 2310K.

The effect of the difference is slight - on either count, incoming entropy is small.
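The snowball-Earth numbers can be checked in a couple of lines (note that the entropy fluxes come out in mW/K/m2; 2310K is the effective solar temperature discussed above):

```python
# Entropy budget for the snowball Earth: flux in W/m2, entropy in mW/K/m2.
F = 235.0                     # absorbed and re-emitted flux, W/m2
T_sun, T_surf = 2310.0, 255.0
S_in = F / T_sun * 1000       # entropy arriving with the sunlight
S_out = F / T_surf * 1000     # entropy leaving as IR from a 255 K surface
print(round(S_out - S_in), round(S_out))  # 820 created, 922 exported
```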

Earth with GHG shell

GHGs in the atmosphere absorb and emit radiation, some of which returns to the surface, and some of which exits to space. Most of the 235 W/m2 received by the Earth's surface is returned to space via emission from GHGs at high altitude.

The resulting transfers create entropy in a rather distributed way. However, much of the effect is captured by a simplified model in which GHGs are confined to a narrow band near TOA, at 225K. This band has an atmospheric window passing 40 W/m2. As with the Snowball Earth, that flux gains no further entropy. However, the proportion absorbed at the shell does yield an entropy change. Some returns to the warmer surface, with an entropy decrease. And some is emitted to space, with entropy loss to the Earth (and gain by space). The total export of entropy is 1006 mW/K/m2.
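The shell-model figures can be checked the same way (assuming the usual 288K mean surface temperature, which is not stated explicitly above; units again mW/K/m2):

```python
# Entropy export and surface creation for the GHG-shell model.
T_sun, T_surf, T_shell = 2310.0, 288.0, 225.0
S_window = 40.0 / T_surf * 1000        # window flux, emitted at the surface
S_shell = 195.0 / T_shell * 1000       # flux emitted to space by the shell
S_sw_created = 235.0 / T_surf * 1000 - 235.0 / T_sun * 1000
print(round(S_window + S_shell), round(S_sw_created))  # 1006 exported, 714 created
```

The budget then closes: 714 created at the surface, plus the gain between ground and shell, plus the small incoming entropy, matches the 1006 exported.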

The entropy decrease has been the subject of argument at SoD. However, from a budget point of view, the up arrow from ground to shell and the down arrow could be merged, with a tail of 677 mW/K/m2 and a head of 867, and a net entropy gain of 190 mW/K/m2. This is what counts for 2LoT - the nett change in entropy, which is positive.

The Greenhouse Effect and entropy

Comparing with and without GHG:
  • With GHG there is less entropy created by incoming SW at the warmer surface - 714 mW/K/m2 vs 820.
  • With GHG the total export of entropy is higher - 1006 mW/K/m2 vs 922.
  • The entropy gain from ground to shell of 190 mW/K/m2 can be associated with heat engine effects in the atmosphere, or with life processes etc. The colder shell is necessary to export the difference.


Is the shell model realistic?

Yes, in terms of budgeting. The shell artifice misrepresents the apparent temperature of the downwelling IR (at 225K) - it's really much higher. The 190 mW/K/m2 entropy reduction does not occur at the surface, but is distributed through the atmosphere. Likewise, the upwelling stream's entropy increase is distributed. But this does not affect the budget.

And yes, I've ignored clouds.