Thursday, May 4, 2017

Land masks, mesh and global temperature

I have been writing articles about land masks, leading up to using them to check and maybe improve my triangular mesh based TempLS. As I have tried to emphasise, the core of estimating a global average temperature anomaly (or average anything) is numerical spatial integration. The temperature is known at a finite number of points; it has to be inferred everywhere else (interpolation), and the resulting complete field integrated. To do this successfully the data has to be fairly homogeneous, so anomalies are formed to take out the variation in long-term means. Then, in the triangle method, linear interpolation is done within triangles.
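
As a minimal illustration of that last step (a Python sketch with made-up numbers, not TempLS code; the real mesh is on the sphere, but a plane triangle shows the idea): within a triangle the interpolated value is a barycentric combination of the three vertex anomalies, and the exact integral of that linear interpolant over the triangle is simply the triangle's area times the mean of its vertex values.

```python
import numpy as np

def tri_area(p1, p2, p3):
    """Area of a plane triangle from its vertex coordinates."""
    return 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p2[1] - p1[1]) * (p3[0] - p1[0]))

def interp_linear(bary, v):
    """Linear (barycentric) interpolation: bary are weights summing to 1,
    v are the three vertex anomalies."""
    return float(np.dot(bary, v))

def tri_integral(p1, p2, p3, v):
    """Exact integral of the linear interpolant over the triangle:
    area times the mean of the vertex values."""
    return tri_area(p1, p2, p3) * float(np.mean(v))

# Made-up example: unit right triangle, three vertex anomalies
p1, p2, p3 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
v = [0.5, 1.2, -0.3]
print(interp_linear([1/3, 1/3, 1/3], v))  # value at the centroid, ~0.467
print(tri_integral(p1, p2, p3, v))        # contribution to the integral, ~0.233
```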

But another kind of inhomogeneity is between land and sea, and indices often use a land mask to try to pin that down. In the mesh context, and in general, the idea is to ensure that values on land are only interpolated from land data; sea likewise.

The method corresponding to what is done with grids would be to count the mask elements within each triangle, and to divide each coast-crossing triangle into a land part and a sea part. Since all that matters in the end is the weighting of each node, it is only necessary to get the areas right. But assigning maybe a million grid elements to triangles is a rather heavy computation, so I tried something more flexible.
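
As a rough illustration of what that counting would involve (a Python sketch under an assumed data layout, not anything from TempLS): for each spherical triangle, loop over the mask cells, test whether the cell centre falls inside, and accumulate land and sea areas. Doing this for every triangle against a mask of a million or so cells is the heavy computation referred to above.

```python
import numpy as np

def latlon_to_xyz(lat, lon):
    """Unit vector on the sphere for a mask cell centre."""
    la, lo = np.radians(lat), np.radians(lon)
    return np.array([np.cos(la) * np.cos(lo),
                     np.cos(la) * np.sin(lo),
                     np.sin(la)])

def in_spherical_triangle(p, a, b, c):
    """True if p lies inside the spherical triangle abc (all unit vectors):
    p must be on the same side of each edge's great circle as the
    opposite vertex."""
    for u, v, w in ((a, b, c), (b, c, a), (c, a, b)):
        n = np.cross(u, v)
        if np.dot(n, p) * np.dot(n, w) < 0:
            return False
    return True

def land_fraction(tri, cell_lats, cell_lons, cell_is_land):
    """Land-area fraction of one triangle, by brute-force counting of the
    mask cells whose centres fall inside it (each cell weighted by
    cos(latitude) as a proxy for its area)."""
    a, b, c = tri
    land = sea = 0.0
    for lat, lon, is_land in zip(cell_lats, cell_lons, cell_is_land):
        if in_spherical_triangle(latlon_to_xyz(lat, lon), a, b, c):
            w = np.cos(np.radians(lat))
            land += w * is_land
            sea += w * (1 - is_land)
    total = land + sea
    return land / total if total > 0 else np.nan
```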

Here is a snapshot from the WebGL graphic below. It shows a problem section in East Africa. Light blue triangles are those with two sea nodes and one land node; orange are those with two land nodes and one sea node. The Horn of Africa is counted as sea, and there is a good deal of encroachment of sea on land. That is about as bad as it gets, and of course there is some cancelling where land encroaches on sea.


So I refine the mesh. On the longest 20% of edges in such triangles that have land at one end and sea at the other, I place an extra node, and test with the mask whether it is land or sea. I then give it the value of the end of matching type. With the new nodes added, I re-mesh, and repeat the process several times. After four and seven steps respectively, I get:

As you see, the situation improves greatly; the new nodes cluster around the coast. There are, however, still two rather large triangles at sea with a land node. These can show up even when everything else seems converged, because the convex hull re-meshing may make different decisions about some of the large triangles bordering the coast. That slows convergence.

As to the placement of the new node on the edge, that is where the mask with a distance metric comes in. I know the approximate distance of each end node from the coast, so I can place the new node where I estimate the edge crosses the coast. It doesn't need to be very exact; the aim is just to minimise the number of interior nodes created.
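
Here is a minimal sketch of one refinement pass, in Python (illustrative only, not TempLS code: the node arrays, the `dist_to_coast` values and the `mask_is_land` classifier are assumptions; the re-meshing is shown with scipy's ConvexHull, since the post describes convex hull re-meshing).

```python
import numpy as np
from scipy.spatial import ConvexHull  # re-meshing on the sphere via convex hull

def refine_once(nodes, values, is_land_node, dist_to_coast, mask_is_land, frac=0.2):
    """
    One refinement pass (sketch).
    nodes          : (n,3) unit vectors on the sphere
    values         : (n,) anomalies at the nodes
    is_land_node   : (n,) bool, node classification from the mask
    dist_to_coast  : (n,) approximate distance of each node from the coast
    mask_is_land(p): assumed helper classifying a point from the mask
    """
    hull = ConvexHull(nodes)                  # triangulation of the sphere
    # collect the edges that have land at one end and sea at the other
    mixed = set()
    for tri in hull.simplices:
        for i, j in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            if is_land_node[i] != is_land_node[j]:
                mixed.add((min(i, j), max(i, j)))
    mixed = list(mixed)
    if not mixed:
        return nodes, values, is_land_node
    # keep only the longest 20% of those edges
    lengths = np.array([np.linalg.norm(nodes[i] - nodes[j]) for i, j in mixed])
    keep = np.argsort(lengths)[-max(1, int(frac * len(mixed))):]
    new_nodes, new_vals, new_land = [], [], []
    for k in keep:
        i, j = mixed[k]
        # place the new node where the edge is estimated to cross the coast,
        # using the approximate distances of the two ends from the coast
        di, dj = dist_to_coast[i], dist_to_coast[j]
        t = di / (di + dj) if (di + dj) > 0 else 0.5
        p = (1 - t) * nodes[i] + t * nodes[j]
        p /= np.linalg.norm(p)                # project back onto the sphere
        land = mask_is_land(p)                # classify the new node
        # give it the value of the end of matching type
        new_nodes.append(p)
        new_vals.append(values[i] if land == is_land_node[i] else values[j])
        new_land.append(land)
    # (a full implementation would also need dist_to_coast for the new nodes)
    return (np.vstack([nodes, new_nodes]),
            np.concatenate([values, new_vals]),
            np.concatenate([is_land_node, new_land]))
```

Repeating such a pass, with a fresh convex hull each time, is what produces the progression shown in the WebGL steps.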


What I really want to know is what this does to the integral. So I first tried integrating the mask itself. That is a severe test; the result should reproduce the land/sea proportions given by a direct count of the mask. Then I tried integrating the anomalies for February 2017. I'll show those below, but first, here is the WebGL display showing the seven stages of refinement (radio buttons).

Integration results

The table below shows the results of the progression. The left column shows the area of the mixed triangles (part land, part sea) as a fraction of the total surface. The next shows the result of integrating the mask itself, which should converge to 0.314. The third shows the successive integrals of the anomalies for February 2017.

         Mixed area              Integral    Integral of anomaly
         (fraction of sphere)    of mask     (°C, Feb 2017)
Step 0   0.1766                  0.3118      0.8728
Step 1   0.1268                  0.3228      0.8583
Step 2   0.1097                  0.3192      0.8645
Step 3   0.0845                  0.3205      0.8655
Step 4   0.0682                  0.3212      0.8646
Step 5   0.0578                  0.3203      0.8663
Step 6   0.0489                  0.3208      0.8624
Step 7   0.0429                  0.3199      0.8611
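
For reference, here is a minimal sketch of the area-weighted mesh integral behind these columns (Python, illustrative only; it assumes the weighting that follows from linear interpolation, in which each node receives one third of the area of every spherical triangle it belongs to, and uses the standard spherical-excess formula for triangle areas).

```python
import numpy as np

def spherical_triangle_area(a, b, c):
    """Solid angle (area on the unit sphere) of the spherical triangle with
    unit-vector vertices a, b, c, via the spherical-excess formula."""
    num = abs(np.dot(a, np.cross(b, c)))
    den = 1 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    return 2 * np.arctan2(num, den)

def mesh_weights(nodes, triangles):
    """Node weights: one third of the area of each triangle meeting the node."""
    w = np.zeros(len(nodes))
    for i, j, k in triangles:
        area = spherical_triangle_area(nodes[i], nodes[j], nodes[k])
        w[[i, j, k]] += area / 3.0
    return w

def mesh_integral(values, weights):
    """Area-weighted mean over the sphere of the nodal values."""
    return np.dot(weights, values) / np.sum(weights)
```

With node values of 1 for land and 0 for sea this gives the land fraction tested in the second column; with the February 2017 anomalies it gives the kind of figure in the third.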

Conclusion

I think it was a coincidence that the mask integration turned out near its target value of 0.314 at step 0 (no mesh change). As I said above, this is the most demanding case, maximising inhomogeneity. It doesn't improve, partly because of the occasional flipping of triangles that produces the exceptions visible in the WebGL, but also because it started so close. For the anomalies, the difference it makes to February 2017 is small, around 0.01°C.

So, while I am glad to have checked on the coast issue, I don't think it is worth incorporating this method in TempLS. It would mean an extra convex hull calculation for each month, which is slow.

6 comments:

  1. I think your land and sea radio buttons are reversed

  2. Thanks for doing this analysis. Do you think it matters at all that SST anomalies are surface temperatures, whereas land anomalies are based on 2m temperatures?

    To first order I can't see why it should matter, since we only measure offsets to a 30-year mean. However, the Apples to Apples paper argues that models using SST values rather than 2m values warm more slowly, bringing them into better agreement with the data. I think they also correct for land/sea masking.

    The net effect is small and seems to affect only recent years.

    Replies
    1. Thanks Clive,
      I think the fact that the ocean component of an index is SST rather than SAT can matter. I think what Ed Hawkins is saying is that it matters when comparing indices calculated by models, which are SAT, and that is what Cowtan et al were discussing.

      And I think the general difference between sea and land could matter - that is the reason for this analysis. It seems to turn out that the mesh balances the differences, but that needed to be tested.

      The land SAT / sea SST combination is just an index. It produces a single number which is reasonably responsive to temperatures everywhere and, importantly, is one that we have the data to calculate reliably. And as you say, anomalies help. As long as it is used consistently it is fine. It is only when comparing with indices calculated otherwise that care is needed. GCMs have the converse problem - they can get SAT, but the fine vertical scale of SST is difficult for them.

  3. Nice work Nick, and a good verification that the simple SST/ SAT blending method works well on average.
    Regarding the true global SAT, I believe that Gistemp dTs is a slightly better estimate than Gistemp loti. The ultimate evidence would be to run the Gistemp dTs code with model data masked to the same location and time as the gistemp data, and see how it compares with the full global dataset.

    I have done some experimentation with simple Ratpac-like datasets, based on 95 subsampled Crutem gridcells, that produce long-term trends very close to Gistemp dTs.
    The simplicity makes it easy to validate the dataset with exactly replicated model data, suggesting that the 1970-2016 trend differs by less than 0.01 °C/decade from the full global mean.
    It looks like this:
    https://drive.google.com/open?id=0B_dL1shkWewaZ2Nqc3QzSEF6MEU

    The simple "CMIP5 95" model dataset follows the full model dataset very well all the way back to 1880, despite some loss of coverage. Gistemp dTs and Crutem95 follow each other and models well back to around 1940, but from 1880 to 1940 these observational datasets have a slightly higher trend than models.

    Reduced sampling may increase short-term noise, but with a sensible scheme for sampling and making global averages, there is very little difference from complete global datasets in the long run.
