moyhu blog - comments feed (Nick Stokes), updated 2015-05-29 14:40 (+10:00)

[2015-05-29 14:40]
The forecast is for erratically alternating La Nina and El Nino events ... seriously, science doesn't need to be done by forecasting and then mindlessly waiting for the results to come in. It can just as easily be done by training a model on a specific interval and then evaluating the extrapolation on an out-of-sample interval. There are hundreds of years of ENSO proxy records for evaluation purposes.

For sure, there is a practical use for forecasts, but I am not going to get caught up in a can't-win situation.
WHT

[2015-05-29 07:27]
WHT - I don't see a forecast. If there is one, where is it?
JCH

[2015-05-28 23:00]
The ENSO model that the Azimuth Project forum (http://forum.azimuthproject.org/discussions) is constructing has nailed the behavior over much longer intervals. With forcings precisely defined by the much more periodic QBO and Chandler wobble, long deterministic stretches of ENSO behavior can be modeled.
WHT

[2015-05-28 17:18]
The POAMA Nino3.4 forecast for October is modest compared to the other models. The mean of all models is 2.4, with some of them approaching 3. See:
http://www.bom.gov.au/climate/ahead/model-summary.shtml#tabs=Pacific-Ocean
Olof

[2015-05-28 07:21]
They updated. Their model spread now has some pretty high numbers for ONI later this year.
JCH

[2015-05-28 06:19]
Jonathan, why are you only making an exception for "decomposition in atmospheric chemical reactions"? The far higher solubility of CO2 compared with SF6 is the result of liquid-phase interactions of these molecules with water - in particular, the chemical reaction of CO2 with water to form the highly soluble HCO3(-) ion.
bill hartree

[2015-05-27 10:43]
SF6 doesn't want to enter the ocean, does it? You're talking to someone who worked on dopant diffusion from the gas phase years ago.
Not all dopants incorporate the same.

I really don't get the resistance to these basic physics models.
WHT

[2015-05-27 10:35]
A diffusional model will generate the same fat-tail response that the BERN model generates. Find another model that will do this with equivalent conciseness. Oh sure, you can create a multi-compartment model, but all that represents is diffusion.
WHT

[2015-05-27 09:55]
If it's all about Boltzmann statistics and diffusion, then the fraction of emissions that remain in the atmosphere should be about the same for all gases that don't decompose in atmospheric chemical reactions.

But around 95% of SF6 emissions remain in the atmosphere, compared to around 50% of CO2 emissions (see, e.g., table 2 of I. Levin et al., Atmos. Chem. Phys. 10, 2655–2662 (2010)). How does your model explain the order-of-magnitude difference between the sinks for the two gases?
Jonathan Gilligan

[2015-05-27 03:58]
1) The equation you are comparing to is not actually the BERN model. It is an analytic approximation to the BERN model.
2) The statistical approximation assumes that the system is in equilibrium at 378 ppm.
It also includes assumptions about the climate response to a change in forcing, because when the ocean warms CO2 becomes less soluble, and increased stratification makes the ocean a less effective sink.
3) The purpose of creating an analytic approximation to a complex model, with an assumption of an equilibrium background, is that the authors were not answering the question "how much will the atmospheric concentration of CO2 change next year?", but rather "what is the CO2 perturbation that will result from a single impulse of CO2?" - very useful for calculating global warming potentials, which is what the IPCC wanted.
4) One could apply a simplified response function like the BERN-cycle approximation to the question "how much will the concentration change next year?" by assuming a preindustrial equilibrium background in 1750 and using historical emissions estimates for each year between 1750 and the present. That would allow calculating the amount by which CO2 concentrations would drop in the next year in the absence of additional emissions; one then adds in one more year's worth of emissions along with its own response function. But, as my point has been from the beginning, applying even this simplified approach will make it obvious that the "about half" rule of thumb doesn't actually work when you have large changes in emissions, because the airborne fraction is being driven in large part by the excess sink resulting from the existing disequilibrium.

You can read about the development of the simplified response function here:
http://www.gfdl.noaa.gov/bibliography/related_files/fj9601.pdf
-MMM

[2015-05-27 02:16]
I never said the science was wrong.
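A back-of-envelope version of the bookkeeping in point (4) - convolve each year's emissions with the single-impulse response, then ask how much the perturbation falls next year with no new emissions - can be sketched in a few lines. The coefficients below are an AR4-era sum-of-exponentials fit to the Bern model, and the emissions ramp is invented for illustration; treat both as assumptions, not data:

```python
import numpy as np

# Sum-of-exponentials fit to the Bern carbon-cycle impulse response
# (AR4-era values; illustrative, not authoritative).  a0 is the fraction
# that effectively never decays on these time scales.
A   = np.array([0.217, 0.259, 0.338, 0.186])    # amplitudes, sum to 1.0
TAU = np.array([np.inf, 172.9, 18.51, 1.186])   # e-folding times, years

def airborne_fraction(ages):
    """Fraction of a unit CO2 pulse still airborne `ages` years after emission."""
    ages = np.asarray(ages, dtype=float)
    return np.sum(A * np.exp(-ages[..., None] / TAU), axis=-1)

def perturbation(emis, yrs, t_eval):
    """Concentration perturbation at t_eval: past emissions convolved with the response."""
    ages = t_eval - yrs
    keep = ages >= 0
    return np.sum(emis[keep] * airborne_fraction(ages[keep]))

# Hypothetical ramp of emissions over 1750-2015 (NOT historical data)
years = np.arange(1750, 2016)
emissions = np.linspace(0.0, 10.0, years.size)

now     = perturbation(emissions, years, 2015)
next_yr = perturbation(emissions, years, 2016)  # assumes zero emissions in 2016
# next_yr < now: the existing disequilibrium keeps drawing CO2 down even with
# no new emissions, which is the point about the "about half" rule of thumb.
```

With any large recent emissions ramp the one-year drawdown is substantial, so the airborne "half" of a marginal year's emissions is partly the sinks still responding to earlier pulses.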
I am simply pointing out that the BERN model is a heuristic representation of a dispersed diffusional process. This is my model of diffusional sequestration laid on top of the BERN model:
http://imagizer.imageshack.us/a/img18/8127/normalizeddecayofco2.gif
WHT

[2015-05-27 02:10]
A random walk is inherent in the process. Consider the idea of vertical eddy diffusion. There is no way that the random walk in eddy diffusion is of individual water molecules across an interface. Instead, the diffusion is of a general process of eddies randomly moving up and down in a vertical column.

If people don't get out of this box of mistaking the micro for the macro, there is little hope of modeling large-scale processes.
WHT

[2015-05-27 00:38]
WHT: How does your simple diffusion approach account for saturation of the solubility of CO2 in the ocean? This is not a small perturbation but serves as the rate-limiting step.

Specifically, a CO2 molecule does not simply randomly walk across the air-water interface. There is a chemical potential, which can act as a significant barrier if the surface waters are depleted of carbonate. The carbonate buffering reactions are responsible for roughly 90% of the solubility of CO2. And reaction with rising atmospheric concentrations is indeed reducing the carbonate concentration of surface waters (aka ocean acidification).
Archer treats this all very simply in a zero-dimensional box model in his adaptation of Berner's GEOCARB III model at http://climatemodels.uchicago.edu/geocarb/geocarb.doc.html
Jonathan Gilligan

[2015-05-26 23:17]
So, given the choice between "maybe I'm wrong" and "the entire field of carbon cycle science is flawed", you go with the latter*? That confirms that the xkcd comic is pretty much the perfect encapsulation of this discussion.

-MMM

*There was potential for a third path, which would have been that you and climate science were consistent, and that I was the one who was confused either about what you were saying or about what the state of climate science was. But you haven't even bothered to engage with my key hypothetical corner case (e.g., carbon emissions dropping to near zero), which might have served to illuminate that question.

[2015-05-26 14:51]
My earliest background is in diffusion related to semiconductor processing. Definitely a case of climate science lagging behind fundamental science and technology. The xkcd gambit is weak.
WHT

[2015-05-26 08:31]
I have a PhD from MIT; I think I understand what diffusion is.
I also have published several peer-reviewed papers on climate change, and while I am not a carbon cycle expert myself, I have worked with carbon cycle experts and run earth system models. I recommend maybe reading some carbon cycle papers (like the Archer paper I linked to earlier), rather than continuing to point to https://xkcd.com/793/.

-MMM

[2015-05-26 04:32]
Clive,
"In fact there was not any real discus...Clive,<br /><i>"In fact there was not any real discussion at all in that article by Steve Mosher."</i><br />Well, the methods are at least described. They are RSM, FDM and CAM. The first two have various problems, and the third does abandon stations. I'm recommending a fourth method - least squares - which uses all stations. It applies at the stage of aggregation - globally is my preference, but at cell level if you must (Tamino). Once aggregated, the issue of the loose degree of freedom needs to be pinned down, but we're then dealing with aggregates, not stations.<br /><br />Your cell iteration sounds something like my method. Tc_my is like G. If you alternate G=av_s(T-L) and L=av_y(T-G), that's what I'm suggesting, though I would start with G=0, you seem to start with L=0. But there is no time period issue. No period is mentioned. You use all the data. For me, of course, these are weighted averages.<br />Nick Stokeshttp://www.blogger.com/profile/06377413236983002873noreply@blogger.comtag:blogger.com,1999:blog-7729093380675162051.post-36401715918055633082015-05-26T02:55:07.266+10:002015-05-26T02:55:07.266+10:00Nick,
You alluded to the problem in your previous post:
"The normal remedy is to adopt a fixed interval, say 1961-90, on which G is constrained to have zero mean. Then the L's will indeed be the mean for each station for that interval, if they have one. The problem of what to do if not has been much studied. There's a discussion here."

In fact there was not any real discussion at all in that article by Steve Mosher.

Lsm are the offsets for any station. If a station has no data within the normalisation period then it cannot contribute to the zeroing of Gmy. As an extreme example, let's take a station at the summit of Everest, so its offset is always about 20C. The station only has data from 1850 to 1940. This means it will move Gmy to lower values for early years. There is nothing that can be done about that bias using a fixed normalisation period.

What I did instead (and I am not saying this is right) is the following.

Suppose we had perfect coverage in all grid cells. Then we can measure exactly the average temperature in each region, Tmy(lat,lon). Instead we have a changing population of stations, so any value will be biased. But by how much? Well, we can estimate the 12 offsets for each station, Lsm, by calculating the mean difference between the measured average Tm(lat,lon) and the individual station measurements Xm, averaged over all years when the station is present.

Now we can calculate a new set of regionally averaged temperatures Tcmy(lat,lon) = mean(Xms - Lsm), where the average only includes stations present in that particular month and year.

You can iterate round the loop, replacing with the new values to recalculate Lsm. In reality the result converges after just 2 steps. This IMHO is the best estimate possible for a regional Xm(lat,lon).
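A toy version of this alternating calculation - per-station offsets L and a common signal G, averaged over only the (station, year) pairs with data - might look like the following. Everything here is synthetic (invented stations, one annual series instead of 12 monthly ones), so it is a sketch of the iteration, not the real code:

```python
import numpy as np

# Alternating fit of station offsets L and a common regional signal G,
# using only the (station, year) pairs that actually have data.
rng = np.random.default_rng(42)
n_st, n_yr = 8, 40
true_G = np.linspace(0.0, 0.8, n_yr) - 0.4          # common trend, zero mean
true_L = rng.normal(10.0, 5.0, n_st)                # per-station offsets
T = true_L[:, None] + true_G[None, :] + rng.normal(0.0, 0.1, (n_st, n_yr))
T[rng.random((n_st, n_yr)) < 0.2] = np.nan          # ~20% missing readings

G = np.zeros(n_yr)                                  # start from G = 0
for _ in range(5):                                  # converges in ~2 steps
    L = np.nanmean(T - G[None, :], axis=1)          # L_s = av_y(T - G)
    G = np.nanmean(T - L[:, None], axis=0)          # G_y = av_s(T - L)
    G -= G.mean()                                   # pin the loose degree of freedom
```

One degree of freedom is loose: adding a constant to every L and subtracting it from every G leaves the fit unchanged, which is why some convention (zero-mean G here, or normals over a fixed period) has to pin it down afterwards.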
Note that the Everest station now only affects years and months in which it appears. The downside is that the normalisation of G is done using the corrected average Tmy per grid cell. You have to use the full time period rather than 30 years; otherwise Everest-type stations won't contribute.
Clive Best

[2015-05-25 21:15]
Clive, I'm not sure what the problem is here. You minimise the sum of squares to get the offsets and common T. You only need to worry about the fixed period afterwards. I find Tamino's method unnecessarily complicated; if you're minimising SS, there is nothing gained by breaking into cells.

Anyway, that's how it works with TempLS, and with the iterative version that I've described (http://moyhu.blogspot.com.au/2015/05/how-to-average-temperature-over-space.html). You aggregate by least-squares fitting, effectively as 12 separate month analyses, each with a degree of freedom to add any number from the global to the offsets. Then you use these dofs to align with normals over a fixed period.
Nick Stokes

[2015-05-25 19:03]
I had assumed that the station offsets as described by Tamino had the advantage that all station data could be included. However, when I tried to normalise these station offsets to a fixed period, 1961-1990, it didn't work:

Anom(mys) = T(mys) - (norm(m,lat,lon) - offset(ms))

Now I realise why. The problem with using any fixed period for the 12 normals is that it only applies to stations with measurements in that period.
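The base-period problem can be made concrete with a toy "Everest" series (all numbers invented): a station reporting only 1850-1940 has no values inside a 1961-1990 window, so its fixed-period normal is undefined, while a full-period offset is not.

```python
import numpy as np

# A station reporting only 1850-1940 has no values inside a 1961-1990 base
# window, so a fixed-period normal is undefined for it, while a full-period
# offset still works.  Entirely synthetic numbers.
years = np.arange(1850, 2015)
temp = np.full(years.size, np.nan)
early = years < 1940
temp[early] = -20.0 + 0.002 * (years[early] - 1850)  # invented cold-station record

base = (years >= 1961) & (years <= 1990)
has_base_data = np.any(np.isfinite(temp[base]))
fixed_normal = np.nanmean(temp[base]) if has_base_data else np.nan  # undefined here
full_offset = np.nanmean(temp)  # usable: every reporting year contributes
```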
The normalisation itself is affected by these 'missing stations'. The only way round this is to exclude these stations completely or to interpolate their values into the normalisation period. However, I strongly suspect that interpolation itself exaggerates any warming trends. I think Berkeley Earth suffers from this because they have to interpolate over long times and distances to include all stations. Too much smoothing reinforces underlying trends.

You either use all stations and normalise to the full period, or use a fixed period and discard those stations with insufficient data.
Clive Best

[2015-05-25 11:43]
"third-error"
I meant third-order error. But I actually made a second-order error with the trends. I forgot that I had set x-values to 0 where w = 0 (missing values). But when calculating trends, I used unweighted lm(), so the last months of 2015 counted as zero. This has a small effect, since I was setting 2014 to zero as the anomaly base. But now the trend effect due to the mean drift over a year is just 0.99998 instead of 1.
Nick Stokes

[2015-05-25 06:34]
To follow up, there is a third-error there that I have never corrected. If there is warming, then over 1961-90 the Decembers of each calendar year tend to be warmer than the Januarys, because of the warming trend - typically by 0.01-0.02 °C. And that superimposes a sawtooth wave of that amplitude. A sawtooth wave has a very small effect on trend - of order amplitude/duration.

That shows up here. For Land/Ocean, the trend does not converge to 1, but to 0.999637. That is because of that effect.
Nick Stokes

[2015-05-25 06:33]
I don't think you understand that diffusion is just a random walk, whereby the CO2 can enter and leave an organic carbon cycle many times. The fact that this occurs goes into the representation of dispersed diffusion.
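One minimal reading of the dispersed-diffusion idea: average a single-rate exponential decay over a maximum-entropy (exponential) spread of rates. The mixture decays hyperbolically, as 1/(1 + k0*t), a fat tail that no single e-folding can match. The mean rate k0 below is an invented number, and this is a sketch of the argument, not anyone's actual model:

```python
import numpy as np

# Superpose exp(-k*t) over an exponential (maximum-entropy, fixed-mean)
# distribution of sequestration rates k.  The mixture decays as
# 1/(1 + k0*t) -- a fat tail no single exponential can reproduce.
k0 = 0.05                                  # assumed mean rate, 1/yr
k = np.linspace(1e-8, 2.0, 200_000)        # rate grid (k0 << 2.0, so tail is negligible)
dk = k[1] - k[0]
p = np.exp(-k / k0) / k0                   # MaxEnt density for a positive rate of mean k0

def dispersed(t):
    """Numerically average exp(-k t) over p(k); analytic result is 1/(1 + k0 t)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return (np.exp(-np.outer(t, k)) * p).sum(axis=1) * dk

t = np.array([0.0, 10.0, 100.0, 1000.0])
fat = dispersed(t)
analytic = 1.0 / (1.0 + k0 * t)
single = np.exp(-k0 * t)                   # one exponential at the mean rate
```

The same superposition trick is how multi-compartment fits like the BERN sum of exponentials can be read as a discretized spread of diffusion time scales.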
There are so many different time scales of diffusion that maximum entropy is the most effective way to account for them all with the minimum amount of bias.

I recommend the work of James Sethna, who has the concept of "sloppy" models and of finding the simplest representation possible. This is on my mind as we are discussing simplified modeling on the Azimuth Forum. I will use this diffusion example over there and see what they think.
WHT

[2015-05-25 06:21]
Thanks, Clive,
Yes, I do compute 12 normals. In fact, the monthly calc is effectively 12 separate time-series calcs, combined at the end. That makes a subtle difference to the effect of the anomaly base. For an annual calc, shifting the base period just adds or subtracts a constant. But for monthly, it shifts the months relative to each other. So if, in 1961-90, Junes were unusually warm relative to May/July, then ever after, June anomalies will tend to show as cool. The inverse pattern will be impressed. That's when having as long a base period as is practicable really helps.
Nick Stokes

[2015-05-24 21:40]
Very nice!
" I should mention that I normal...Very nice !<br /><br />" I should mention that I normalised the global result at each step relative to the year 2014. Normally this can be left to the end, and would be done for a range of years"<br /><br />In the real world how do you normalise TempLS? Do you calculate 12 monthly normals 1961-1990 ? If not how do you avoid doing so. NCDC and Berkeley use an earlier normalisation 1959-1980 which then causes about a 0.15C offset compared to CRU and GISS. <br /><br />Clive Besthttp://www.blogger.com/profile/10486120708699060846noreply@blogger.com