Thursday, April 20, 2023

Hourly modelling of conversion of USA48 to wind/solar, with costing and optimisation - Updated.

Update note - I made an error which inflated the build costs of W&S by a factor of 8.76. That greatly inflated the resulting optima, by about that amount. I have rewritten this post; you can see the original here.

Update 2 (two hours later). I realised that I had costed the mean GW, rather than the built faceplate GW. I have fixed that; optima are approximately doubled.

Update 3 (four hours later). I realised that the first estimate wasn't bad after all, because I had multiplied the cost by 3 to allow for the capacity factor. Working with faceplate capacity installed, that isn't appropriate. So it's back in the range of $5-6T.

Update 4. Not a mistake this time, but an improvement. I've costed wind and solar separately, with wind the same at $1.647/W, but solar cheaper at $1/W. This is preparatory to improving the mix by changing the proportion of solar, as in the next post. The difference here is small, saving maybe $0.5T.

In a post earlier this month, I looked at a currently circulating claim, most recently from CFACT, that replacement of fossil fuel (FF) generation with wind and solar (W&S) would require impossible amounts of storage. Costs of hundreds of trillions of dollars were mentioned. That earlier post has links.

The original analysis of hourly IEA data was done by Ken Gregory, of Friends of Science. I pointed out the problem, which was that they had allowed too little generation capacity, so the hypothetical grid was relying on storage to cope with annual fluctuations in demand. No system, FF or otherwise, can reasonably do this. You must have enough generation to cover the annual peak periods.

So I adapted Ken's spreadsheet to allow variable amounts of generation. Bringing it up to adequate levels hugely reduces the storage requirement. Building more quickly makes it very small.
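The storage calculation can be sketched in code. This is a minimal illustration of the idea, not Ken's actual spreadsheet logic: given hourly series for FF generation and existing W&S output, scale W&S by the expansion factor H (new relative to old, as below), and find the smallest reservoir that never runs dry over the year. Starting full, that is the maximum drawdown of the cumulative hourly balance:

```python
import numpy as np

def min_storage_twh(ws_gw, ff_gw, H):
    """Minimum storage (TWh) needed to get through the year when new W&S
    capacity of H times the existing fleet replaces FF generation.

    ws_gw, ff_gw: hourly GW series for existing W&S and FF output.
    """
    net = H * np.asarray(ws_gw, float) - np.asarray(ff_gw, float)  # hourly GWh surplus
    balance = np.cumsum(net)               # cumulative surplus since hour 1
    high = np.maximum.accumulate(balance)  # running high-water mark ("storage full")
    drawdown = high - balance              # how far below full storage is drained
    return drawdown.max() / 1000.0         # GWh -> TWh

# Toy example: 1 GW of W&S with two calm hours, a steady 1 GW FF load, H = 1;
# storage must cover the 2 GWh shortfall.
ws = [1, 0, 0, 1]
ff = [1, 1, 1, 1]
print(min_storage_twh(ws, ff, 1.0))   # 0.002 TWh
```

The real calculation runs over 8760 hourly values from the IEA data rather than a toy series, but the drawdown logic is the same.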

Now increasing the generation also has costs, but nothing like the huge costs of Gregory and CFACT. A small number of trillions. The US grid is big business.

A commenter at WUWT, Old Cocky, suggested that costing would be interesting. Since the total is the sum of a linearly rising build cost and a rapidly falling storage cost, there must be an optimum. Ken's model was that W&S would increase proportionally (factor H) to meet the demand formerly met by FF, while nuclear and hydro would remain unchanged (and so are not in the analysis). You can analyse hourly data for just one year to get the minimum storage that would get through the year.

Data collection

I have recently been able to extend the data to the four years 2019-2022, which seem to be all the full years IEA has. It would be tempting to use a multi-year set, but the problem is that wind has been growing rapidly, so if the expansion of 2019 is sufficient, then later years are wastefully more than sufficient. I could have tested one year of wind on four years of variable demand, but that misses the point, which is looking for the effect of rare bad wind episodes.


There are basically just two costs that matter for this optimisation, and I've used Ken's numbers, for which he gives the source. The capital cost of building W&S is set at $1647 per kW. And for storage, $347 per kWh. To cost the build, we need the currently installed base (to be multiplied by H). I had to look that up in Wikipedia; the values (for all USA, W&S) for 2019-2022 are 0.1821, 0.2190, 0.2563 and 0.2813 TW.
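In code, the two cost components reduce to a couple of constants and a unit conversion: $1647/kW is $1.647T per TW of faceplate capacity, and $347/kWh is $0.347T per TWh of storage. The base values are the Wikipedia figures above; the function itself is just a sketch of the arithmetic:

```python
BUILD_T_PER_TW = 1.647    # $T per TW faceplate ($1647/kW)
STORE_T_PER_TWH = 0.347   # $T per TWh storage ($347/kWh)

# Installed W&S faceplate base (TW), all USA, from Wikipedia
BASE_TW = {2019: 0.1821, 2020: 0.2190, 2021: 0.2563, 2022: 0.2813}

def cost_T(H, storage_twh, year):
    """Total capital cost ($T): new build of H times the base, plus storage."""
    build = BUILD_T_PER_TW * H * BASE_TW[year]
    store = STORE_T_PER_TWH * storage_twh
    return build + store

# e.g. H = 10 on the 2019 base with 5 TWh of storage:
print(round(cost_T(10, 5, 2019), 3))   # 4.734
```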


A small change from the last post (and KG). H is now the ratio of new (not old+new) W&S to old W&S. So values are one less than in that post.

As shown in that post, the storage requirement and cost reduce rapidly (exponentially) with H. But the build cost increases linearly with H. So somewhere there is an optimum. Here is a table of computed values around the optimum, for the years 2019-2022. The first column is H, the second is minimum storage needed, computed as in the earlier post. The third is the build cost in $B, formed by multiplying the faceplate GW by the cost per GW. The fourth is storage (col 2) multiplied by storage cost/TWh, and the fifth is the sum, for which we want the minimum (bolded).

For any H, as years progress the build cost increases and the storage cost decreases. This is because the base W&S level is increasing. So the optimum moves to lower H.

[Tables for 2019-2022; columns: H, Storage TWh, Cost (H×W&S) $T, Storage $T, Sum Costs $T, with the minimum of the last column bolded]
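The scan for the optimum is just that: compute the sum for a grid of H and take the minimum. In this sketch the storage curve is a made-up exponential stand-in for the simulated values (the real numbers come from the hourly data; s0_twh and k here are illustrative, not fitted), but it shows how a linear build term and a decaying storage term produce an interior minimum:

```python
import math

def total_cost_T(H, base_tw=0.1821, s0_twh=250.0, k=0.6):
    """Build cost rises linearly in H; storage need decays roughly
    exponentially. s0_twh and k are illustrative placeholders."""
    build = 1.647 * H * base_tw                   # $T, at $1647/kW
    storage = 0.347 * s0_twh * math.exp(-k * H)   # $T, at $347/kWh
    return build + storage

# Scan H in steps of 0.1 and keep the cheapest
grid = [h / 10 for h in range(10, 301)]
best = min(grid, key=total_cost_T)
print(best, round(total_cost_T(best), 2))   # -> 8.6 3.08
```

With the real storage values from the hourly simulation substituted for the exponential, the same scan produces the bolded minima in the tables.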


The amounts are much less than the hundreds of trillions in the CFACT report, and now around $5T (US GDP is about $22T). How should we think about this? Well, electricity generation is big business, and this represents renewal over, say, 27 years. That would have cost a lot in any technology, and of course produces a system with much reduced fuel costs. But here are some factors which might increase or reduce it:
  • The obvious big thing that might increase it is the need to be more conservative about storage. The criterion for each year was to get through that year with storage not drained. There will be worse years.
  • The obvious big thing that might decrease it is continued reduction in costs, which have been coming down a lot.
  • Another big reduction comes from the artificial requirement that the dips in W&S that drain storage are proportional to H. That would be true if the original sites were each expanded by that factor. But new sites will be found, and their diversity will smooth out the dips in W&S.
  • A very big reduction comes if cheaper forms of storage than battery are used, as they will be, particularly pumped storage.
  • In Europe at least, a big reduction will come from improved interconnection, allowing trade in surpluses, and also rationalisation in location. It is much better to spend on solar in Spain (or Morocco) than in Germany. And in Mexico rather than Maine.
  • The model assumed hydro would continue as before. But there will be a continued market-led shift of hydro to drain dams only when W&S are low (and prices high). This has the effect of a big storage increase.
Incidentally, storage costs at the optima are less than build costs.


As before, the very high levels of storage in the reports by Ken Gregory and CFACT are ridiculous. You have to optimise, and then the costs come down to manageable levels (about $5T) as capital cost to replace FF with W&S and battery storage. And there are many ways of ameliorating them.


I have added the csv files for each of the years 2019-2022 to the earlier zipfile.

I thought about ways to test all years in a continuous optimisation. The problem is that W&S has been increasing rapidly, so how should it be expanded? If uniformly, the storage troubles of 2019 will dominate. There isn't any point in using one W&S year expanded for 2019-22; the point is to test the variability of W&S over four years.

I tried detrending W&S (and FF), so 2019 was in effect pre-expanded. But this was still not good enough; 2019 was still the limiting year, because exaggerated expansion leads to exaggerated variability. So I'm stuck, for the moment, with doing individual years.
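The detrending amounts to rescaling each hour's W&S output to a common reference fleet size, so 2019 behaves as if it were already expanded to the later base. A sketch, with a hypothetical hourly capacity series (the names and interpolation are assumptions; and as noted above, this rescaling alone was not sufficient, because exaggerated expansion exaggerates variability):

```python
import numpy as np

def detrend_ws(ws_gw, capacity_tw, ref_tw):
    """Rescale hourly W&S output as if the installed fleet were ref_tw.
    capacity_tw: installed faceplate (TW) at each hour, e.g. interpolated
    from the annual Wikipedia figures."""
    ws = np.asarray(ws_gw, float)
    cap = np.asarray(capacity_tw, float)
    return ws * (ref_tw / cap)

# A 2019 hour with 50 GW of output on a 0.1821 TW fleet, re-expressed
# on the 2022 fleet of 0.2813 TW:
print(round(detrend_ws([50.0], [0.1821], 0.2813)[0], 1))   # -> 77.2
```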


  1. Hi Phil,
    I have received your comment by email, but I can't get it to show here - possibly because it is too long. I think it is important, especially TG's email. I could try to post a slimmed version if you like, or you could do that.

    1. I'll try breaking it up.

      On Wed, February 16, 2022 8:12 am, wrote:

      I am interested in learning more about the methodology behind your
      global average temperature calculation. Specifically:

      How do you weight the disparate data sources for geographical spread?

      Do you grid the data? If so how large are the gridcells and how are
      they weighted? If not, how do you handle areas of sparse coverage?

      How do you account for biases introduced by station moves and
      closures, instrumentation changes, observation time changes etc?

      Is the source code available for reproduction/replication?

      Lastly, could you share the names and experience of the site developers?

      Phil Clarke.

    2. Hi Phil.

      There is no weighting or gridding of the data - it is a straight statistical mean.

      Regarding station/instrument changes, we do not account for that. The data goes through a QA process to compare against neighboring sites and station averages. As long as it passes QA, it will get added to the database. Since there is no spatial component to the data, if a station closes, it has no bearing on the statistical mean.

      The code is not available, but all of the data is freely available online.
      The code simply gathers the data and stores it in our database. A separate process runs to create the mean calculation.

      We have over 30 years of meteorological and climatological experience, with graduate degrees in meteorology and physical geography.


    3. Hi Phil.

      On Wed, February 16, 2022 8:44 am, wrote:
      Thanks for the prompt reply, TG.

      So you are saying every station carries equal weight in the mean?
      Surely this will result in areas such as the US which has more
      stations per square mile than say, Africa or Antarctica being
      massively over-represented? That is, it is not really a 'global mean'?

      Phil Clarke

    4. There are plenty of global means (e.g. NOAA and NASA) that use weighting and grids to account for station distribution. That's not the purpose of our project. We want to acquire all available surface and ocean readings on a real-time basis, and calculate a statistical mean using that data.

    5. So the TG developers are quite clear, their product is not a 'global mean'!

    6. NB Temperature.Global were courteous and prompt in responding; it might be worth somebody else emailing them similar questions, given my correspondence is a year old.

    7. Nick: Net-Zero America (a Princeton program) has several highly detailed plans and cost estimates for achieving net-zero emissions by 2050. For the 100% renewable plan, they build about 3-fold the renewable generation capacity needed to meet average US demand with average output. In other words, they waste two-thirds of potential electric power generation trying to get through the periods when it is calm or cloudy. This same factor of three shows up in two other analyses I've read. That triples the current cost of renewables (which is also rising dramatically with rising interest rates on capital-intensive projects). This is after they increase transmission capacity 5-fold in a cost-ineffective attempt to move free power that would otherwise go to waste to places where it is needed.

      I skimmed through this plan more than a year ago and it has been changing. Things may be different now, but this was the essence of the high-renewables option when I read it.

      Batteries are now a financially practical option (IMO) in the latest Southern California cheap solar project for getting to 100% renewable on 300 sunny days a year. Unfortunately we will have trouble just replacing internal combustion engines in cars with batteries, and can't possibly deploy them for storage on grid scale. The best way to make renewable electricity into reliable electricity is to back it up with 100% natural gas. The capital cost of having a natural gas plant idle is only about $0.01/kW-h, and you only pay the cost for fuel when you aren't paying renewable generators. That cost is much cheaper than any other option for making renewables reliable. We could probably get 70% of our electricity from renewables at a low cost and reliably, but that won't satisfy idealists (who risk turning off the public with high cost and unreliability).

      Pumped storage wastes about 50% of your power in the process of storing and regenerating it. The website is a great and realistic resource, but is getting out of date.

      Best, Frank

    8. Frank,
      I presume the factor of 3 is on top of the capacity factor of about 3. That makes H=9, which is at the low end of my range.

      I agree that gas will probably be used for some time, as a fallback option.

      The wastage of pumped hydro (not, as I understand it, as much as 50%) in many ways doesn't matter, because unlike FF, you can just keep W&S generating when in excess without penalty, and that is the power you would be using. But I think now that even ordinary hydro can be used as a very effective store. In the end you are just getting power from the river flow, regardless of dam. So run to keep a reserve in the dam. Use the reserve at times of great shortage, and make up by using less hydro at other times. Tasmania is in that catch-up stage now.

      The market will reinforce this. Managers want to use their water when prices are high.

    9. Nick: Yes, I meant two factors of 3. A means of electric generation on demand is the only way to truly make "renewable electricity into reliable electricity", and gas can do that until nuclear makes a comeback. You can claim you will eventually add carbon capture and storage to a strategy of generous backup with natural gas, which is probably cheaper than trying to make something that is inherently unreliable into something reliable. Your point that "free" excess renewable electricity that would otherwise be wasted can be used by pumped storage, even if 50% of it is lost, is excellent; I missed that. Of course, large areas are not suitable for pumped storage, and those that are suitable are often too far from generation and use. A large fraction of electricity today is generated within 50? miles of where it is consumed, because transmission is expensive.

      The problem I have with a lot of this is that goals are arbitrarily set by activists. No one is asking: "How much can we reduce CO2 emissions from electricity generation for $200 a ton or less?" (Pick your favorite social cost of carbon.) The goal is 100% reduction, no matter what the cost! And the cost will always be much higher than expected. And if you want the efficiency of privately owned generation, you should be using the same discount factor in the social cost of carbon that the business is going to charge its customers for capital costs.


    10. Frank
      "gas can do that until nuclear makes a comeback"
      Nuclear doesn't help here, because it is usually operated at 100%, and so can't compensate for variable supply (or demand).

      Pumped storage needn't be close. The UK has had just one big one for nearly 50 years (Dinorwig) and it has been useful for the entire grid. Eastern Australia likewise has had just one (Talbingo) and is developing another (Snowy 2) which will be used everywhere. Victoria isn't close to Talbingo, but has found it very useful.

    11. Need for power storage in Great Britain and pumped storage:

  2. Hi Nick. This is quite interesting. Sorry to have missed it when posted. The ongoing war with corrections to analysis is something everybunny fights. FWIW Eli has always thought that there are three factors such calculations never wrestle with.

    The first is increased efficiency. You need a lot less energy if you use heat pumps and LEDs than if you use thermal heating and incandescents. You can call this elegance.

    The second is that the overbuild can be used to displace energy greedy things like metallurgy. It's not a cost but a scheduling opportunity. What if instead of new build you use just-in-time energy to only do those sorts of things on the 300 days a year when W&S are in oversupply, and send everyone home on the 65 when not?

    The third is that economics places a lower cost on future spending than current which means that the ongoing cost of fuel is artificially lower than the capital cost of new W&S. Displacing costs to the future in the name of inflation assumes a future.

  3. Thanks, Eli
    Yes, I agree with those general things. People will find a way to use the near-free electricity. And demand will go down with heat pumps etc, probably, though in the short term it may be dominated by switching from gas to HP electric.

    Looking into it a bit more I saw that the sticking point in the 4 years was the time in Feb 2021 with the Texas troubles. That has the combination of low sun, low wind and high demand. It's the sort of risk that has to be provided for (though the current grid didn't do very well). If it weren't for that, a shift to more solar would have been very advantageous.