The subject of EMICs – Earth Models of Intermediate Complexity – came up in recent comments on Ghosts of Climates Past – Eleven – End of the Last Ice age. I promised to write something about EMICs, in part because of my memory of a more recent EMIC paper. This article will be fairly short, as I found that I have already covered some of the EMIC ground.

In the previous 19 articles of this series we’ve seen a concise summary (just kidding) of the problems of modeling ice ages. That is, it is hard to model ice ages for at least three reasons:

  • knowledge of the past is hard to come by, relying on proxies which have dating uncertainties and multiple variables being expressed in one proxy (so are we measuring temperature, or a combination of temperature and other variables?)
  • computing resources make it impossible to run a GCM at current high resolution for the 100,000 years necessary, let alone to run ensembles with varying external forcings and varying parameters (internal physics)
  • lack of knowledge of key physics, specifically: ice sheet dynamics with very non-linear behavior; and the relationship between CO2, methane and the ice age cycles

The usual approach using GCMs is to have some combination of lower resolution grids, “faster” time and prescribed ice sheets and greenhouse gases.

These articles cover the subject:

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

One of the papers I thought about covering in this article (Calov et al 2005) is already briefly covered in Part Eight. I would like to highlight one comment I made in the conclusion of Part Ten:

What the paper [Jochum et al, 2012] also reveals – in conjunction with what we have seen from earlier articles – is that as we move through generations and complexities of models we can get success, then a better model produces failure, then a better model again produces success. Also we noted that whereas the 2003 model (also cold-biased) of Vettoretti & Peltier found perennial snow cover through increased moisture transport into the critical region (which they describe as an “atmospheric–cryospheric feedback mechanism”), this more recent study with a better model found no increase in moisture transport.

So, onto a little more about EMICs.

There are two papers from 2000/2001 describing the CLIMBER-2 model and the results from sensitivity experiments. These are by the same set of authors – Petoukhov et al 2000 & Ganopolski et al 2001 (see references).

Here is the grid:

From Petoukhov et al (2000)

The CLIMBER-2 model has a low spatial resolution which only resolves individual continents (subcontinents) and ocean basins (fig 1). Latitudinal resolution is the same for all modules (10º). In the longitudinal direction the Earth is represented by seven equal sectors (roughly 51º longitude) in the atmosphere and land modules.

The ocean model is a zonally averaged multibasin model, which in the longitudinal direction resolves only three ocean basins (Atlantic, Indian, Pacific). Each ocean grid cell communicates with either one, two or three atmosphere grid cells, depending on the width of the ocean basin. Very schematic orography and bathymetry are prescribed in the model, to represent the Tibetan plateau, the high Antarctic elevation and the presence of the Greenland-Scotland sill in the Atlantic ocean.

The atmospheric model uses a simplified approach, leading to the description “2.5D model”. The time step can be relaxed to about one day. The ocean grid is a little finer in latitude.

On selecting parameters and model “tuning”:

Careful tuning is essential for a new model, as some parameter values are not known a priori and incorrect choices of parameter values compromise the quality and reliability of simulations. At the same time tuning can be abused (getting the right results for the wrong reasons) if there are too many free parameters. To avoid this we adhered to a set of common-sense rules for good tuning practice:

1. Parameters which are known empirically or from theory must not be used for tuning.

2. Wherever possible parametrizations should be tuned separately against observed data, not in the context of the whole model. (Most of the parameter values in Table 1 were obtained in this way and only a few of them were determined by tuning the model to the observed climate).

3. Parameters must relate to physical processes, not to specific geographic regions (hidden flux adjustments).

4. The number of tuning parameters must be much smaller than the degrees of freedom predicted by the model. (In our case the predicted degrees of freedom exceed the number of tuning parameters by several orders of magnitude).

To apply the coupled climate model for simulations of climates substantially different from the present, it is crucial to avoid any type of flux adjustment. One of the reasons for the need of flux adjustments in many general circulation models is their high computational cost, which makes optimal tuning difficult. The high speed of CLIMBER-2 allows us to perform many sensitivity experiments required to identify the physical reasons for model problems and the best parameter choices. A physically correct choice of model parameters is fundamentally different from a flux adjustment; only in the former case the surface fluxes are part of the proper feedbacks when the climate changes.

Note that many GCMs back in 2000 did need to use flux adjustment (in Natural Variability and Chaos – Three – Attribution & Fingerprints I commented “..The climate models ‘drifted’, unless, in deity-like form, you topped up (or took out) heat and momentum from various grid boxes..”)

So this all sounds reasonable. Obviously it is a model with less resolution than a GCM, and even the high resolution (by current standards) GCMs need some kind of approach to parameter selection (see Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes).

What I remembered about EMICs and suggested in my comment was based on this 2010 paper by Ganopolski, Calov & Claussen:

We will start the discussion of modelling results with a so-called Baseline Experiment (BE). This experiment represents a “suboptimal” subjective tuning of the model parameters to achieve the best agreement between modelling results and palaeoclimate data. Obviously, even with a model of intermediate complexity it is not possible to test all possible combinations of important model parameters which can be considered as free (tunable) parameters.

In fact, the BE was selected from hundreds of model simulations of the last glacial cycle with different combinations of key model parameters.

Note, that we consider “tunable” parameters only for the ice-sheet model and the SEMI interface, while the utilized climate component of CLIMBER-2 is the same in previous studies, such as those used by C05 [this is Calov et al. (2005)]. In the next section, we will discuss the results of a set of sensitivity experiments, which show that our modelling results are rather sensitive to the choice of the model parameters..

..The ice sheet model and the ice sheet-climate interface contain a number of parameters which are not derived from first principles. They can be considered as “tunable” parameters. As stated above, the BE was subjectively selected from a large suite of experiments as the best fit to empirical data. Below we will discuss results of a number of additional experiments illustrating the sensitivity of simulated glacial cycle to several model parameters. These results show that the model is rather sensitive to a number of poorly constrained parameters and parameterisations, demonstrating the challenges to realistic simulations of glacial cycles with a comprehensive Earth system model.

And in their conclusion:

Our experiments demonstrate that the CLIMBER-2 model with an appropriate choice of model parameters simulates the major aspects of the last glacial cycle under orbital and greenhouse gases forcing rather realistically. In the simulations, the glacial cycle begins with a relatively abrupt lateral expansion of the North American ice sheets and parallel growth of the smaller northern European ice sheets. During the initial phase of the glacial cycle (MIS 5), the ice sheets experience large variations on precessional time scales. Later on, due to a decrease in the magnitude of the precessional cycle and a stabilising effect of low CO2 concentration, the ice sheets remain large and grow consistently before reaching their maximum at around 20 kyr BP..

..From about 19 kyr BP, the ice sheets start to retreat with a maximum rate of sea level rise reaching some 15 m per 1000 years around 15 kyr BP. The northern European ice sheets disappeared first, and the North American ice sheets completely disappeared at around 7 kyr BP. Fast sliding processes and the reduction of surface albedo due to deposition of dust play an important role in rapid deglaciation of the NH. Thus our results strongly support the idea about important role of aeolian dust in the termination of glacial cycles proposed earlier by Peltier and Marshall (1995)..

..Results from a set of sensitivity experiments demonstrate high sensitivity of simulated glacial cycle to the choice of some modelling parameters, and thus indicate the challenge to perform realistic simulations of glacial cycles with the computationally expensive models.

My summary – the simplifications of the EMIC combined with the “trying lots of parameters” approach mean I have trouble putting much significance on the results.

While the basic setup, as described in the 2000 & 2001 papers, seems reasonable, EMICs miss a lot of physics. This is important with something like starting and ending an ice age, where the feedbacks in higher resolution models can significantly reduce the effect seen by lower resolution models. When we run hundreds of simulations with different parameters (relating to the ice sheet) and pick the best result, I wonder what we’ve actually found.

That doesn’t mean they are of no value. Models help us to understand how the physics of climate actually works, because we can’t do these calculations in our heads. And GCMs require too much computing power to properly study ice ages.

So I look at EMICs as giving some useful insights that need to be validated with more complex models. Or with further study against other observations (what predictions do these parameter selections give us that can be verified?)

I don’t see them as demonstrating that the results “show” we’ve now modeled ice ages. The exact same comment also goes for another 2007 paper, which used a GCM coupled to an ice sheet model, that we covered in Part Nineteen – Ice Sheet Models I. An update of that paper in 2013 came with an excited Nature press release, but to me it simply demonstrates that with a few unknown parameters you can get a good result with some specific values of those parameters. This is not at all surprising. Let’s call it a good start.

Perhaps Abe-Ouchi et al 2013 is the paper that will be verified as the answer to the question of ice age terminations – the delayed isostatic rebound.

Perhaps Ganopolski, Calov & Claussen 2010 with the interaction of dust on ice sheets will be verified as the answer to that question.

Perhaps neither will be.

Articles in this Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


CLIMBER-2: a climate system model of intermediate complexity. Part I: model description and performance for present climate, V Petoukhov, A Ganopolski, V Brovkin, M Claussen, A Eliseev, C Kubatzki & S Rahmstorf, Climate Dynamics (2000)

CLIMBER-2: a climate system model of intermediate complexity. Part II: model sensitivity, A Ganopolski, V Petoukhov, S Rahmstorf, V Brovkin, M Claussen, A Eliseev & C Kubatzki, Climate Dynamics (2001)

Transient simulation of the last glacial inception. Part I: glacial inception as a bifurcation in the climate system, Reinhard Calov, Andrey Ganopolski, Martin Claussen, Vladimir Petoukhov & Ralf Greve, Climate Dynamics (2005)

Simulation of the last glacial cycle with a coupled climate ice-sheet model of intermediate complexity, A. Ganopolski, R. Calov, and M. Claussen, Climate of the Past (2010)

The respected Grattan Institute in Australia hosted a discussion of energy insiders – grid operators, distributors, the regulator. It’s well worth reading for many reasons. When I was thinking about this article I remembered the discussion. Here are a few extracts:

MIKE: Andrew, one of the elements in the room here is the growth in peak demand. I can put however many air conditioners I want in my house and as long as I can pay for the electricity, I can turn them on and I don’t have to worry about that. You certainly can’t regulate for it. When are we going to allow you to regulate for peak demand? Obviously it’s not in the interests of the network operators who get a guaranteed rate of return on investment in growing the grid, as I understand it. It’s not there in the business model anyway. Do you see that coming?

MIKE: Well, controlling this thing which is really driving a lot of the issues that we have which is peak demand growth. The issue at the moment is that we haven’t had peak demand growth in the last few years because we haven’t had hot weather. We just don’t know how many air conditioners are out there that have never been turned on – three or four per household? People have made those investments, and when the next hot weather comes they’re going to recoup their investments by running them full bore. We don’t know what the load will be like when that happens.

ANDREW: Mike’s quite right. Unless there is a change in usage, there’s the risk of this ongoing growth in demand and the ongoing necessity for investment in the network, and a continued increase in prices. That is the key to it. Then the question becomes who’s responsible for managing the demand? Ought it to be the businesses themselves, and providing the businesses with the incentives to go for the lowest cost solution, whether that is network augmentation or demand management. That’s a very good way of approaching it. The other is to look at the pricing structures such that those consumers who are putting the extra load on the network, with the four air conditioners, are paying for their load on the network. At the moment everybody pays on the basis of average use rather than paying for how much demand they put on the network. Now that’s a pretty radical change in the way electricity is charged. That would lead to arguably a much better outcome in terms of the economics, it would then give people the right signals to manage their demand…

MATT: I think customers face network charges and at the moment they don’t have any way to manage their network bill because it’s just based on average usage rather than peak demand and they don’t get a signal that tells them use less peak power.

GREG: How far are we away from consumers being able to control that?

TRISTAN: In other parts of the world it’s already working. For large customers at the moment they can already do that. We have a number of customers within Victoria and Australia who when the wholesale price of power goes high they curtail their usage. Smelters who just stop potlines for a couple of hours to reduce their usage at that point in time. The reason they can do that is they can see the price signal. They have a contract which tells them in times of high prices if you turn off you get a financial reward for doing it. And they say, it’s worth doing it, I’ll turn off. Retail customers don’t get any of those price signals at the moment.

GREG: Should they?

TRISTAN: We think they should. We think there’s about $11b of installed electricity infrastructure that’s used for about eight days a year, but no-one sees that price signal. If you’ve got something that’s not used very often, it’s very expensive. The reality is if you want people to use less of something, charge them what it costs. If they’re willing to pay it, they can use it. If they’re not willing to pay it, then they’ll do something about it. In terms of enablers, though, then you do have to have things like smart meters which allow people to actually see what’s happening in their household, and you have to have products from retailers and other participants that can allow them to do something about it. Some of the things that we’re exploring in that field are pricing mechanisms like time-of-use pricing, linkages to smart appliances, so your fridge, your air conditioner, your washing machine, your dishwasher, can all be interrupted based on a price signal received by the smart meter that turns the appliance on and off. We’re getting to the point where we can do that, but we need to have the regulatory infrastructure that just enables that sort of competition and pricing to occur.

Demand management is an important topic for the electricity industry regardless of any questions about renewable energy.

The last statement contains the key point – to cope with peak demand, a lot of investment has to be added that will only ever be rarely used. Earlier in the discussion (not shown in the extract) there was talk about discussing with the community the tradeoff between prices and grid reliability. Basically, making the grid 99.99% reliable imposes a lot of costs.

Maybe consumers would rather have had the option to pay 2/3 of their current bill and go without electricity for half a day every 5 years.

Imagine, for example, that you live in a place with hot summers and this is scenario A:

  • you pay 20c per kWh
  • across a year you pay $2,000 for electricity

Now the next year the rules are changed and you have scenario B

  • you pay 12c per kWh
  • a saving of $800 per year on your bill
  • on 20 or so hot days you would pay $1 per kWh from 11am to 3pm
  • on one day of the year between midday and 3pm you would pay $20 per kWh.
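
To make the arithmetic concrete, here is a minimal sketch of the two bills. Only the flat rates and the annual totals come from the scenarios above; the split of usage across the peak bands is an assumption for illustration.

```python
# Hypothetical annual bill under the two pricing scenarios described above.
# Only the flat-rate totals come from the article; the kWh split across
# the peak price bands in scenario B is an assumption for illustration.

annual_kwh = 2000 / 0.20          # scenario A: $2,000 at 20c/kWh -> 10,000 kWh

# Scenario B: same total usage, but a small slice falls in the peak bands.
peak_hot_kwh = 20 * 4 * 2.0       # ~20 hot days, 11am-3pm, assume 2 kWh/h of A/C
peak_extreme_kwh = 3 * 2.0        # one extreme day, midday-3pm, assume 2 kWh/h
offpeak_kwh = annual_kwh - peak_hot_kwh - peak_extreme_kwh

bill_a = annual_kwh * 0.20
bill_b = offpeak_kwh * 0.12 + peak_hot_kwh * 1.00 + peak_extreme_kwh * 20.00

print(f"Scenario A: ${bill_a:,.0f}")
print(f"Scenario B: ${bill_b:,.0f} "
      f"(or ${annual_kwh * 0.12:,.0f} if you avoid all peak-band use)")
```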

This is all, by the way, because we can’t store electricity (not at any reasonable cost). For the same reason, long before intermittent renewable energy came on the scene, economical storage of electricity was already in high demand. But it wasn’t, and still isn’t, available.

If we picture the change from scenario A to B, a lot of people would be happy. Most people would take B if it was an option. Sure it’s hot, but lots of people survived without air conditioners a decade ago, and definitely a generation ago (lots still do). Fans, ice cubes, local swimming pools, beaches.. Saving $800 a year means a lot to some people. Of course, there would be winners and losers. The losers would be the air conditioning industry, which would lose a large chunk of its business; suppliers of transmission and distribution equipment no longer needed to upgrade the networks; hospitals that had to pay the high costs to keep people alive..

Of course, what actually happens in this scenario, given the regulated nature of the industry in most (all?) developed countries, would be a little different. As peak demand falls off, so does the price – it isn’t a case of no one buying electricity at $1 per kWh. Supply and demand: an equilibrium is reached where people are willing to pay the real cost. And based on the new peak demand patterns the industry forecasts what it needs to upgrade or expand in the network over the next 5-10 years and negotiates with the regulator how this affects prices.

But the key is people paying for the very expensive peak demand they want to use at the real cost, rather than having their costs subsidized by everyone else.

It makes perfect sense once you understand a) how an electricity grid operates and b) that electricity cannot be stored economically.

Let’s consider a different country. Although England has some hot summers the problem of peak demand in England is a different one – cold winter evenings. Now I haven’t checked any real references but my understanding is that lots of people die indoors due to the cold each year in cold countries and it’s more of a problem than people dying due to heat in the middle of the day in hot countries. (I might be wrong on this, but I’m thinking of the subset of countries where electricity is available and affordable by the general population).

If you add demand management in a cold country maybe the problem becomes a different one – poorer old people already struggling with their electricity bills now turn off the heating when they need it the most, with the cost pushed up by prosperous working people who have the heating set on maximum for comfort. The principle is the same, of course – demand management means higher prices for electricity and so, on average, people use less heating.

So in my hugely over-simplified world, demand management has different questions around it in different climates. Air conditioning in the middle of the summer day as a luxury vs heating in the winter evenings as a necessity.

The problem becomes more complicated when considering renewables. Now it is less about reducing peak demand, instead about trying to match demand with a variable supply.

There are a lot of studies of demand management, essentially pilot studies, where a number of consumers get charged different rates and the study looks at the resulting reduction in electricity use. Some of them suggest possible large demand reductions, especially with smart meters. Some of them suggest fairly pedestrian reductions. We’ll have a look at them in the next article.

Consumer demand management can come in a few different ways:

  1. Change in schedule – e.g., you run the dishwasher at a different time. There is no reduction in overall demand, but you’ve reduced peak demand. This is simply a choice about when to use a device, and it has little impact on you the consumer, other than minor planning, or a piece of technology that needs to be programmed
  2. Energy storage – e.g., during winter you heat up your house during the middle of the day when demand is low – and electricity rates are low – so it’s still warm in the evening. You’ve actually increased overall demand because energy will be lost (insulation is not perfect), but you have reduced peak demand
  3. Cutting back – e.g. you don’t turn on the airconditioning during the middle of the summer day because electricity is too expensive. In this example, you suffer some small character-building inconvenience. This is not energy use deferred or changed, it’s simply overall reduction in usage. In other examples the suffering might be substantial.
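
A minimal sketch of how the three categories differ in their effect on a daily load profile – the 24-hour profile and the size of each response are invented for illustration:

```python
# Toy 24-hour load profile (MW) with an evening peak - numbers are invented.
base = [30]*7 + [40]*4 + [45]*6 + [70]*4 + [40]*3   # hours 0-23

def shift_schedule(load, from_h, to_h, mw):
    """Category 1: move demand to another hour; total energy unchanged."""
    out = load[:]
    out[from_h] -= mw
    out[to_h] += mw
    return out

def preheat(load, heat_h, saved_h, mw, loss=0.2):
    """Category 2: store energy as heat earlier; losses raise total energy."""
    out = load[:]
    out[heat_h] += mw * (1 + loss)   # extra energy covers imperfect insulation
    out[saved_h] -= mw
    return out

def cut_back(load, hour, mw):
    """Category 3: simply don't use it; total energy falls."""
    out = load[:]
    out[hour] -= mw
    return out

shifted = cut_back(preheat(shift_schedule(base, 18, 2, 5), 13, 19, 5), 20, 5)
print(f"peak: {max(base)} -> {max(shifted)} MW")
print(f"energy: {sum(base)} -> {sum(shifted)} MWh")
```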

The demand management “tools” don’t create energy storage. Apart from the heat capacity of a house, reduced by less than perfect insulation, and the heat capacities of fridges and freezers, there is not much energy storage (and there’s effectively no electricity storage). So the choices come down to changing a schedule (washing machine, dishwasher) or to cutting back.

It’s easy to reduce total demand. Just increase the price.

The challenge of demand management to help with intermittent renewables also depends on whether solar or wind is the dominant energy source. We’ll look at this more in a subsequent article.

Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases

Renewables VI – Report says.. 100% Renewables by 2030 or 2050

Renewables VII – Feasibility and Reality – Geothermal example

Renewables VIII – Transmission Costs And Outsourcing Renewable Generation

Renewables IX – Onshore Wind Costs

Renewables X – Nationalism vs Inter-Nationalism

Renewables XI – Cost of Gas Plants vs Wind Farms

Renewables XII – Windpower as Baseload and SuperGrids

Renewables XIII – One of Wind’s Hidden Costs

Renewables XIV – Minimized Cost of 99.9% Renewable Study

Renewables XV – Offshore Wind Costs

Renewables XVI – JP Morgan advises

A while back, I had a chat with Cory Budischak, lead author of the paper we looked at in XIV – Minimized Cost of 99.9% Renewable Study. He recommended a very recent JP Morgan document for investors in renewable energy – Our annual energy paper: the deep de-carbonization of electricity grids. And it is excellent. Best to read the paper itself. When I was in the middle of writing this article I saw a post on Judith Curry’s blog referencing the same paper, so rather than spend more time, here it is..

Still, for those who don’t read the paper, a few extracts from me and no surprises for readers who have worked their way through this series:

This year, we focus on Germany and its Energiewende plan (deep de-carbonization of the electricity grid in which 80% of demand is met by renewable energy), and on a California version we refer to as Caliwende.

  • A critical part of any analysis of high-renewable systems is the cost of backup thermal power and/or storage needed to meet demand during periods of low renewable generation. These costs are substantial; as a result, levelized costs of wind and solar are not the right tools to use in assessing the total cost of a high-renewable system
  • Emissions. High-renewable grids reduce CO2 emissions by 65%-70% in Germany and 55%-60% in California vs. the current grid. Reason: backup thermal capacity is idle for much of the year
  • Costs. High-renewable grid costs per MWh are 1.9x the current system in Germany, and 1.5x in California. Costs fall to 1.6x in Germany and 1.2x in California assuming long-run “learning curve” declines in wind, solar and storage costs, higher nuclear plant costs and higher natural gas fuel costs
  • Storage. The cost of time-shifting surplus renewable generation via storage has fallen, but its cost, intermittent utilization and energy loss result in higher per MWh system costs when it is added
  • Nuclear. Balanced systems with nuclear power have lower estimated costs and CO2 emissions than high-renewable systems. However, there’s enormous uncertainty regarding the actual cost of nuclear power in the US and Europe, rendering balanced system assessments less reliable. Nuclear power is growing in Asia where plant costs are 20%-30% lower, but political, historical, economic, regulatory and cultural issues prevent these observations from being easily applied outside of Asia
  • Location and comparability. Germany and California rank in the top 70th and 90th percentiles with respect to their potential wind and solar energy (see Appendix I). However, actual wind and solar energy productivity is higher in California (i.e., higher capacity factors), which is the primary reason that Energiewende is more expensive per MWh than Caliwende. Regions without high quality wind and solar irradiation may find that grids dominated by renewable energy are more costly

They also comment that they excluded transmission costs from their analysis, but this “..could substantially increase the estimated cost of high-renewable systems..”

Their assessment of the future German system with 80% renewables:

  • Backup power needs unchanged. Germany’s need for thermal power (coal and natural gas) does not fall with Energiewende, since large renewable generation gaps result in the need for substantial backup capacity (see Appendix II), and also since nuclear power has been eliminated
  • Emissions sharply reduced. While there’s a lot of back-up thermal capacity required, for much of the year, these thermal plants are idle. Energiewende results in a 52% decline in natural gas generation vs. the current system, and a 63% decline in CO2 emissions
  • Cost almost double current system. The direct cost of Energiewende, using today’s costs as a reference point, is 1.9x the current system. Compared to the current system, Energiewende reduces CO2 emissions at a cost of $300 per metric ton

They contrast the renewable options (with no storage and various storage options) with nuclear:

From JP Morgan 2015

Nuclear is the bottom line in the table – the effective $ cost of CO2 reduction is vastly improved. Their comments on nuclear costs (and the uncertainties) are well worth reading.

They look at California by way of this comment:

Energiewende looks expensive, even when assuming future learning curve cost declines. Could the problem be that Germany is the wrong test case?

This is the same point I made in X – Nationalism vs Inter-Nationalism. The California example looks a lot better, in terms of the cost of reducing CO2 emissions. If your energy sources are wind and solar, and you want to reduce global CO2 emissions, it makes (economic) sense to spend your $ on the most effective method of reducing CO2.

Basically, they reach their conclusions from the following critical elements:

  • energy cannot be stored economically
  • time-series data demonstrates that, even when wind power is sourced over a very wide area, there will always be multiple days where the wind/solar energy is “a lot lower” than usual

The choices are:

  • spend a crazy amount on storage
  • build out (average) supply to many times actual demand
  • backup intermittent solar/wind with conventional
  • build a lot of nuclear power

These are obvious conclusions after reading 100 papers. The alternatives are:

  • ignore the time-series problem
  • assume demand management will save the day (more on this in a subsequent article)
  • assume “economical storage” will save the day

Many papers and a lot of blogs embrace these alternatives.

Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases

Renewables VI – Report says.. 100% Renewables by 2030 or 2050

Renewables VII – Feasibility and Reality – Geothermal example

Renewables VIII – Transmission Costs And Outsourcing Renewable Generation

Renewables IX – Onshore Wind Costs

Renewables X – Nationalism vs Inter-Nationalism

Renewables XI – Cost of Gas Plants vs Wind Farms

Renewables XII – Windpower as Baseload and SuperGrids

Renewables XIII – One of Wind’s Hidden Costs

Renewables XIV – Minimized Cost of 99.9% Renewable Study

Renewables XV – Offshore Wind Costs

In a number of earlier articles we looked at onshore wind because it is currently the lowest cost method of generating renewable electricity.

The installed onshore wind capacity (nameplate) in Europe at the start of 2015 was 121 GW. By comparison, the offshore wind capacity (nameplate) was 8 GW. (Both figures from EWEA).

To recap – “nameplate” means what a wind turbine will produce at full capacity. A typical onshore wind farm in Europe will produce something like 16-30% actual output over the course of the year. If you pick some great locations in Oklahoma, you might get over 40%. It all depends on the consistency and speed of the wind. The actual output as a percentage of the nameplate capacity is usually given the term “capacity factor”. This isn’t some big disadvantage of wind – ‘it “only” produces 30% of its supposed capacity‘ – on the contrary, it’s just terminology. But it is important to check what value you are seeing in press releases and articles – so when you see that Europe has 121 GW of onshore wind installed, it usually means “nameplate”. And so the actual production of electricity, depending on location, will be something like 25-50 GW averaged over the year. End of recap..
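
The recap arithmetic in code – the 121 GW nameplate figure is from EWEA, while the capacity factors are assumptions within the ranges quoted above:

```python
# Average output implied by nameplate capacity and capacity factor.

def average_output_gw(nameplate_gw, capacity_factor):
    return nameplate_gw * capacity_factor

for cf in (0.20, 0.25, 0.40):
    print(f"121 GW nameplate at CF {cf:.0%} -> "
          f"{average_output_gw(121, cf):.0f} GW average")
# -> roughly 24-48 GW, matching the ~25-50 GW range in the text
```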

There are three big advantages of offshore wind. And these are the reasons why a lot of money is being poured into offshore wind in Europe:

  • the intermittency is lower – the wind blows more consistently
  • the capacity factor is higher – you get more out of your turbine, because the offshore wind speed is higher
  • they aren’t parked 300m from the houses of voters

In the last article XIV – Minimized Cost of 99.9% Renewable Study we saw an interesting point from one study – when storage costs were high (actually quite low, but higher than a “possible” super-low rental cost of storage from future owners of electric cars) the lowest cost method of building out the PJM network (eastern US) included a large portion of offshore wind.

This is the key to understanding the first major appeal of offshore. Intermittency has a cost – something we will come back to again – that is a little difficult to quantify. You can smooth out the peaks and troughs by installing wind farms over a wide area, but you can’t eliminate the fact that at certain times in a given 10-year period there will be almost no wind for a week. Of course, it depends on the region, but so far even potential “super-grids” have a week’s downtime (see XII – Windpower as Baseload and SuperGrids and also VIII – Transmission Costs And Outsourcing Renewable Generation)

Offshore gives you more consistent electricity production and less intermittency.

The second point – more electricity on average from a given nameplate turbine – only helps when we consider the actual cost of different wind installations. Let’s say we put 1 GW of wind turbines onto land and these get a capacity factor of 25% – we get, on average, 250 MW. That is, across the year we get 2,190 GWh (0.25GW x 8760). Now we put 1 GW of nameplate offshore wind turbines into coastal water and we get a capacity factor of 40% on average – that is, 400 MW. So across the year we get 3,504 GWh (0.4 x 8760). This increased capacity factor only helps if the cost of installing the 1 GW of turbines offshore is less than 60% more expensive. Unfortunately, this is not the case (at the moment).
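
The same comparison as a quick calculation, using the 25% and 40% capacity factors assumed above:

```python
HOURS_PER_YEAR = 8760

onshore_gwh = 1.0 * 0.25 * HOURS_PER_YEAR    # 1 GW nameplate at CF 25% -> 2,190 GWh
offshore_gwh = 1.0 * 0.40 * HOURS_PER_YEAR   # 1 GW nameplate at CF 40% -> 3,504 GWh

# Offshore wins on energy cost only if its capex premium stays below the
# yield ratio - here 3504/2190 = 1.6, i.e. at most 60% more expensive.
breakeven_premium = offshore_gwh / onshore_gwh - 1
print(f"{onshore_gwh:.0f} GWh vs {offshore_gwh:.0f} GWh; "
      f"break-even capex premium: {breakeven_premium:.0%}")
```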

The third point is of great interest in Europe. Germany, Spain, the UK and Ireland have been installing a lot of onshore wind turbines. These are highly populated countries. As we will see in a later article, producing say 50% of each of these countries’ electricity from wind requires a lot of land area. Of course, the footprint on the actual land is quite small, but each turbine has to be some distance from every other turbine. This means that producing 15 GW of electricity from wind in the UK (about half of the average demand) would take up a lot of space. The problem is more acute in Germany, with its lower capacity factor.

So, those are the upsides. Now let’s look at the price tag. “If you have to ask, you can’t afford it..”

In an earlier article – IX – Onshore Wind Costs – we looked at the capex cost of onshore wind and (by the time we get into the comments) we find a current capital cost of about €1M per 1MW of (nameplate) capacity. There are lots of different numbers cited, but let’s use that for now. For people more familiar with the greenback, this is about US$1.2M per 1MW.

EWEA gives a current capex cost for offshore of €2.8 – €4.0M per 1MW of (nameplate) capacity. A larger proportion of the capital cost of offshore is the installation.

Remember that we have to factor in the “capacity factor”. So the capital cost of offshore is not 3-4x the onshore cost. If we calculate the cost based on the actual production of electricity then onshore costs (capex) something like €4M per 1MW of output and offshore costs (capex) something like €7-8M per 1MW – roughly double.
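
A sketch of that capacity-factor adjustment – the capacity factors here are assumptions consistent with the ranges discussed earlier:

```python
# Capex per MW of *average* output = capex per nameplate MW / capacity factor.

def capex_per_average_mw(capex_nameplate_m_eur, capacity_factor):
    return capex_nameplate_m_eur / capacity_factor

print(capex_per_average_mw(1.0, 0.25))   # onshore:  ~ EUR 4M per average MW
print(capex_per_average_mw(2.8, 0.40))   # offshore: ~ EUR 7M per average MW
print(capex_per_average_mw(4.0, 0.50))   # offshore: ~ EUR 8M per average MW
```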

Now, we can be relatively sure of capital costs because there are enough datapoints and current installations. Governments publish figures when they are paying. Suppliers give out indicative pricing. Customers give out data on contracts.

But there are big questions about maintenance costs and, unlike onshore wind with a lot of data, this is still a little shrouded in mystery. I’ve consulted a lot of sources but it seems that, with only 9GW of offshore wind constructed in Europe – and much of this very recent – there is not enough public data to confirm any estimates.

Only one point is clear (as you might expect): it is “quite a bit more” than the maintenance cost of onshore wind – the marine environment impacts the equipment, and getting maintenance people out on the ocean is hazardous.

So far it seems that offshore has some maintenance issues that are hard to cost up. It’s an industry still in its infancy.

Of course, to get more funding, many confident predictions are made: “Offshore wind will be cheaper than gas plants by 2020.”

Without confident predictions, maybe no one will fund the next 5 years of development. I don’t want to delve any deeper into spruiking. Let’s just accept that most of what passes for discussion in the general media, repeated on many blogs, is simply press releases from governments, lobby groups and big companies, mostly repeated without any fact checking.

It’s quite possible that offshore wind costs will be much lower in 2020 than they are today. There are a lot of installation issues that might be improved with the combination of volume of installations, time on the job and engineering improvements. It’s also quite possible that offshore wind costs won’t be a lot lower in 2020 than they are today. (See points made in Renewable Energy I).

Here is IRENA for just 2 years:

From IRENA 2012

And UKERC Offshore costs from a 2012 document:

From UKERC 2012

And another from a different UKERC document, attempting to learn from experience, with reference to wind power cost projections vs how the world actually turned out:

In the short-term costs may rise before they can fall. Cost reductions from learning can be overwhelmed in the short-term by supply chain bottlenecks, build delays and ‘teething trouble’, for example lower than expected reliability at first. There is historical precedent for technologies deployed in the power sector to demonstrate cost increases during early commercialisation before supply chains and learning from experience are firmly established


From UKERC 2013

These graphs are only presented as a reminder that predictions don’t always come true. Engineering problems are hard and optimism is easy.

I’m sure offshore wind costs will come down in the long run, but as Keynes usefully reminded us, in the long run we are all dead. So “the long run” is not so useful. Whether offshore costs will come down to onshore costs in a reasonable time frame, and whether – in this time frame – they will further come down to the cost of gas turbine electricity production is open to question. Time will tell.

I’m generally an optimist. The glass is half full. Probably it’s almost full. And lots of people don’t have much, so my glass is anyway pretty amazing. It’s only the weight of blog world articles and media articles (lobby groups’ press releases) on this subject that compels me to remind readers that confident predictions of the future may not be correct.

Lots of sources quote LCOE (levelized cost of electricity) – this “adds” capital cost, factored by the cost of capital (interest rates), to maintenance costs and energy costs (when we consider conventional power stations with fuel costs). As explained in previous articles, this LCOE is not so useful (i.e., it’s misleading) when we consider intermittent renewables vs dispatchable conventional electricity.
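
For readers who haven’t met it, here is a minimal sketch of the standard LCOE calculation – the input values are placeholders for illustration, not figures from any of the sources above:

```python
# Simplified LCOE: discounted lifetime costs / discounted lifetime energy.
# All inputs below are placeholder values for illustration only.

def lcoe(capex, annual_om, annual_fuel, annual_mwh, rate, years):
    disc = sum(1 / (1 + rate) ** t for t in range(1, years + 1))
    costs = capex + (annual_om + annual_fuel) * disc
    energy = annual_mwh * disc
    return costs / energy

# e.g. a 1 MW (nameplate) wind turbine, EUR 1M capex, CF 25%, 8% discount rate:
print(f"EUR {lcoe(1e6, 30e3, 0.0, 0.25 * 8760, 0.08, 20):.0f} per MWh")
```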

As a rule of thumb, consider offshore wind capex costs to be “about double” onshore wind costs, and offshore maintenance costs to be somewhat unknown, but definitely higher than onshore costs.

These rules of thumb are as much as I have been able to establish so far.


Wind in Power 2014 European Statistics, published February 2015 by European Wind Energy Association (EWEA)

Renewable Energy Technologies: Cost Analysis Series, Volume 1: Power Sector, Issue 5/5, Wind Power, IRENA (International Renewable Energy Agency), June 2012

Presenting the Future: An assessment of future costs estimation methodologies in the electricity generation sector, UKERC (2013)

UKERC Technology and Policy Assessment, Cost Methodologies Project: Offshore Wind Case Study, UKERC (2012)

Budischak et al (2013) is a very interesting paper (and free). Here is the question they pose:

What would the electric system look like if based primarily on renewable energy sources whose output varies with weather and sunlight? Today’s electric system strives to meet three requirements: very high reliability, low cost, and, increasingly since the 1970s, reduced environmental impacts. Due to the design constraints of both climate mitigation and fossil fuel depletion, the possibility of an electric system based primarily on renewable energy is drawing increased attention from analysts.

Several studies (reviewed below) have shown that the solar resource, and the wind resource, are each alone sufficient to power all humankind’s energy needs. Renewable energy will not be limited by resources; on the contrary, the below-cited resource studies show that a shift to renewable power will increase the energy available to humanity.

But how reliable, and how costly, will be an electric system reliant on renewable energy? The common view is that a high fraction of renewable power generation would be costly, and would either often leave us in the dark or would require massive electrical storage.

Good question.

We do not find the answers to the questions posed above in the prior literature. Several studies have shown that global energy demand, roughly 12.5 TW increasing to 17 TW in 2030, can be met with just 2.5% of accessible wind and solar resources, using current technologies [refs below]. Specifically, Delucchi and Jacobson pick one mix of eight renewable generation technologies, increased transmission, and storage in grid integrated vehicles (GIV), and show this one mix is sufficient to provide world electricity and fuels. However, these global studies do not assess the ability of variable generation to meet real hourly demand within a single transmission region, nor do they calculate the lowest cost mix of technologies.

Emphasis added.

[Refs: M.A. Delucchi, M.Z. Jacobson, Energy Policy, Dec. 2010; M.Z. Jacobson, M.A. Delucchi, Energy Policy, Dec. 2010; L. Brown, Plan B 4.0: Mobilizing to Save Civilization, Earth Policy Institute, 2009]

This is also what I have found – I’ve read a number of “there’s no barrier to doing this” papers, including Delucchi & Jacobson – so I was glad to find this paper. (As an aside, I question some points and assumptions in this paper, but that’s less important – brief comments on those points come towards the end.)

The key is investigating time series based on real demand for a region and real supply based on the actual wind and sun available.

Before we look at what they did and what they found, here are some comments that are relevant for some of our recent discussions:

In a real grid, we must satisfy varying load, and with high-penetration renewables, charging and discharging storage will at times be limited by power limits not just by stored energy. More typical studies combining wind and solar do not seek any economic analysis and/or do not look at hourly match of generation to load..

Hart and Jacobson determined the least cost mix for California of wind, solar, geothermal and hydro generation. Because their mix includes dispatchable hydro, pumped hydro, geothermal, and solar thermal with storage, their variable generation (wind and photovoltaic solar) never goes above 60% of generation. Because of these existing dispatchable resources, California poses a less challenging problem than most areas; elsewhere, most or all practical renewable energy sources are variable generation, and dedicated storage must be purchased for leveling power output. We cannot draw general conclusions from the California case’s results..

The ability to reliably meet load will still be required of systems in the future, despite the variability inherent in most renewable resources. However, a review of existing literature does not find a satisfactory analysis of how to do this with variable generation, nor on a regional grid-operator scale, nor at the least cost. We need to solve for all three.

What does the paper do?

  • Use the demand load from PJM (East Coast grid operator) for 4 years as a basis for assessing the cost-minimized solution – with the average load being 31.5 GW
  • Assign a cost (unsubsidized) to each type of renewable resource: onshore wind, offshore wind and solar, based on 2008 costs and forecasts for 2030 costs (roughly 50% of 2008 capex costs with similar O&M costs)
  • Assign a cost to 3 different storage types: centralized hydrogen, centralized batteries, and grid integrated vehicles (GIV)
  • And then run through nearly 2 billion combinations, first checking that demand is met, then calculating the cost of each combination (a toy version of this search is sketched below)
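
Here is a toy version of that search, under loud assumptions: invented hourly load and wind/solar traces, made-up capex figures, a coarse parameter sweep and a naive greedy storage dispatch – nothing like the paper’s four years of PJM data or its actual dispatch rules, but the same shape of calculation:

```python
import itertools
import random

random.seed(0)
HOURS = 24 * 7 * 4                    # four weeks of hourly data (toy scale)
load = [30 + 10 * random.random() for _ in range(HOURS)]        # GW, invented
wind_cf = [random.random() * 0.6 for _ in range(HOURS)]         # output per GW built
solar_cf = [max(0.0, random.gauss(0.2, 0.15)) for _ in range(HOURS)]

CAPEX = {"wind": 2.0, "solar": 3.0, "storage": 0.05}  # $bn per GW / per GWh, invented

def cost_if_feasible(wind_gw, solar_gw, store_gwh):
    """Simulate simple storage dispatch; return build cost if demand is always met."""
    soc = store_gwh                                  # storage starts full
    for h in range(HOURS):
        surplus = wind_gw * wind_cf[h] + solar_gw * solar_cf[h] - load[h]
        soc = min(store_gwh, soc + surplus)          # charge with surplus, spill the rest
        if soc < 0:
            return None                              # storage ran dry: demand unmet
    return (wind_gw * CAPEX["wind"] + solar_gw * CAPEX["solar"]
            + store_gwh * CAPEX["storage"])

candidates = []
for w, s, st in itertools.product(range(0, 301, 20), repeat=3):
    cost = cost_if_feasible(w, s, st)
    if cost is not None:
        candidates.append((cost, w, s, st))

print("cheapest feasible mix ($bn, wind GW, solar GW, storage GWh):",
      min(candidates, default="none found in this sweep"))
```
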
From Budischak et al 2013

Figure 1

A most important note for me – something we will review in future articles, rather than here – is the very low cost assigned to storage using vehicle batteries: $32/kWh, whereas centralized storage is $318/kWh. It’s clear, as we will see, that storage costs skew the analysis strongly.

Here was their lowest cost solution for 30%, 90% and 99.9% renewables. The results are probably not so surprising to people who’ve followed the series so far. Energy produced in GWa is basically the average power over the year (8,760 GWh – a constant 1 GW all year – equals 1 GWa):

From Budischak et al 2013

Figure 2

So we can see that the lowest cost method of matching demand is to produce almost 3 times the required demand. That is, the energy produced across the year averages at 91.3 GW (and appears to have peaks around 200GW). This is because storage costs so much – and because supply is intermittent. Here is the time series – click to expand:

From Budischak et al 2013

Figure 3 – Click to Expand

We see that the energy in storage (middle row) is pulled down in summer, which the paper explains as due to less supply in summer (generally less wind).

Here is a challenging week in detail; the top graph shows the gaps that need to be filled in with storage, the bottom graph shows the gaps filled by storage and also how much supply is “spilled“:

From Budischak et al 2013

Figure 4 – Click to Expand

Here is the mix of generation and storage for each of the 30%, 90%, 99.9% each under the two cost assumptions of 2008 and 2030:

From Budischak et al 2013

Figure 5 – Click to Expand

Looking at the 99.9% cases we see that the projected solar PV cost in 2030 means it has a bigger share compared with wind but that wind is still the dominant power source by a long way. (We will investigate offshore wind costs and reliability in a future article).


The paper assesses that generating 30% of power from renewables today is already cheaper than conventional generation, and producing 90% in 2030 will be cheaper than conventional generation, with 99.9% at parity.

The key point I would like to draw readers’ attention to is that, unlike conventional generation, the higher the penetration of renewables the more expensive the solution (because the intermittency is then a bigger problem and so requires a more costly solution).

I’m not clear how they get to the result of renewables already being cheaper than conventional (for a 30% penetration). Their wind power cost from 2008 is roughly double what we found from a variety of sources (see IX – Onshore Wind Costs & XI – Cost of Gas Plants vs Wind Farms) and we found – depending on the gas price and the discount rate – that wind at that price was generally somewhat more expensive than gas. Using current US gas prices this is definitely the case. The authors comment that there are significant subsidies for conventional generation – I have not dug into that as yet.

The cost of storage seems low. If we take instead their cost of centralized storage – $318/kWh – and look at the lowest-cost solution to meet demand we find quite a different result. First, there is a lot less storage – 360 vs 891 GWh. That’s because it’s so pricey.

Second, although the final cost per kWh of energy is not given, we can see that whereas in the GIV storage case we build 16 GW solar, 90 GW offshore wind and 124 GW inland wind = 230 GW peak, with centralized storage we build 50, 129 and 61 = 240 GW peak – and probably need the expensive offshore wind as a more reliable (less intermittent) source than onshore wind.

My basic calculation from their data is that the capital cost of the best case central storage solution is 45% more than the GIV storage solution. And more offshore wind will definitely require additional transmission cost (which was not included in the study).
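
The shape of that capital-cost comparison, as a sketch. The storage costs and the GW/GWh mixes come from the text above, but the per-GW generation capex figures are placeholders (not the paper’s values), so the printed percentage will not reproduce the 45%:

```python
# Capital cost comparison between the two storage cases.
# Storage costs ($/kWh) and GW/GWh capacities are from the text above;
# the per-GW generation capex figures are hypothetical placeholders.

CAPEX_PER_GW = {"solar": 3.0, "offshore": 4.0, "onshore": 2.0}   # $bn/GW, hypothetical

def capital_bn(solar_gw, offshore_gw, onshore_gw, storage_gwh, storage_per_kwh):
    gen = (solar_gw * CAPEX_PER_GW["solar"]
           + offshore_gw * CAPEX_PER_GW["offshore"]
           + onshore_gw * CAPEX_PER_GW["onshore"])
    storage = storage_gwh * 1e6 * storage_per_kwh / 1e9   # GWh -> kWh -> $bn
    return gen + storage

giv = capital_bn(16, 90, 124, 891, 32)
central = capital_bn(50, 129, 61, 360, 318)
print(f"GIV case: ${giv:.0f}bn, central case: ${central:.0f}bn "
      f"({central / giv - 1:+.0%})")
```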

I like their approach. What is clear is that finding the best cost solution depends heavily on the cost of storage, and the mix is radically different for different storage costs. Again, it is the intermittent nature of renewables for the region in question that shapes the result.

Questions on the Analysis

We simplify our grid model by assuming perfect transmission within PJM (sometimes called a “copper plate” assumption), and no transmission to adjacent grids. We also simplify by ignoring reserve requirements, within-hourly fluctuations and ramp rates; these would be easily covered with the amount of fast storage contemplated here. In addition, we assume no preloading of storage from fossil (based on forecasting) and no demand-side management. Adding transmission would raise the costs of the renewable systems calculated here, whereas using adjacent grids, demand management, and forecasting all would lower costs. We judge the latter factors substantially larger, and thus assert (without calculation) that the net effect of adding all these factors together would not raise the costs per kWh above those we calculate below.

Their analysis consumed a lot of computing resources. Adding transmission costs would add another level of complexity. However, I don’t agree with the conclusion that the transmission costs would be offset by adjacent grids, demand management and forecasting.

In brief:

  1. Adjacent grids have the exact same problem – the wind and solar are moving approximately in sync – meaning supply in adjacent regions is quite highly correlated; and hot and cold temperatures are likewise in sync so air-conditioning and heating demand is similar in adjacent regions – therefore another region will be drawing on their storage at the same times as the PJM region. Also, “using adjacent grids” means adding even longer transmission lines of very high capacity. That has a cost.
  2. “Demand management” is possibly a mythical creation to solve the problem of demand being at the “wrong time”. Apart from paying big industrials to turn off power during peak demand, which is already in play for most grid operators, it apparently equates to people not turning on the heating in the cold weather – or to people buying expensive storage. I will be looking for research with some data that puts “demand management” into some reality-based focus.
  3. Forecasting doesn’t exactly help, unless you have demand management. Better wind forecasting currently helps grid operators because it allows them to buy reserve (conventional generation) at the right time, making a more efficient use of conventional generation. I can’t see how it helps a mostly renewable scenario to be more cost-effective. Perhaps someone can explain to me what I am missing.

And I will dig into storage costs in a future article.


The paper is very good overall – their approach is the important aspect. There are a great many papers which all confidently state that there is no technical barrier to 100% renewables. This is true. But maybe two or three papers is enough.

If you add “enough” wind farms and “enough” solar and “enough” storage – along with “enough” transmission – you can make the grid work. But what is the cost and how exactly are you going to solve the problems? After the first few papers to consider this question, any subsequent ones that don’t actually cover the critical problem of electricity grids with intermittent renewables are basically a waste of time.

What is the critical problem? Given that storage is extremely expensive, and given the intermittent nature of renewables with the worst week of low sun and low wind in a given region – how do you actually make it work? Because yes, there is a barrier to making a 100% renewable network operate reliably. It’s not technical, as such, not if you have infinite money..

It should be crystal clear that if you need 500GW of average supply to run the US you can’t just build 500GW of “nameplate” renewable capacity. And you can’t just build 500GW / capacity factor of renewable capacity (e.g. if we required 500GW just from wind we would build something like 1.2-1.5TW due to the 30-40% capacity factor of wind) and just add “affordable storage”.

So, there is no technical barrier to powering the entire US from a renewable grid with lots of storage. Probably $50TR will be enough for the storage. Or forget the storage and just build 10x the nameplate of wind farms and have a transmission grid of 500GW around the entire country. Probably the 5TW of wind farms will only cost $5TR and the redundant transmission grid will only cost $20TR – so that’s only $25TR.

Hopefully, the point is clear. It’s a different story from dispatchable conventional generation. Adding up the possible total energy from wind and solar is step 1 and that’s been done multiple times. The critical item, missing from many papers, is to actually analyze the demand and supply options with respect to a time series and find out what is missing. And find some sensible mix of generation and storage (and transmission, although that was not analyzed in this paper) that matches supply and demand.

So this paper has a lot of merit.

It shows, with their storage costs (which seem very low), that the lowest-cost solution for building a 99.9% renewable network in one reasonably-sized region is to build nearly 3 times the actual supply needed (this is not a “capacity factor” issue – see Note 2).

In future articles we will look at storage costs, as I have questions about their costing. But the main points from this paper are more than enough for one article.

Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases

Renewables VI – Report says.. 100% Renewables by 2030 or 2050

Renewables VII – Feasibility and Reality – Geothermal example

Renewables VIII – Transmission Costs And Outsourcing Renewable Generation

Renewables IX – Onshore Wind Costs

Renewables X – Nationalism vs Inter-Nationalism

Renewables XI – Cost of Gas Plants vs Wind Farms

Renewables XII – Windpower as Baseload and SuperGrids

Renewables XIII – One of Wind’s Hidden Costs


Cost-minimized combinations of wind power, solar power and electrochemical storage, powering the grid up to 99.9% of the time, Cory Budischak, DeAnna Sewell, Heather Thomson, Leon Mach, Dana E. Veron & Willett Kempton, Journal of Power Sources (2013) – free paper


Note 1: Tables 1 & 2 of cost estimates with notes from Budischak et al 2013

Budischak 2013 table 1

Budischak 2013 table 2

Note 2: The 2-3x overbuilding is not the nameplate vs capacity factor question. Let me explain. Imagine we are only talking about wind. If we build 3GW of wind farms we might get 1GW of average output across a year. This is a 33% capacity factor. The % depends on the wind turbines and where they are located.

Now if we need to get 1GW average across the year and meet demand 99.9% of the time, the lowest cost solution won’t be to build 3GW of nameplate (=1GW of average output) and add lots of storage, instead it will be to build 9GW of nameplate and some storage.
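
A toy simulation makes the trade-off visible. This is only a sketch under loud assumptions – a synthetic AR(1) stand-in for hourly wind output, flat demand, lossless storage, and 100% coverage rather than 99.9% – not the paper’s actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 8760

# Synthetic hourly capacity-factor series with persistence (mean ~0.33).
cf = np.empty(hours)
cf[0] = 0.33
for t in range(1, hours):
    cf[t] = 0.95 * cf[t - 1] + 0.05 * 0.33 + rng.normal(0, 0.03)
cf = np.clip(cf, 0.0, 1.0)

demand_gw = 1.0  # flat 1 GW demand

def storage_needed_gwh(nameplate_gw):
    """Smallest lossless store (starting suitably full) that never runs dry."""
    net = nameplate_gw * cf - demand_gw      # hourly surplus(+)/deficit(-), GW
    cum = np.cumsum(net)                     # cumulative net energy, GWh
    return float(np.max(np.maximum.accumulate(cum) - cum))  # max drawdown

for overbuild in (1.0, 2.0, 3.0):
    nameplate = overbuild * demand_gw / cf.mean()
    print(f"overbuild x{overbuild:.0f}: {nameplate:.1f} GW nameplate, "
          f"{storage_needed_gwh(nameplate):.0f} GWh storage")
```

Whatever the exact numbers, the shape is the point: the required storage collapses as the overbuild factor rises, which is how “9GW of nameplate and some storage” can beat “3GW of nameplate and lots of storage”.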

In earlier articles we looked at wind power, what it costs, what it does to the grid, and what to do when the wind is not blowing.

Now a frequent comment – which conceals more than it reveals – is: “the wind always blows somewhere”. This is true – if you have lots of wind farms that are geographically dispersed you do average out your peaks and troughs, and you do also reduce the % change hour by hour.

However, if you have 20% of your average power coming from wind, then on one given day it might be 60% of your requirements, yet the next day it might be 0.3%. This means that sometimes you are “winding back” your conventional generation, and sometimes you are “cranking up” your conventional generation – and much more in absolute terms than in a network of 98%+ conventional generation. The larger the penetration of wind energy the more problems this causes.
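
For concreteness, the arithmetic behind that example (the system size is hypothetical):

```python
demand_gw = 50                       # hypothetical system average demand
wind_day1 = 0.60 * demand_gw         # wind supplies 60% one day: 30 GW
wind_day2 = 0.003 * demand_gw        # and 0.3% the next: 0.15 GW

# Conventional generation must absorb the whole swing:
print(f"{wind_day1 - wind_day2:.1f} GW")   # ~29.9 GW of day-to-day ramping
```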

The question has come up a few times without being answered – what is the impact on efficiency of conventional power generation?

It’s clear that the impact depends on the penetration of wind. Very recent analysis is hard to find.

First, here is an older NREL study from 2004:

It is important to understand that the key issue is not whether a system with a significant amount of wind capacity can be operated reliably, but rather to what extent the system operating costs are increased by the variability of the wind..

..Over the past two years, several investigations of these questions have been conducted by or on behalf of U.S. electric utilities. These studies addressed utility systems with different generating resource mixes and employed different analytical approaches. In aggregate, this work provides illuminating insights into the issue of wind’s impacts on overall electric system operating costs.

I extracted two useful examples from the NREL study:


PacifiCorp, a large utility in the northwestern United States, operates a system with a peak load of 8,300 MW that is expected to grow to 10,000 MW over the next decade. PacifiCorp recently completed an Integrated Resource Plan (IRP) that identified 1,400 MW (14%) of wind capacity over the next 10 years as part of the least-cost resource portfolio.

A number of studies were performed to estimate the cost of wind integration on its system. The costs were categorized as incremental reserve or imbalance costs. Incremental reserves included the cost associated with installation of additional operating reserves to maintain system reliability at higher levels of wind penetration, recognizing the incremental variability in system load imposed by the variability of wind plant output.

Imbalance costs captured the incremental operating costs associated with different amounts of wind energy compared to the case without any wind energy.

At wind penetration levels of 2,000 MW (20%) on the PacifiCorp system, the average integration costs were $5.50/MWh, consisting of an incremental reserve component of $2.50 and an imbalance cost of $3.00. The cost of additional regulating reserve was not considered. These costs are considered by PacifiCorp to be a reasonable approximation to the costs of integrating the wind capacity.

Great River Energy:

Great River Energy (GRE) is a Generation and Transmission electric cooperative serving parts of Minnesota and northeast Wisconsin. It is primarily a thermal system in the Mid-Continent Area Power Pool (MAPP) region with a summer peak load in excess of 2300 MW, growing at 3%-4% per year.. As part of its planning process to meet this objective, GRE performed a study with Electrotek that examined adding 500 MW of wind in 100 MW increments between now and 2015. GRE operates with a fixed fleet of generation and uses a static scheduling process, so it did not decompose the problem into the three time periods commonly used in the analysis of ancillary-service costs in larger utilities. It also looked at providing the ancillary services required from its own resources, including a 600-MW combined-cycle unit, which was subsequently cancelled. GRE found ancillary-service costs of $3.19/MWh at 4.3% penetration and $4.53/MWh at 16.6% penetration. It is likely that the costs would have been higher without the combined-cycle unit and self-providing the ancillary services without economical intermediate resources.

It appears that these studies are based on nameplate values (I didn’t find an explicit statement but the wording implies it and later references to the data agree). That’s a pretty big difference, because in GRE’s case it means that the “16.6%” would actually be something like “5-6% of average electricity production from wind”. The 16.6% would be when the wind farms were operating at their maximum.
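
The conversion is worth making explicit, since it changes the headline number by a factor of ~3. The capacity factor here is my assumption – the study doesn’t state one:

```python
capacity_factor = 0.33        # assumed; not stated in the GRE study

peak_penetration = 0.166      # GRE's "16.6%" of peak/nameplate capacity
energy_penetration = peak_penetration * capacity_factor
print(f"{energy_penetration:.1%}")   # ~5.5% - the "5-6%" of energy noted above
```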

It seems like there should be many more studies, especially given the increase in wind penetration of electricity networks in Germany, UK and Ireland. However, many references work their way back to the same papers. For example, Overview of wind power intermittency impacts on power systems, MH Albadi & EF El-Saadany, Electric Power Systems Research (2010) says:

Smith et al. reported that the existing case studies have explored wind capacity penetrations of up to 20–30% of system peak and have found that the primary considerations are economic, not physical [9].

The reference [9] is Utility Wind Integration and Operating Impact State of the Art, J Smith et al, IEEE Transactions on Power Systems (2007), which states:

On the cost side, at wind penetrations of up to 20% of system peak demand, it has been found that system operating cost increases arising from wind variability and uncertainty amounted to about 10% or less of the wholesale value of the wind energy [2]. This finding will need to be reexamined as the results of higher-wind-penetration studies—in the range of 25% to 30% of peak balancing-area load—become available. However, achieving such penetrations is likely to require one or two decades.

The reference [2] here is Wind plant integration, E DeMeo et al, IEEE Power Energy Mag (2005) which has the same data as the NREL study, not surprising as two of the authors are the same.

Albadi & El-Saadany 2010 compile some data; note the reference again is to peak penetration:

From Albadi & El-Saadany 2010

Figure 1

We can see a big range. For example:

  • the UK costs at the top of the graph, with peak penetration of 20-40% (=average penetration of 6-12%) having costs of around $5/MWh, or 0.5c/kWh
  • the Finland costs at the bottom right with 32-65% (=average of 10-20%, but I’m unsure of their capacity factor) having costs of around $1/MWh, or 0.1c/kWh

The study that produced these particular values (and some others) is H. Holttinen et al 2009; what appears to be the same data is in a paper in Wind Energy (2011). However, the studies that produced the underlying data are older: Finland and Nordic – PhD thesis by Holttinen 2004; Sweden – paper by U Axelsson et al from 2005; Ireland – 2004 study; UK – paper by Strbac et al from 2007; Germany – Dena Grid study from 2005; Minnesota – paper for the Minnesota Public Utilities Commission from 2006; and California – paper by Porter et al from 2007.

Holttinen et al 2011 summary:

From the cost estimates presented in the investigated studies it follows that at wind penetrations of up to 20 % of gross demand (energy), system operating cost increases arising from wind variability and uncertainty amounted to about 1–4 €/MWh of wind power produced (Fig. 5). This is 10 % or less of the wholesale value of the wind energy. The actual impact of adding wind generation in different balancing areas can vary depending on local factors. Important factors identified to reduce integration costs are aggregating wind plant output over large geographical regions, larger balancing areas, and utilizing shorter gate closure times with accurate forecast systems and sub-hourly schedule changes.

An important point, often missed by pundits looking at Denmark:

The interconnection capacity to neighbouring systems is often significant. For the balancing costs, it is then essential to note in the study setup whether the interconnection capacity can be used for balancing purposes or not. A general conclusion is that if interconnection capacity is allowed to be used also for balancing purposes, then the balancing costs are lower compared to the case where they are not allowed to be used.

The two points for Greennet Germany at the same wind penetration level reflect that balancing costs increase when neighbouring countries get more wind (the same applies for Greennet Denmark). For a small part of an interconnected system, a wind integration study stating a high penetration level can also be misleading if the wind penetration in neighbouring areas is low and interconnection capacity plays a major part in integration.

They have many interesting points in their paper:

In Denmark the TSO has estimated the impacts of increasing the wind penetration level from 20 % to 50 % (of gross demand) and concluded that further large scale integration of wind power calls for exploiting both, domestic flexibility and international power markets with measures on the market side, production side, transmission side and demand side ([19] and [20]).

This kind of implies there are big issues, but the documentation is locked away in conference proceedings. Surely some published papers have come out of this important question so I will continue to dig..

A digression, for people concerned that wind power research and costing ignores transmission costs – another (counter-)example:

Transmission cost is the extra cost in the transmission system when wind power is integrated. Either all extra costs are allocated to wind power, or only part of the extra costs are allocated to wind power – grid reinforcements and new transmission lines often benefit also other consumers or producers and can be used for many purposes, such as increase of reliability and/or increased trading. The cost of grid reinforcements due to wind power is therefore very dependent on where the wind power plants are located relative to load and on the grid infrastructure, and one must expect numbers to vary from country to country.

Grid reinforcement costs are by nature dependent on the existing grid. The costs vary with time and are dependent on when the generator is connected. After building some lines, often several generators can be connected before new reinforcement needs occur. After a certain time, new lines, substations or something else is needed.

The grid reinforcement costs are not continuous; there can be single very high cost reinforcements. Using higher voltages generally results in lower costs per MW transported but this also means that there are even higher increments of capacity and grid costs. The same wind power plant, connected at different times, may therefore lead to different grid reinforcement costs. For transmission planning, the most cost effective solution in cases that require considerable grid reinforcements would be to build the transmission network for the final planned amount of wind power in the network – instead of having to upgrade transmission lines in several phases.


It seems like everyone studying wind power believes the additional costs incurred as a result of having to ramp conventional power systems up and down are relatively low – typically less than 0.5c/kWh at 20% penetration. Likewise, everyone agrees that there is a real cost to be paid. The cost at 50% penetration is unclear – as is whether it is even feasible.

There doesn’t seem to be any real world data for high wind penetrations, which is not surprising as Germany, a wind power leader, has only about 10% of (annual average) power coming from wind, and Denmark is effectively part of a much larger grid (by virtue of interconnection).

Whether or not the current estimates factor in the lifetime impact on power stations (due to many more heating and cooling cycles stressing various parts of the plant) is something that might only be found by the real-world experiment of doing it for a couple of decades.

[Note that many statements and press releases on the subject of wind do not clarify whether they are talking about “peak”, i.e., nameplate, or “average”, i.e. the nameplate x capacity factor – it is essential to clarify this before putting any weight on the claim].


Wind Power Impacts on Electric Power System Operating Costs: Summary and Perspective on Work to Date, JC Smith, EA DeMeo, B Parsons & M Milligan, NREL (2004)

Overview of wind power intermittency impacts on power systems, MH Albadi & EF El-Saadany, Electric Power Systems Research (2010)

Design and operation of power systems with large amounts of wind power, H Holttinen et al, VTT (2009) & Impacts of large amounts of wind power on design and operation of power systems, results of IEA collaboration, H Holttinen et al, Wind Energy (2011)


In Parts I and IV – Wind, Forecast Horizon & Backups – we looked at a few basics, including capacity credit, which is basically how much “credit” the grid operator gives you for being there. If you are a 1GW coal-fired power station you probably get around 850MW – 900MW capacity credit. This reflects the availability that your power generation offers. The grid operator needs to ensure the region or country can meet the demand in any given second, minute, hour, day, week, month and year.

And so the grid operator’s calculation is a statistical one – given a “fleet” (always a strange name to me for such immobile units) of generating units how can we be sure that we can meet demand in every minute of the year? Conventional generation (gas, coal, nuclear) is mostly “dispatchable” – which means that, apart from unexpected outages, you can choose to run the gas plant or nuclear power station when you want.

Wind power, on the other hand, is not dispatchable. And it turns out that its capacity credit, as a proportion of actual capacity, reduces significantly as its penetration into the network increases. Another way of saying it is that wind is less reliable than conventional generation at any given point in time and this problem gets worse the more wind power you have available.

However, this doesn’t present some insuperable obstacle to using wind. What it means at the moment in various countries is that you can use windpower when the wind is blowing, and when it’s not blowing (or not much) you can crank up a gas plant. As wind power penetration grows in a given network, the variability of this ever-larger power source must reduce the efficiency of the conventional units operating at part load or in reserve. Everyone agrees on this point. However, in this series so far we have not reviewed any actual papers or data on the loss of efficiency – something to look forward to.

Baseload Power

On a different point – the focus of this article – the intermittency of windpower raises an important question, especially as it is the cheapest source of renewable energy (given that hydro is “tapped out” in most developed countries).

Is it possible to generate base load power from wind?

If not, then there is clearly a limit to the growth of wind power. (We have already covered some real problems of high wind power penetration in V – Grid Stability As Wind Power Penetration Increases – those problems haven’t gone away). This is related to the question of the maximum reduction in GHG emissions from electricity generation while “keeping the lights on”.

Generally we can think of increasing wind power in a region as creating a benefit and a problem:

  • the benefit – more wind power usually means more geographical dispersion which averages out peaks and troughs (see IV – Wind, Forecast Horizon & Backups)
  • the problem – more wind power means peaks and troughs cause more problems for the grid (a 100MW unforecast fluctuation over a few hours is easily dealt with in most countries, but a 5GW unforecast fluctuation is more problematic)

So.. on with this article..

In Czisch & Ernst (2001), the authors consider a massive area wind power network. I don’t believe this paper is a complete answer because some questions are unanswered, but the idea is instructive.

As they state:

Europe currently has by far the highest installed wind power capacity of all regions in the world. However, this is not due to Europe being the best possible place to build wind power, but rather to a favourable political climate

It is a slightly different take on the question I asked in X – Nationalism vs Inter-Nationalism – why is Germany building windfarms in Germany instead of places with lots of wind?

In the graph below “Full Load Hours/Year” is basically a way of showing capacity factor (not to be confused with capacity credit). Capacity factor is average output/nameplate and depends on how much wind you get – across the UK capacity factor is just over 30%, in Germany it is around 18%, and in Oklahoma maybe 41%.

In this genre of papers it’s common to dispense with the crazy idea of percentages – who can understand them? Instead of percentages, let’s use the much more intuitive idea of the output expressed as if the wind farm ran at full load for x number of hours in the year. So 2100 full load hours = the old school 24% (2100/8760)..
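
For reference, the conversion both ways is trivial (8,760 hours in a year):

```python
HOURS_PER_YEAR = 8760

def full_load_hours(capacity_factor):
    return capacity_factor * HOURS_PER_YEAR

def capacity_factor_from_flh(flh):
    return flh / HOURS_PER_YEAR

print(full_load_hours(0.24))              # ~2100 h/yr
print(capacity_factor_from_flh(2900))     # ~0.33, the paper's "2900 full load hours"
```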

Anyway, via color coding (which at least follows a familiar pattern), we see why Ireland and the north of the UK have a windfarm advantage, along with European and African coastal regions:

From Czisch & Ernst 2001

Figure 1

The data above is based on wind speeds taken from reanalysis data (from ECMWF). “Reanalysis” is basically a blend of data and models filling in the blanks where data doesn’t exist. (See Water Vapor Trends under the sub-heading “Filling in the Blanks”).

Then they look at the correlation between different sites, based on actual measurements.

For people new to wind power, a low correlation is good. A high correlation is bad. Why? If you have 1000x 3MW wind turbines and the correlation of output power between the turbines is high then they will be producing 3GW some of the time, 1.5GW some of the time, and 0GW some of the time – their output power rises and falls in unison. If the correlation is low then they will be producing (for example) 1GW nearly all of the time – this is clearly much better – as one turbine slows down, another speeds up.

Low correlation implies sustained output. High correlation implies big peaks and troughs.

As we might expect – as the turbines get further apart their output power correlation reduces = good. For example, the wind in London is well correlated with the wind in Reading, England (60km apart), but not well correlated with the wind in Moscow (2,500 km apart).
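
A toy calculation shows why low correlation matters so much. This is a stylized sketch – identically-distributed Gaussian “outputs” and a single pairwise correlation for every pair of sites, neither of which is true of real wind farms:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, hours = 19, 8760

def fleet_relative_std(rho):
    """Std dev of the fleet-average output, where each site has std 1.0
    and every pair of sites has correlation rho."""
    cov = np.full((n_sites, n_sites), rho)
    np.fill_diagonal(cov, 1.0)
    outputs = rng.multivariate_normal(np.zeros(n_sites), cov, size=hours)
    return outputs.mean(axis=1).std()

for rho in (0.9, 0.5, 0.1):
    print(f"rho={rho}: fleet std ~{fleet_relative_std(rho):.2f} "
          f"(one site alone: 1.00)")
```

With rho = 0.9 the fleet is barely smoother than a single site (~0.95); with rho = 0.1 the variability drops to under 0.4 of a single site’s – the “one turbine slows down, another speeds up” effect in numbers.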

And for any given geographical separation the correlation is higher (=bad) as we consider longer time periods. This is also expected. There might be considerable minute-to-minute fluctuation between two sites due to the turbulent nature of wind, but the average across 12 hours will be more correlated because the overall weather patterns cover bigger areas:


From Czisch & Ernst 2001

Figure 2

Super Grids

Now let’s look at longer time periods and longer distances (I don’t understand the dots in this graph):

From Czisch & Ernst 2001

Figure 3

The paper goes on to select the best regions, place large hypothetical wind farms in those regions and calculate the wind farm output:

The potentials described in the above section altogether make a capacity of nearly 950 GW and close to 2800 TWh annual electricity production. This is more than the total demand of the EU countries plus Norway which was 2100 TWh in 1997. The average production exceeds 2900 full load hours.

Electricity consumption in the EU has increased a fair bit since 1997 – EU consumption in 2014 was about 2800 TWh (which for reference is about 320GW continuously) – I’m not sure if this represents economic growth, adding countries to the EU or both.

But the regions that they propose have more than sufficient wind to meet much higher output. The population density is low and the wind potential is high in the regions they select – unlike the places where most European wind power is being built at the moment (high population density and low wind speeds).

The key lines in the graph are the red line = demand and the black line = supply for one scenario:

From Czisch & Ernst 2001

Figure 4


To me their paper doesn’t quite complete the picture. They provide some more insights, including transmission and storage requirements, and propose providing baseload power but not peak power for the whole of the EU. Given the potential wind power in the regions they select it’s not clear what limits actually exist.

The questions seem straightforward:

  1. For a given scenario (nameplate per region) produce the usual graph of hourly output, not as a time-series, but in declining output order (e.g. fig 6 below), so we can see for how many hours the output drops below key values
  2. Calculate the actual nameplate capacity needed in the various production regions to ensure “Loss of Load Probability” (LOLP) below the standard 9 years per century (or some other metric)

Armed with this data we would be able to see the number of wind turbines required and the transmission requirements between each region. And what, if any, pumped hydro storage would be needed in addition. And what, if any, conventional backup generation would be needed.
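
Question 1 is straightforward to compute once you have an hourly output series. A minimal sketch (the synthetic series at the end is just a stand-in for real regional output data):

```python
import numpy as np

def duration_curve(hourly_output_gw):
    """Hourly output sorted in declining order - the fig. 6 style curve."""
    return np.sort(np.asarray(hourly_output_gw))[::-1]

def hours_below(hourly_output_gw, threshold_gw):
    """How many hours per year output falls below a key value (question 1)."""
    return int(np.sum(np.asarray(hourly_output_gw) < threshold_gw))

def shortfall_hours(supply_gw, demand_gw):
    """Hours where supply misses demand - one ingredient of question 2;
    a proper LOLP needs outage statistics over many simulated years."""
    return int(np.sum(np.asarray(supply_gw) < np.asarray(demand_gw)))

# Toy usage with a made-up year of fleet output (GW):
rng = np.random.default_rng(2)
output = 900 * rng.beta(2, 4, size=8760)     # hypothetical 900 GW fleet, mean ~300 GW
print(hours_below(output, 100))              # hours per year below 100 GW
```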

In a scenario with wind power producing a large portion of EU energy any problems get amplified.

For example – and this is just my example – if we built a wind network to supply say 320GW it might have a nameplate capacity of 900GW (something like 450,000 wind turbines of 2MW). But if our analysis showed that individual regions at any given time would be supplying most of the load, the nameplate capacity of the whole system might need to be 3TW (1,500,000 wind turbines). If the system instead had gas power as a backup for say 10% of the time when the wind “super-system” dropped well below the demand, the fuel cost would be relatively low, but the construction cost would be very high for the power supplied – because we would have built 300GW of supply to run just 10% of the time.
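
It’s easy to put a rough number on the cost penalty of rarely-used backup. Every figure below is an assumption for illustration – the $/kW figure is an order-of-magnitude guess, and fuel, O&M and discounting are all ignored:

```python
backup_gw = 300               # gas fleet from the example above
capex_per_kw = 700            # assumed $/kW, order of magnitude for gas plant
lifetime_years = 30
running_fraction = 0.10       # runs ~10% of hours

capex = backup_gw * 1e6 * capex_per_kw                       # total $, 1 GW = 1e6 kW
energy_mwh_per_year = backup_gw * 1e3 * 8760 * running_fraction
print(f"${capex / lifetime_years / energy_mwh_per_year:.0f}/MWh")  # ~$27/MWh capital alone
```

Run 10% of the time, the capital charge per MWh is roughly an order of magnitude higher than for the same plant at baseload duty – before any fuel is burned.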

Some of these ideas are taken up by other papers.

In Archer & Jacobson (2007) the authors look at the statistics of wind energy across 19 sites in the midwest of the USA:

In this study, benefits of interconnecting wind farms were evaluated for 19 sites, located in the Midwestern United States, with annual average wind speeds at 80 m above ground, the hub height of modern wind turbines, greater than 6.9 m/s (class 3 or greater). We found that an average of 33% and a maximum of 47% of yearly-averaged wind power from interconnected farms can be used as reliable, baseload electric power.

From Archer & Jacobson 2007

Figure 5

Unfortunately, their description of “reliable baseload power” indicates they are “having a laugh” – let’s hope they didn’t really mean it.

After noting the problem of intermittency of wind power they state:

On the other hand, because coal combustion can be controlled, coal energy is not considered intermittent and is often used as “baseload” energy. Nevertheless, because coal plants were shut down for scheduled maintenance 6.5% of the year and unscheduled maintenance or forced outage for another 6% of the year on average in the United States from 2000-2004, coal energy from a given plant is guaranteed only 87.5% of the year, with a typical range of 79-92% (NERC 2005, Giebel 2000).

And in their wind power analysis they are then content when their hypothetical system meets this 79% threshold, given that is the (minimum) benchmark for one coal-fired power station. Hopefully readers of this series can see the problem with this threshold. Grid operators provide baseload power by combining multiple units of dispatchable power. No one is under the illusion that one coal-fired power station is nirvana. This was probably true even 100 years ago in England. Grid operators provide some statistical inevitability of keeping the lights on by using more than one power station.
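
To see how quickly a fleet of individually-unreliable units becomes reliable, here is a minimal sketch – assuming independent outages and identical units, both of which are simplifications:

```python
from math import comb

def p_meeting_demand(n_units, units_needed, availability=0.875):
    """P(at least units_needed of n_units independent units are available)."""
    return sum(comb(n_units, k) * availability**k * (1 - availability)**(n_units - k)
               for k in range(units_needed, n_units + 1))

print(f"{p_meeting_demand(1, 1):.3f}")    # one plant, no backup: 0.875
print(f"{p_meeting_demand(12, 8):.3f}")   # 12 units covering 8 units of load: ~0.99
```

That jump – from 87.5% for a single unit to around 99% with a modest reserve margin – is the statistical service the grid operator actually provides, and it is the benchmark a wind fleet has to be measured against, not the 79% of one coal plant.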

The real question we want to answer is whether combining ever more distant wind farms can actually provide baseload power to meet grid operator requirements and therefore replace a network of conventional power stations. And how many turbines in each of how many locations, what transmission capacity, and so on.

The key curves are given below; the blue one represents the combination of all 19 sites, with output power placed in decreasing order. As we can see, moving from 1 to 7 to 19 sites increases our minimum output:

From Archer & Jacobson 2007

Figure 6

It’s a useful graph. In this example we would put up 19 wind farms in a wide area, with the furthest extremes separated by almost 900 km. And for 90% of the year we would get more than 13% of the nameplate output. And for 95% of the year we would get more than 10% of the output.

So does that mean we should build nameplate capacity at 10x our required demand, and provide gas plants to match demand for the 2½ weeks a year that the wind farms can’t keep up?

It’s half an answer, like the earlier paper we reviewed. At least it gives us the graph we need to see (figure 6, their figure 3) for this specific geographical distribution.


Baseload electric power is not an optional extra – unless the population votes to do without it, which seems unlikely. It does seem possible that the right combination of wind farms across a super-grid might be a solution for most of the EU’s energy needs. It needs to be evaluated in more detail and costed.

If the statistics of wind power variability make this solution a possible contender, then the costs will be a minimum of $1-5TN plus transmission costs. Perhaps $2-10TN. Perhaps more. I’m just trying to get a broad idea of the cost.

For the EU this is not such a large amount. Germany has already spent more than €40BN just to get 10% of electricity from wind power.

If the EU is serious about decarbonizing electricity generation then putting up wind turbines in Germany and central Europe – instead of investigating a European super-grid capitalizing on the best regions – is possibly a monumental failure of policy (maybe this subject has already been discussed and discarded).


High wind power penetration by the systematic use of smoothing effects within huge catchment areas shown in a European example, Czisch & Ernst, Windpower (AWEA), 2001

Supplying baseload power and reducing transmission requirements by interconnecting wind farms, CL Archer & MZ Jacobson, Journal of Applied Meteorology and Climatology (2007)