This field is changing rapidly and so some of these issues may be better resolved than appears from some of the extracts. But it is useful to understand that currently there are limits to the penetration of some kinds of renewable energy on the electricity grid and it is still an area of international research.

In essence the “old-fashioned” power system had lots of big rotating equipment generating power at the business end. This has a lot of inertia – by which I mean inertia in the physics sense, rather than in the sense of institutional resistance.

The rotation is at a speed that generates 50 Hz or 60 Hz depending on where in the world you live. Supply has to match demand on a second-by-second basis. As the load on the system increases, it slows down the rotation of all of the large generation equipment and this allows two things:

  • automatic response from systems (that monitor the frequency) to increase power
  • flags to the operator to bring other power supply systems online (standby systems, aka reserves)
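The physics can be sketched with the aggregate swing equation. This is a toy calculation – the inertia constant, fleet size and load step are all assumed for illustration:

```python
# Toy swing-equation calculation: how fast frequency falls when load jumps
# and only stored rotational energy responds. Every number here (system
# size, inertia constant, load step) is assumed for illustration.

f0 = 50.0   # nominal frequency (Hz)
H = 5.0     # aggregate inertia constant (s) -- assumed
S = 30e9    # rated apparent power of the synchronous fleet (VA) -- assumed
dP = 1e9    # sudden 1 GW load increase (W) -- assumed

# Rate of change of frequency immediately after the step:
# df/dt = -dP * f0 / (2 * H * S)
rocof = -dP * f0 / (2 * H * S)
print(f"RoCoF = {rocof:.3f} Hz/s")

# Time to fall from 50.0 to 49.5 Hz if nothing else responded:
t_to_49_5 = (f0 - 49.5) / abs(rocof)
print(f"about {t_to_49_5:.1f} s to reach 49.5 Hz")
```

The point of the calculation: the more synchronous inertia (H × S) on the system, the slower the frequency falls, and the more time the automatic responses and the operator have to react.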

Wind turbines also rotate but they don’t act the same as “old-fashioned” power systems – their inertial energy, in most cases, is effectively decoupled from the grid. This isn’t a problem at small penetration levels but the problem increases as wind power penetration increases. This is called System Non-Synchronous Penetration (SNSP) – although in different places there may be different terms and acronyms.

There is also the critical issue of fault ride-through: if the line voltage drops or collapses, the wind farm should stay connected and continue to provide power when the voltage recovers. This matters most at high penetration levels because, without fault ride-through in wind farms, a temporary line voltage drop could take out the entire wind power generation system.

Here is Göksu et al (2010):

Conventional power plants, which are composed of synchronous generators, are able to support the stability of the transmission system by providing inertia response, synchronizing power, oscillation damping, short-circuit capability and voltage backup during faults. These features allow the conventional power plants to comply with the grid codes, thus today’s TSOs have a quite stable and reliable grid operation worldwide.

Wind turbine generator technical characteristics, which are mainly fixed and variable speed induction generators, doubly fed induction generators and synchronous generators with back to back converters, are very different to those of the conventional generators. As the installation of WPPs, which consist of these wind turbine generators, has reached important levels that they have a major impact on the characteristics of the transmission system..

Coughlan, Smith, Mullane & O’Malley (2007):

Renewable energy generation systems are being connected in increasing numbers to power systems worldwide. Of the commercially available systems, wind-turbine generators (WTGs) using non-synchronous-based technology are proving most successful. Unlike the synchronous machine whose operating characteristics have been documented and understood for decades, the generation of bulk ac electricity using non-synchronous machine-based generators is a relatively new phenomenon.

The effects of large penetrations of non-synchronous machine-based generators on power system stability have not been thoroughly studied. This problem is most serious in smaller power systems such as the Republic of Ireland, which have very large proportions of installed wind capacity compared to conventional generation and limited interconnection capability. Such systems are likely to experience possible stability issues related to wind generation, earlier than larger systems having lower proportions of installed wind generation..

..The level of wind turbine modelling detail required for power system stability studies remains an area where there is as yet not widespread agreement. This issue is complicated by the large number of wind turbine designs, the requirement for models in different time-frames, and the application of the model. As the end users of wind turbine models have predominantly been power system operators and due to the general lack of power system analysis expertise on the part of the wind turbine manufacturers, the wind turbine model development process has also proved cumbersome. Models are developed on behalf of manufacturers by third parties and supplied to system operators for use.

As many of the turbine models are not yet mature, system operators have acted as model testers reporting model bugs, irregularities, and errors and often advising manufacturers on appropriate action. Remedial action is then often relayed to third parties who make the necessary software changes.

Zhao & Nair (2010):

Renewable energy generation systems are being increasingly connected to power system networks worldwide. Among all commercially available systems, wind turbine generators (WTGs) using non-synchronous-based technology are being used predominantly. Unlike the traditional synchronous machine whose operating characteristics have been understood for decades, electricity generation using induction machine-based wind generators is relatively recent. In order to allow for the continued penetration of wind generation into electricity networks in the absence of operational experience, dynamic models of WTG have become more important for carrying out stability studies..

.. However, it is generally observed during large-scale wind integration studies that the so-called ‘standard’ components of the wind turbine models are quite often not standardised among manufacturers. Further during simulations, more detailed individual models (i.e. manufacturer-specific models) are used for analysis. The non-disclosure of the model details makes it very difficult to diagnose problems using simulation results. Considerable effort is needed to reproduce the model in a case containing no confidential data..

..Unlike conventional synchronous generators, where injection tests can be employed to test the unit response during a grid disturbance, a wind farm does not provide this option. Utilities rely solely on the WTGs model to determine how they would react to system dynamics, and therefore, the accuracy and validity of the model is important. To date, a very few number of wind turbine generator field test results are published..

..The validation of user-written models with field measurements needs careful planning and preparation, which includes obtaining permission from authorities, the power system operator and the wind turbine manufacturer. Disturbances which the wind turbines and the power system network can be subjected to are often limited. For example, it is not always easy to obtain permission to execute a balanced three-phase short-circuit fault in the transmission network, even though the results of such experiments would be highly valuable for validating the dynamic wind turbine model.

[Emphasis added].

Hansen & Michalke (2007):

Today, the wind turbines on the market mix and match a variety of innovative concepts with proven technologies for both generators and power electronics. The main trend of modern wind turbines/wind farms is clearly the variable-speed operation and a grid connection through a power converter interface.

Two variable-speed wind turbine concepts have a substantial predominance on the market today. One of them is the variable-speed wind turbine concept with partial-scale power converter, known as the doubly fed induction generator (DFIG) concept. The other is the variable-speed wind turbine concept with full-scale power converter and synchronous generator. These two variable-speed wind turbine concepts compete against each other on the market, with their more or less weak and strong features.

Nowadays, the most widely used generator type for units above 1 MW is the doubly fed induction machine. Presently, the primordial advantage of the DFIG concept is that only a percentage of power generated in the generator has to pass through the power converter. This is typically only 20–30% compared with full power (100%) for a synchronous generator-based wind turbine concept, and thus it has a substantial cost advantage compared to the conversion of full power.

It seems that many national grid codes have been revised, and also that many people are studying the subject. Zhao & Nair compared wind farm models with reality under a line fault and found quite a discrepancy. However, in that case reality was a lot better than the model predicted, which is obviously a good thing.

A key question is what level of wind power the network can support before “curtailment”. Mc Garrigle, Deane & Leahy (2013) discussed some scenarios in Ireland given that the current system non-synchronous penetration (SNSP) limit is set by the grid operator at 60%, but might be lifted to 75%.

You might think that a 60% limit on windpower means wind can achieve a penetration of 60% – pretty good, right?

But no. Remember that wind power is an intermittent resource. If wind power was like a conventional “dispatchable” generation source you would keep increasing wind farms and the output would rise up to 60% and then there would be no more wind farms built (until such time as the wind farm electrical characteristics were improved, or other methods of improving grid stability had been introduced).

Taking an extreme counter-example just for the purposes of illustration – imagine that some of the time there is zero wind, and the rest of the time all the wind-farms are running at 100%. And let’s say that the average output is 40% of nameplate capacity – i.e., we have no wind 60% of the time and lots of wind 40% of the time. Let’s say the country needs 5GW continuously and the government target to come from wind power is 40%, or 2GW on average. If we have 5GW of “nameplate” windpower capacity that implies that we can produce our target of 2GW.

However, the grid requires curtailment of any “non-synchronous” source above 60%. So in fact, from 5GW nameplate we will be producing 5GW x 60% for 40% of the time and 0 for the remainder. The result is an output of only 1.2GW, not 2GW – i.e., 24% of the national output instead of 40% of the national output.
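The arithmetic can be checked in a few lines – this just replays the extreme illustration above, it is not a model of the real Irish system:

```python
# Replaying the extreme illustration: wind runs at 100% of nameplate 40% of
# the time and at 0% otherwise, and the grid curtails any non-synchronous
# output above the SNSP limit.

nameplate = 5.0        # GW of installed wind
demand = 5.0           # GW of constant national demand
snsp_limit = 0.60      # maximum non-synchronous share of generation
windy_fraction = 0.40  # fraction of time the wind is at full output

# Without curtailment the average output meets the 40% target:
uncurtailed = nameplate * windy_fraction           # 2.0 GW average

# With curtailment, output is capped at 60% of demand whenever it is windy:
curtailed = min(nameplate, demand * snsp_limit) * windy_fraction

print(uncurtailed, curtailed)   # about 2.0 GW vs 1.2 GW
print(curtailed / demand)       # about 0.24 -> 24% of national output
```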

Under this extreme scenario, it is impossible to produce the required 40% of national output from windpower.

Of course, this scenario is not reality. But the challenge remains – when the grid requires curtailment the limitation has a greater effect than we might first think.

Mc Garrigle et al studied the effect of wind power curtailment under a variety of scenarios (including a certain amount of offshore wind power, currently a lot more expensive than onshore but less correlated to onshore wind power):

The primary result from this work is an estimate of the required installed wind capacities for both NI [Northern Ireland] and ROI [Republic of Ireland] to meet their 2020 RES-E targets. It is evident that this varies greatly due to the large differences in wind curtailment that will occur based on the assumptions made.

The required capacity estimates range from 5911 MW to 6890 MW which results in extra cost of c. €459 million between what is considered to be the lowest technically feasible wind curtailment scenario (high offshore wind at SNSP limit of 75%, including TCGs) to that of the highest (low offshore wind at SNSP limit of 60%, including TCGs).

In the context of the electricity system this is a considerable extra expense similar in magnitude to the cost of two of the proposed North-South interconnector between NI and ROI. This illustrates the importance of increasing the SNSP limit as high as technically and economically feasible.

There were also dependencies on the interconnection to Great Britain. The way to think about this is:

  • can you export power to another country when you produce “too much”?
  • if that other country is also producing significant power from the same source (windpower in this example) how correlated is their output to yours?

Grid interconnections aren’t cheap. And if Great Britain is producing peak windpower at the same time as NI/ROI is producing peak windpower then the interconnections are of no benefit for that particular case.

Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation


Wind Turbine Modelling for Power System Stability Analysis—A System Operator Perspective, Coughlan, Smith, Mullane & O’Malley, IEEE Transactions on Power Systems (2007)

Assessment of wind farm models from a transmission system operator perspective using field measurements, S. Zhao N.-K.C. Nair, IET Renewable Power Generation (2010)

Fault ride-through capability of DFIG wind turbines, Anca Hansen & Gabriele Michalke, Renewable Energy (2007)

How much wind energy will be curtailed on the 2020 Irish power system? EV Mc Garrigle, JP Deane & PG Leahy, Renewable Energy (2013)

Overview of Recent Grid Codes for Wind Power Integration, Altin, Göksu, Teodorescu, Rodriguez, Jensen & Helle, 12th International Conference on Optimization of Electrical and Electronic Equipment (2010)

For some countries – cold, windy ones like England – wind power appears to offer the best opportunity for displacing GHG-emitting electricity generation. In most developed countries renewable electricity generation from hydro is “tapped out” – i.e., there is no opportunity for developing further hydroelectric power.

There’s a lot of confusion about wind power. Some of this we looked at briefly in earlier articles.

Nameplate & Actual

The nameplate capacity is not what anyone (involved in the project) is expecting to get out of it.

So if you buy “10GW” of wind farms you aren’t expecting 10 GW x 8,760 (thanks to DeWitt Payne for updating me on how many hours there are in a year) = 87.6 TWh of annual electricity generation. Depending on the country, the location, the turbines and turbine height you will get an “average utilization”. In the UK that might be something like 30%, or even a little higher. So for 10GW of wind farms – everyone (involved in the project) is expecting something like 26 TWh of annual electricity generation (10 x 8760 x 0.3 / 1000).
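The arithmetic is worth making explicit – a quick check of the numbers above:

```python
nameplate_gw = 10.0
hours_per_year = 8760
capacity_factor = 0.30  # "average utilization", roughly right for the UK

# If the farms ran flat out all year (they never do):
max_twh = nameplate_gw * hours_per_year / 1000   # 87.6 TWh

# What everyone involved actually expects:
expected_twh = max_twh * capacity_factor         # about 26.3 TWh

print(max_twh, expected_twh)
```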

We could say, on average the wind farm will produce 3GW of power. That’s just another way of writing 26 TWh annually. So 10GW of nameplate wind power does not need “10 GW of backup” or “7 GW of backup”. Does it need “3GW of backup”? Let’s look at capacity credit.

Just before we do, if you are new to renewables, whenever you see statements, press releases and discussions about “X MW of wind power being added” check whether it is nameplate power or actual expected power. Often it is secondarily described in terms of TWh or GWh – this is the actual energy expected over the year from the wind farm or project.

Capacity Credit

The capacity credit is the “credit” the operator gives you for providing “capacity” when it is most in demand. Operators have peaks and troughs in demand. There are lots of ways of looking at this; here is one example from Gross et al 2006, showing the time-of-day variation of demand for different seasons in the UK. We can see winter is the time of peak demand:

From Gross et al 2006

Figure 1

If you have a nuclear power station it probably runs 90% of the time. Some of the off-line time is planned outages for maintenance, upgrades, replacement of various items. Some of the off-line time is unplanned outages, where the grid operator gets 10 minutes notice that “sorry Sizewell B is going off line, can’t chat now, have a great day”, taking out over 1GW of capacity. So the capacity credit for nuclear reflects the availability and also the fact that the plant is “dispatchable” – apart from unplanned outages it will run when you want it to run.

The grid of each country (or region within a country) is a system. Because all of the generation within most of the UK is connected together, Sizewell B doesn’t need to be backed up with its own 1GW of coal-fired power stations. All you need is to have sufficient excess capacity to cope with peak demand given the likelihood of any given plant(s) going off line.

It’s a pool of resources to cope with:

  • a varying level of demand, and
  • a certain amount of outage from any given resource

Wind is “intermittent” (likewise for solar). So you can’t dispatch it when you need it. Everyone (involved in producing power, planning power, running the grid) knows this. Everyone knows that sometimes the wind turns off.

If you add lots of wind power – let’s say a realistic 3GW of wind, from 10GW of nameplate capacity – the capacity credit isn’t 90% of 3GW like you get for a nuclear power station. It is a lot smaller. This reflects the fact that at times of peak demand there might be no wind power (or almost no wind power). However, wind does have some capacity credit.

This is a statistical calculation – for the UK, the winter quarter is used to calculate capacity credit (because it is the time of maximum demand). The value depends on the wind penetration, that is, how much energy is expected from the wind in that period. For low penetrations of wind, say 500 MW, you get full capacity credit (capacity credit = 500MW). For higher penetrations it changes. Let’s say wind power provides 20% of total demand. Total demand averages about 40GW in the UK so wind power would be producing an average 8GW. For significant penetrations of wind power you get a low percentage of the output as capacity credit. The value is calculated from the geographical spread and statistical considerations, and it might be 10-20% of the expected wind power. Let’s say 8GW of output power (averaged over the year) gets 0.8GW – 1.6GW of “capacity credit”.

This means that when calculating how much aggregate supply is available windpower gets a tick in the box for 0.8GW – 1.6GW (depending on the calculation of credit). This is true even though there are times when the wind power is zero. How can it get capacity credit above zero when sometimes its power is zero? Because it is a statistical availability calculation. How can Sizewell B get a capacity credit when sometimes it has an unplanned outage? We can’t rely on it either.
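The statistical availability argument can be sketched with a toy Monte Carlo. This is nothing like the effective-load-carrying-capability calculation operators actually use, and every number in it (fleet size, availability, wind distribution) is invented purely for illustration:

```python
import random

# Toy Monte Carlo showing why intermittent wind earns a non-zero capacity
# credit: even though its peak-time output is sometimes zero, it still cuts
# the probability of a supply shortfall at peak.

random.seed(42)
TRIALS = 20_000
PEAK = 60.0  # GW of peak demand -- assumed

def conventional():
    # 64 independent 1 GW units, each 95% available at peak -- assumed
    return sum(1 for _ in range(64) if random.random() < 0.95)

def wind():
    # Wind at peak: zero 30% of the time, otherwise 0-8.6 GW -- assumed
    return 0.0 if random.random() < 0.30 else random.uniform(0.0, 8.6)

def lolp(extra_supply):
    """Loss-of-load probability: how often supply falls short of peak."""
    return sum(conventional() + extra_supply() < PEAK
               for _ in range(TRIALS)) / TRIALS

p_without = lolp(lambda: 0.0)
p_with = lolp(wind)
print(f"shortfall probability without wind: {p_without:.3f}")
print(f"shortfall probability with wind:    {p_with:.3f}")
```

The capacity credit is then defined by asking how much extra “firm” capacity would have produced the same drop in shortfall probability.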

The point is, hopefully it is clear, sorry for laboring it – when the wind is zero, Sizewell B and another 60GW of capacity are probably available. (If it’s not clear, please ask, I’m sure I can paint a picture with an appropriate graph or something).

Low Capacity Credit Doesn’t Mean Low Benefit – And What We Do About Low Capacity Credit

Let’s say the capacity credit for wind was zero, just for sake of argument. Even then, wind still has a benefit (it has a cost as well). Its benefit comes from the fact that the marginal cost of energy is zero (neglecting O&M costs). And the GHG emissions are zero from all the energy produced. It has displaced GHG-emitting electricity generation.

What we do about the low capacity credit is we add – or retain – GHG-emitting conventional backup. The grid operator, or the market (depending on the country in question), has the responsibility/motivation to provide backup. Running a conventional station less often, or keeping it running at part load rather than full load, reduces its efficiency.

Let’s say we produce 70 TWh of electricity from wind (20% of UK electricity requirement of 350 TWh). Wonderful. We have displaced 70 TWh of GHG emitting power. But we haven’t. We have kept some GHG emitting power stations “warmed up” or “operational at part load” and so we might have displaced 65 TWh or 60 TWh (or some value) of GHG emitting power stations because we ran the conventional generators less efficiently than before.

We will look at the numbers in a later article.

So wind has benefit even though it is not “dispatchable”, even though sometimes at peak demand it produces zero energy.

Statistics of Wind and Forecast Time Horizons

Let’s suppose that even though wind is not “dispatchable” we had a perfect forecast of wind speeds around the region for the next 12 months. This would mean we could predict the power from the wind turbines for every hour of the day for the next 365 days.

In this imaginary case, power plant could be easily scheduled to be running at the right times to cover the lack of wind power. We could make sure that major plants did not have outages in the periods of prolonged low wind speeds. The efficiency of our “backup” generation would be almost as perfect as before wind power was introduced. So if we produced 70 TWh of wind energy we would displace just about 70 TWh of conventional GHG emitting generation. We would also probably need less excess capacity in the system because one area of uncertainty had been removed.

Of course we don’t have that. But at the same time, our forecast horizon is not zero.

The unexpected variability of wind changes with the time horizon we are concerned about. Let’s put it another way, if we are getting 1.5 GW from all of our wind farms right now, the chance of it dropping to 0 GW 10 minutes from now is very small. The chance of it being 0 GW 1 hour from now is quite small. But the chance of it being 0 GW in 4 hours might be quite a bit higher.

I hope readers are impressed with the definitive precision with which I nailed the actual probabilities there.

There are many dependencies – the location of the wind farms (the geographical spread), the actual country in question and the season and time of day under consideration.

We’ve all experienced the wind in a location dropping to nothing in an instant. But as you install more turbines over a wider area the output variance over a given time period reduces. A few graphs from Boyle (2007) should illuminate the subject.
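The statistical intuition can also be sketched in code – a toy model where each farm’s hourly change has a region-wide (shared) component and a farm-local component; the split between the two is invented purely for illustration:

```python
import random
import statistics

random.seed(0)
HOURS = 20_000

def hourly_changes(n_farms, shared_weight=0.3):
    """Hourly output change per unit of total capacity, for n farms whose
    variation is partly region-wide (shared) and partly farm-local."""
    changes = []
    for _ in range(HOURS):
        shared = random.gauss(0, 1)  # weather system common to the region
        total = sum(shared_weight * shared
                    + (1 - shared_weight) * random.gauss(0, 1)  # local gusts
                    for _ in range(n_farms))
        changes.append(total / n_farms)
    return changes

one_farm = statistics.stdev(hourly_changes(1))
fifty_farms = statistics.stdev(hourly_changes(50))
print(f"std of hourly change: 1 farm {one_farm:.2f}, 50 farms {fifty_farms:.2f}")
```

The farm-local component averages away roughly as 1/√n, but the shared component does not – which is why geographical spread reduces variability, but only so far.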

Here is a comparison of 1 hour changes between a single wind farm and half of Denmark:

Boyle 2010 Denmark hourly change single vs all

Figure 2

Here is a time-series simulation of a given 1000MW capacity in one location (single farm) vs that same capacity spread across the UK:

Boyle 2010 Wind power single vs distributed

Figure 3

Here is an example from the actual output of the wind power network in Germany:

From Boyle 2010

Figure 4

At some stage I will dig out some more recent actuals. The author of that chapter comments:

Care should be taken in drawing parallels, however, between experiences in Germany and Denmark and the situation elsewhere, such as in the UK. Wind conditions over the whole British electricity supply system should be assumed to be different unless proved otherwise. Differences in latitude and longitude, the presence of oceans, as well as the area covered by the wind power generation industry make comparisons difficult. The British wind industry, for example, has a longer north–south footprint than in Denmark, while in Germany the wind farms have a strong east–west configuration.


Here is an example from Gross et al (2006) of variations across 1, 2 and 4 hours:

From Gross et al 2006

Figure 5

Here’s another breakdown of how the UK wind output varies, this time as a probability distribution:

Boyle 2010 PD of wind power in UK

Figure 6

In another paper on the UK, Strbac et al 2007:

Standard deviations of the change in wind output over 0.5hr and 4hr time horizons were found to be 1.4% and 9.3% of the total installed wind capacity, respectively. If, for example, the installed capacity of wind generation is 10 GW (given likely locations of wind generation), standard deviations of the change in wind generation outputs were estimated to be 140 MW and 930 MW over the 0.5-h and 4-h time horizons, respectively.
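Those percentages scale directly with installed capacity – reproducing the quoted figures:

```python
installed_mw = 10_000  # 10 GW of installed wind, as in the extract

sd_half_hour = 0.014 * installed_mw  # 1.4% of capacity over 0.5 h
sd_four_hour = 0.093 * installed_mw  # 9.3% of capacity over 4 h

print(sd_half_hour, sd_four_hour)  # 140 MW and 930 MW
```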

What this means for a grid operator is that predictability changes with the time horizon. This matters because their job is to match supply and demand and if the wind is going to be high, less conventional stations are needed to be “warmed up”. If the wind is going to be low, more conventional stations are needed. But if we didn’t know anything in advance – that is, if we could get anything between 0GW to 10GW with just 30 minutes notice – it would present a much bigger problem.

Closing the Gate, Spinning Reserves and Frequency

The grid operator has to match supply and demand (see note for an extended extract on how this works).

Demand varies, but must be met – except for some (typically) larger industrial customers who have agreed contracts to turn off their plant under certain conditions, such as when demand is high.

The grid operator has a demand forecast based on things like “reviewing the past” and as a result enters into supply contracts for the hour ahead. This is the case in the UK; other countries have different rules and time periods, but the same principles apply: the grid operator “closes the gate”. To me this is not an intuitive term because he/she also has contracts for flexible supply and reserves – in case demand is above what is expected, or contracted plant goes offline. So gate closure means the contract position is fixed for the next time period.

However, the actual problem is to meet demand and to do this flexible plant is up and running and part loaded. Some load matching is done automatically. This happens via frequency. If you increase the load on the system the frequency starts to fall. Reserve plant increases its output automatically as the frequency falls (and the converse). This is how the very short term supply-demand matching takes place.
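That automatic frequency response is conventionally implemented as proportional “droop” control. Here is a toy version – the 4% droop setting and the unit sizes are assumed for illustration, not taken from any grid code:

```python
# Toy proportional "droop" governor: a part-loaded reserve unit raises its
# output as frequency falls. The 4% droop setting and unit sizes are
# assumed for illustration.

F_NOMINAL = 50.0  # Hz
DROOP = 0.04      # 4% droop: a 4% frequency deviation spans full output
P_RATED = 500.0   # MW rating of the reserve unit -- assumed

def droop_output(p_setpoint, f_measured):
    """Unit output (MW) after the governor responds to frequency."""
    deviation = (F_NOMINAL - f_measured) / (F_NOMINAL * DROOP)
    return max(0.0, min(P_RATED, p_setpoint + deviation * P_RATED))

print(droop_output(300.0, 50.0))  # no deviation: stays at 300 MW
print(droop_output(300.0, 49.9))  # frequency sags: picks up ~25 MW extra
```

Because every governed unit responds in proportion to the same frequency signal, the load increase is shared across the fleet without any central instruction.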

So the uncertainty about the wind output over the next hour is the key for the UK grid operator. It is a key factor in changing the cost of reserves as wind power penetration increases. If the gate closure was for the next 12 hours it should be clear that the cost to the grid operator of matching supply and demand would increase – given that the uncertainty about wind is higher the longer the time period in question.

Whether a one-hour or a 12-hour gate closure makes a huge difference to the overall cost of supply is likely a very complicated question, and not one I expect we can answer easily, or at all. The market mechanism in the UK is built around the 1-hour gate closure, and so suppliers have all created pricing models based on this.

Grid Stability – SNSP and Fault Ride-Through

System Non-Synchronous Penetration (SNSP) and fault ride-through capability are important for wind power. Basically wind power has different characteristics from existing conventional plant and has the potential to bring the grid down. We will look at the important question of what wind power does to the stability of the grid in a subsequent article.

Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases – Brief simplified discussion of Fault ride-through and System Non-Synchronous Penetration (SNSP)


Renewable Electricity and the Grid : The Challenge of Variability, Godfrey Boyle, Earthscan (2007) – textbook

The Costs and Impacts of Intermittency: An assessment of the evidence on the costs and impacts of intermittent generation on the British electricity network, Gross et al, UK Energy Research Centre (2006) – free research paper

Impact of wind generation on the operation and development of the UK electricity systems, Goran Strbac, Anser Shakoor, Mary Black, Danny Pudjianto & Thomas Bopp, Electric Power Systems Research (2007)


Extract from Gross et al 2006 explaining the UK balancing in a little detail – the whole document is free and well-worth spending the time to read:

The supply of electricity is unlike the supply of other goods. Electricity cannot be readily stored in large amounts and so the supply system relies on exact second-by-second matching of the power generation to the power consumption. Some demand falls into a special category and can be manipulated by being reduced or moved in time.

Most demand, and virtually all domestic demand, expects to be met at all times.

It is the supply that is adjusted to maintain the balance between supply and demand in a process known as system balancing.

There are several aspects of system balancing. In the UK system, contracts will be placed between suppliers and customers (with the electricity wholesalers buying for small customers on the basis of predicted demand) for selling half hour blocks of generation to matching blocks of consumption. These contracts can be long standing or spot contracts.

An hour ahead of time these contract positions must be notified to the system operator which in Great Britain is National Grid Electricity Transmission Limited. This hour-ahead point (some countries use as much as twenty-four hour ahead) is known as gate closure.

At gate closure the two-sided market of suppliers and consumers ceases. (National Grid becomes the only purchaser of generation capability after gate closure and its purpose in doing so is to ensure secure operation of the system.) What actually happens when the time comes to supply the contracted power will be somewhat different to the contracted positions declared at gate closure. Generators that over or under supply will be obliged to make good the difference at the end of the half hour period by selling or buying at the system sell price or system buy price. Similar rules apply to customers who under or over consume.

This is known as the balancing mechanism and the charges as balancing system charges. This resolves the contractual issues of being out-of-balance but not the technical problems.

If more power is consumed than generated then all of the generators (which are synchronised such that they all spin at the same speed) will begin to slow down. Similarly, if the generated power exceeds consumption then the speed will increase. The generator speeds are related to the system frequency. Although the system is described as operating at 50 Hz, in reality it operates in a narrow range of frequency centred on 50 Hz. It is National Grid’s responsibility to maintain this frequency using “primary response” plant (defined below). This plant will increase or decrease its power output so that supply follows demand and the frequency remains in its allowed band. The cost of running the primary response plant can be recovered from the balancing charges levied on those demand or supply customers who did not exactly meet their contracted positions. It is possible that a generator or load meets its contract position by consuming the right amount of energy over the half hour period but within that period its power varied about the correct average value. Thus the contract is satisfied but the technical issue of second-by-second system balancing remains..

..Operating reserve is generation capability that is put in place following gate closure to ensure that differences in generation and consumption can be corrected. The task falls first to primary response.

This is largely made up of generating plant that is able to run at much less than its rated power and is able to very quickly increase or decrease its power generation in response to changes in system frequency. Small differences between predicted and actual demand are presently the main factor that requires the provision of primary response. There can also be very large but infrequent factors that need primary response such as a fault at a large power station suddenly removing some generation or an unpredicted event on TV changing domestic consumption patterns.

The primary response plant will respond to these large events but will not then be in a position to respond to another event unless the secondary response plant comes in to deal with the first problem and allow the primary response plant to resume its normal condition of readiness. Primary response is a mixture of measures. Some generating plant can be configured to automatically respond to changes in frequency. In addition some loads naturally respond to frequency and other loads can be disconnected (shed) according to prior agreement with the customers concerned in response to frequency changes.

Secondary response is normally instructed in what actions to take by the system operator and will have been contracted ahead by the system operator. The secondary reserve might be formed of open-cycle gas-turbine power stations that can start and synchronise to the system in minutes. In the past in the UK and presently in other parts of the world, the term spinning reserve has been used to describe a generator that is spinning and ready at very short notice to contribute power to the system. Spinning reserve is one example of what in this report is called primary response. Primary response also includes the demand side actions noted in discussing system frequency..
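The primary-response behaviour described in these extracts – generation automatically adjusting to frequency deviations – can be sketched with a toy droop-control simulation. This is a minimal illustration, not a model of any real grid; the inertia constant, droop gain and load step are invented for the example:

```python
# Toy simulation of primary frequency response ("droop" control).
# All power values are illustrative, in per-unit on the system base.

F_NOMINAL = 50.0   # Hz
H = 5.0            # inertia constant, seconds (assumed)
DROOP_GAIN = 1.0   # extra p.u. of generation per Hz of frequency drop (assumed)
P_SET = 0.8        # scheduled generation, p.u.
DT = 0.01          # integration time step, seconds

def simulate(load, t_end=60.0):
    """Integrate a simplified swing equation with droop response."""
    f = F_NOMINAL
    t = 0.0
    while t < t_end:
        p_gen = P_SET + DROOP_GAIN * (F_NOMINAL - f)  # primary response
        # A power imbalance accelerates/decelerates the synchronised machines
        f += DT * F_NOMINAL * (p_gen - load) / (2.0 * H)
        t += DT
    return f, p_gen

# A sudden load increase from 0.8 to 0.9 p.u.: frequency settles slightly
# below 50 Hz, leaving a steady-state offset that secondary response
# (dispatched by the operator) would later correct.
f_final, p_final = simulate(load=0.9)
print(f"frequency settles at {f_final:.3f} Hz, generation {p_final:.3f} p.u.")
```

The steady-state offset (0.1 Hz here, for a gain of 1 p.u. per Hz) is exactly why secondary response exists: primary response arrests the deviation, secondary response restores the frequency and re-arms the primary plant.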

In Part I we had a brief look at the question of intermittency – renewable energy is mostly not “dispatchable”, that is, you can’t choose when it is available. Sometimes wind energy is there at the right time, but sometimes when energy demand is the highest, wind energy is not available.

The statistical availability depends on the renewable source and the country using it. For example, solar is a pretty bad solution for England where the sun is a marvel to behold on those few blessed days it comes out (we all still remember 1976 when it was more than one day in a row), but not such a bad solution in Texas or Arizona where the peak solar output often arrives on days when peak electricity demand hits – hot summer days when everyone turns on their air-conditioning.

The question of how often the renewable source is available is an important one, but it is a statistical one.

Lots of confusion surrounds the topic. A brief summary of reality:

  1. The wind does always blow “somewhere”, but if we consider places connected to the grid of the country in question the wind will often not be blowing anywhere, or if it is “blowing” the output of the wind turbines will be a fraction of what is needed. The same applies to solar. (We will look at details of the statistics in later articles).
  2. The fact that at some times of peak demand there will be little or no wind or solar power doesn’t mean it provides no benefit – you simply need to “back up” the wind/solar with “dispatchable” plant, i.e. currently conventional plant. If you are running on wind “some of the time” you are displacing a conventional plant and saving GHG emissions, even if at “other times” you are running on conventional power. A wind farm doesn’t need “a dedicated backup” – that is the wrong way to think about it; instead there needs to be sufficient “dispatchable” resource somewhere in the grid, available for use when intermittent sources are not running.
  3. The costs and benefits are the key and need to be calculated.

However, the problem of intermittency depends on many factors including the penetration of renewables. That is, if you produce 1% of the region’s electricity from renewables the intermittency problem is insignificant. If you produce 20% it is significant and needs attention. If you produce 40% from renewables you might have a difficult problem. (We’ll have a look at Denmark at some stage).

Remember (or learn) that grid operators already have to deal with intermittency – power plants have planned and, worse, unplanned outages. Demand moves around, sometimes in unexpected ways. Grid operators have to match supply and demand, otherwise the outcome is frequency excursions, load shedding or blackouts. So – to some extent – they already deal with this conundrum.

What do grid operators think about the problem of integrating intermittent renewables, i.e., wind and solar into the grid? It’s always instructive to get the perspectives of people who do the actual work – in this case, of balancing supply and demand every day.

Here’s an interesting (free) paper: The intermittency of wind, solar, and renewable electricity generators: Technical barrier or rhetorical excuse? Benjamin K. Sovacool. As always I recommend reading the paper for yourself. Here is the abstract:

A consensus has long existed within the electric utility sector of the United States that renewable electricity generators such as wind and solar are unreliable and intermittent to a degree that they will never be able to contribute significantly to electric utility supply or provide baseload power. This paper asks three interconnected questions:

  1. What do energy experts really think about renewables in the United States?
  2. To what degree are conventional baseload units reliable?
  3. Is intermittency a justifiable reason to reject renewable electricity resources?

To provide at least a few answers, the author conducted 62 formal, semi-structured interviews at 45 different institutions including electric utilities, regulatory agencies, interest groups, energy systems manufacturers, nonprofit organizations, energy consulting firms, universities, national laboratories, and state institutions in the United States.

In addition, an extensive literature review of government reports, technical briefs, and journal articles was conducted to understand how other countries have dealt with (or failed to deal with) the intermittent nature of renewable resources around the world. It was concluded that the intermittency of renewables can be predicted, managed, and mitigated, and that the current technical barriers are mainly due to the social, political, and practical inertia of the traditional electricity generation system.

Many comments and opinions from grid operators are provided in this interesting paper. Here is one from California:

Some system operators state that the intermittence of some renewable technologies greatly complicates forecasting. David Hawkins of the California Independent Systems Operator (ISO) notes that:

Wind, for instance, can be forecasted and has predictable patterns during some periods of the year. California uses wind as an energy resource but it has a low capacity factor for meeting summer peak-loads. The total summer peak-load is 45,000 MW of load, but in January daily peak-loads are 29,000 MW, meaning that 16,000 MW of our system load is weather sensitive. In the winter and spring months, big storms come into California which creates dramatic changes in wind. We have seen ramps as large as 800 MW of wind energy increases in 30 min, which can be quite challenging.

..A report from the California ISO found that relying on wind energy excessively complicated each of the five types of forecasts. As the study concluded, ‘‘although wind generator output can be forecast a day in advance, forecast errors of 20–50% are not uncommon’’

And a little later:

For instance, California Energy Commissioner Arthur Rosenfeld comments that:

Germany had to build a huge reserve margin (close to 50 percent) to back up its wind. People show lots of pictures of wind turbines in Germany, yet you never see the standby power plants in the picture. This is precisely why utilities fear wind: the cost per kWh of wind on the grid looks good only without the provision of large margins of standby power.

Thomas Grahame, a senior researcher at the U.S. Department of Energy’s Office of Fossil Fuels, comments that:

‘‘when intermittent sources become a substantial part of the electricity generated in a region, the ability to integrate the resource into the grid becomes considerably more complex and expensive. It might require the use of electricity storage technologies, which will add to cost. Additionally, new transmission lines will also be needed to bring the new power to market. Both of these add to the cost’’

The author looks at issues surrounding conventional unplanned outages, at the risks and costs involved in the long cycle of building a new plant plus getting it online – versus the rapid deployment opportunities with wind and solar.

I’m aware of various studies that show that up to 20% wind is manageable on a grid, but above that issues may arise (e.g. Gross et al., 2006). There are, of course, large numbers of studies with many different findings – my recommendation for placing any study in context is first ask “what percentage of renewable penetration was this study considering”. (There are many other questions as well – change the circumstances and assumptions and your answers are different).

The author of this paper is more convinced that any issues are minor and the evidence all points in one direction:

Perhaps incongruously, no less than nine studies show that the variability of renewables becomes easier to manage the more they are deployed (not the other way around, as some utilities suggest). In one study conducted by the Imperial College of London, researchers assessed the impact that large penetration rates (i.e., above 20 percent) of renewable energy would have on the power system in the United Kingdom. The study found that the benefits of integrating renewables would far exceed their costs, and that ‘‘intermittent generation need not compromise electricity system reliability at any level of penetration foreseeable in Britain over the next 20 years.’’ Let me repeat this conclusion for emphasis: renewable energy technologies can be integrated at any level of foreseeable penetration without compromising grid stability or system reliability.

Unfortunately, there was no reference provided for this.

Claiming that the variability of renewable energy technologies means that the costs of managing them are too great has no factual basis in light of the operating experience of renewables in Denmark, Germany, the United Kingdom, Canada, and a host of renewable energy sites in the United States.

As I commented earlier, I recommend that readers interested in the subject read the whole paper rather than just my extracts. It’s an interesting and easy read.

I can’t agree that the author has conclusively, or even tentatively, demonstrated that wind & solar (intermittent renewables) can be integrated into a grid to any arbitrary penetration level.

In fact most of the evidence cited in his paper is at penetration levels of 20% or less. Germany is cited because the country “is seeking to generate 100 percent of its electricity from renewables by 2030”, which doesn’t quite stand as evidence (and it would be uncharitable to comment on the current coal-fired power station boom in Germany). Denmark I would like to look at in a later article – is it a special case, or has it demonstrated the naysayers all to be wrong? We will see.

The penetration level is the key, combined with the technology and the country in question. It’s a statistical question. Conceptually it is not very difficult. Analyze meteorological data and/or actuals for wind and solar power generation in the region in question over a sufficiently long time and produce data in the format required for different penetration levels:

  • minimums at times of peak demand
  • how long output from X MW of installed capacity stays below Y% of that capacity, how often this occurs, and how it correlates with peak demand times
  • ..and so on
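The sort of statistical analysis suggested above can be sketched in a few lines. The wind series here is synthetic (a toy stand-in for real meteorological or metered data), so the numbers mean nothing in themselves; the point is the shape of the analysis:

```python
import math

# Synthetic hourly capacity factors for one year of a hypothetical wind
# fleet (a stand-in for real data): a seasonal cycle plus "weather" wobble.
CAPACITY_MW = 1000.0
hours = range(8760)
cf = [max(0.0, 0.35 + 0.25 * math.sin(2 * math.pi * h / 8760)
                + 0.30 * math.sin(h / 13.0)) for h in hours]
output_mw = [c * CAPACITY_MW for c in cf]

# Statistic 1: how often output falls below 10% of installed capacity
low_hours = sum(1 for p in output_mw if p < 0.10 * CAPACITY_MW)
frac_low = low_hours / len(output_mw)

# Statistic 2: minimum output during evening peak-demand hours (18:00-20:00)
peak_output = [output_mw[h] for h in hours if h % 24 in (18, 19, 20)]
min_at_peak = min(peak_output)

print(f"{frac_low:.1%} of hours below 10% of capacity")
print(f"minimum output during evening peaks: {min_at_peak:.0f} MW")
```

With real data the same few statistics, recomputed per region and per technology, are exactly the “format required for different penetration levels”.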

This does mean – it should be obvious – that each region and country will get different answers with different technologies. Linking together different regions with sufficient redundant transmission capacity is not trivial; nor is “adding sufficient storage”.

If the solution to the problem is an un-costed redundant transmission line, we need to ask how much it will cost. The answer might be surprisingly high to many readers. If the solution to the problem is “next-generation storage” then the question is “will your solution work without this next-generation storage and what specification & cost are required?”

Of course, I would like to suggest another perspective to keep in mind in the discussion of renewables: the sunk cost of the existing power generation, transmission and distribution network is extremely high, and more than a century of incremental improvement and diffusion of knowledge and practical experience has led us to today – with obviously much lower marginal costs of using and expanding conventional power. But we are where we are. What I hope to shed some light on in this series is what renewables actually cost, what benefits they bring and what practical difficulties exist in expanding them.

The author concludes:

Conventional power systems suffer variability and reliability problems, just to a different degree than renewables. Conventional power plants operating on coal, natural gas, and uranium are subject to an immense amount of variability related to construction costs, short-term supply and demand imbalances, long term supply and demand fluctuations, growing volatility in the price of fuels, and unplanned outages.

Contrary to proclamations stating otherwise, the more renewables that get deployed, the more – not less – stable the system becomes. Wind- and solar-produced power is very effective when used in large numbers in geographically spaced locations (so the law of averages yields a relative constant supply).

The issue, therefore, is not one of variability or intermittency per se, but how such variability and intermittency can best be managed, predicted, and mitigated.

Given the preponderance of evidence referenced here in favor of integrating renewables, utility and operator objections to them may be less about technical limitations and more about tradition, familiarity, and arranging social and political order.

The work and culture of people employed in the electricity industry promote ‘‘business as usual’’ and tend to culminate in dedicated constituencies that may resist change.

Managers of the system obviously prefer to maintain their domain and, while they may seek increased efficiencies and profits, they do not want to see the introduction of new and disruptive ‘‘radical’’ technologies that may reduce their control over the system.

In essence, the current ‘‘technical’’ barriers to large-scale integration of wind, solar, and other renewables may not be technical at all, and more about the social, political, and practical inertia of the traditional electricity generation system.

I’ve never met a grid operator, but I have worked with many people in technical disciplines in a variety of fields – in operations, production, maintenance, technical support, engineering and design. This includes critical infrastructure: process plants, energy and telecommunications networks, both private and municipal. You get a mix of personality types. Faced with a new challenge, some relish the opportunity (more skills, more employability, promotion & pay opportunities, just the chance to learn and do something new). Others are reluctant and resist.

The author of the paper didn’t have so many doubts about this subject – other studies have concluded it will all work fine so the current grid operators are trapped in the past.

If I were asking lots of people in the field who do the actual job about the technical feasibility of a new idea, and they unanimously said it would be a real problem, I would be concerned.

I would be interested to know why grid operators in the US that the author interviewed are resistant to intermittent renewables. Perhaps they understand the problem better than the author. Perhaps they don’t. It’s hard to know. The evidence Sovacool brings forward includes the fact that grid operators currently have to deal with unplanned outages. I suspect they are aware of this problem more keenly than Sovacool because it is their current challenge.

Perhaps US grid operators think there are no real technical challenges but expect that no one will pay for the standby generation required. Or they have an idea what the system upgrade costs are and just expect that this is a cost too high to bear. It’s not clear from this paper. I did peruse his PhD thesis that this paper was drawn from but didn’t get a lot more enlightenment.

However, it’s an interesting paper to get some background on the US grid.


The intermittency of wind, solar, and renewable electricity generators: Technical barrier or rhetorical excuse? Benjamin K. Sovacool, Utilities Policy (2009)

[Later note, Sep 2015: it’s clear – as can be seen in the later comments that follow the article – that there is a difference between a number of papers which cannot be explained by ‘improved efficiencies in manufacturing’ or ‘improved solar-electricity conversion efficiencies’. The discrepancy amounts to one group making a large mistake: taking “energy input” to be electricity input, rather than the fuel put into power stations to create that electricity – or the reverse. I suspect the paper I highlight below is making the mistake, in which case this article is out by a factor of about 3 against solar being a free lunch. In due course I will try to fight through all the papers again to get to the bottom of it. I also have not been able to confirm that any of the papers really account for building the factories that manufacture the solar panels (perhaps they only consider the marginal electricity used to make each solar cell).]

There are lots of studies of the energy and GHG input into production of solar panels. I’ve read some and wanted to highlight one to look at some of the uncertainties.

Lu & Yang (2010) looked at the energy required to make, transport and install a (nominal) 22 kW rooftop solar PV system in Hong Kong – and what it produced in return. Here is the specification of the modules (the system had 125 modules):

Lu & Yang (2010) – solar panel details

For the system’s energy efficiency, the average energy efficiency of a Sunny Boy inverter is assumed 94%, and other system losses are assumed 5%.

This is a grid-connected solar panel – that is, it is a solar panel with an inverter to produce the consumer a.c. voltage, and excess power is fed into the grid. If it had the expensive option of battery storage so it was self-contained, the energy input (to manufacture) would be higher (note 1).

For stand-alone (non-rooftop) systems the energy used in producing the structure becomes greater.

Here’s the pie chart of the estimated energy consumed in different elements of the process:

From Lu & Yang (2010)

A big part of the energy is consumed in producing the silicon, with a not insignificant amount for slicing it into wafers. BOS = “balance of system” and we see this is also important. This is the mechanical structure and the inverter, cabling, etc.

The total energy per square meter:

  • silicon purification and processing – 666 kWh
  • slicing process – 120 kWh
  • fabricating PV modules – 190 kWh
  • rooftop supporting structure – 200 kWh
  • production of inverters – 33 kWh
  • other energy used in system operation and maintenance, electronic components, cables and miscellaneous – 125 kWh
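As a quick consistency check, the per-square-meter components above can be summed and scaled by the ~150 m² array area (the area is my inference from note 2, not stated directly in the paper). The result lands close to the paper's 206,000 kWh total, the small gap presumably being rounding and minor items:

```python
# Energy inputs per m² of PV array, from Lu & Yang (2010)
components_kwh_per_m2 = {
    "silicon purification and processing": 666,
    "slicing process": 120,
    "fabricating PV modules": 190,
    "rooftop supporting structure": 200,
    "production of inverters": 33,
    "operation, maintenance, cables, misc.": 125,
}
ARRAY_AREA_M2 = 150  # inferred from note 2, not given directly in the paper

per_m2 = sum(components_kwh_per_m2.values())      # 1,334 kWh/m²
total_kwh = per_m2 * ARRAY_AREA_M2                # ~200,000 kWh
print(f"{per_m2} kWh/m² -> {total_kwh:,} kWh vs the paper's 206,000 kWh")
```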

Transportation energy use turned out to be pretty small, as might be expected (and is ignored in the total).

Therefore, the total energy consumed in producing and installing the 22 kW grid-connected PV system is 206,000 kWh, with 29% from BOS, and 71% from PV modules.

What does it produce? Unfortunately, the production data for the period is calculated rather than measured, due to issues with the building management system (the plan was to measure the electrical production; it appears only some data points were gathered).

Now there’s a few points that have an impact on solar energy production. This isn’t comprehensive and is not from their paper:

  • Solar cell rated values are specified at 25ºC, but with sunlight on a solar cell – i.e., when it’s working – it can run at temperatures up to 50ºC. The loss due to temperature is maybe 12–15% (I am not clear how accurate this number is).
  • Degradation per year is between 0.5% and 1% depending on the type of silicon used (I don’t know how reliable these numbers are at 15 years out)
  • Dust reduces energy production. It’s kind of obvious but unless someone is out there washing it on a regular basis you have some extra, unaccounted losses.
  • Inverter quality

Obviously we need to calculate what the output will be. Most locations, and Hong Kong is no exception, have well-characterized surface solar radiation (W/m²). The angle of the solar cells has a very significant impact. This installation was at 22.5º – close to the best angle of about 30º for maximizing solar absorption.

Lu & Yang calculate:

For the 22 kW roof-mounted PV system, facing south with a tilted angle of 22.5, the annual solar radiation received by the PV array is 266,174 kWh using the weather data from 1996 to 2000, and the annual energy output (AC electricity) is 28,154 kWh. The average efficiency of the PV modules on an annual basis is 10.6%, and the rated standard efficiency of the PV modules from manufacturer is 13.3%. The difference can be partly due to the actual higher cell operating temperature.

The energy output of the PV system could be significantly affected by the orientations of the PV modules. Therefore, different orientations of PV arrays and the corresponding annual energy output are investigated for a similar size PV system in Hong Kong, as given in Table 3. Obviously, for the same size PV system, the energy output could be totally different if the PV modules are installed with different orientations or inclined angles. If the 22 kW PV system is installed on vertical south-facing facade, the system power output is decreased by 45.1% compared that of the case study.

So the energy used will be returned in approximately 7.3 years.

Energy in = 206 MWh. Energy out = 28 MWh per year.
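The payback arithmetic is then one line, and relocating the same array simply rescales the annual output (the 1,000 vs 1,770 kWh/m² insolation figures for Germany and Hong Kong are from note 2):

```python
ENERGY_IN_KWH = 206_000      # embodied energy of the 22 kW system (Lu & Yang)
OUTPUT_HK_KWH = 28_154       # annual AC output in Hong Kong (Lu & Yang)

payback_hk = ENERGY_IN_KWH / OUTPUT_HK_KWH        # ~7.3 years

# Same panels in Germany: ~1,000 kWh/m² annual insolation vs ~1,770 in HK
output_de = OUTPUT_HK_KWH * 1000 / 1770           # ~15,900 kWh/year
payback_de = ENERGY_IN_KWH / output_de            # ~13 years

print(f"Hong Kong: {payback_hk:.1f} years; Germany: {payback_de:.1f} years")
```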

Location, Location, Location

Let’s say we put that same array on a rooftop in Germany, the poster child for solar uptake. The annual solar radiation received by the PV array would be about 1,000 kWh per m², about 60% of the value in HK (note 2).

Energy in = 206 MWh. Energy out in Germany = 15.8 MWh per year (13 years payback).

I did a quick calculation using 13.3% module efficiency (rated performance at 25ºC), a 15% loss due to the high module temperature in direct sunlight (when it is producing most of its electricity), an inverter & cabling efficiency of 90%, and a 0.5% per year degradation in solar efficiency. Imagine no losses from dust. Here is the year-by-year production – assuming 1,000 kWh/m² of solar radiation annually and 150 m² of PV cells:

[Spreadsheet screenshots: year-by-year energy production and cumulative total vs the 206 MWh energy input]

Here we get to energy payback at end of year 14.
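My quick calculation above can be reproduced as follows. The 15% temperature loss, 90% inverter/cabling efficiency and 0.5%/year degradation are the stated assumptions, not measured values:

```python
AREA_M2 = 150                # PV cell area (inferred in note 2)
INSOLATION_KWH_M2 = 1000     # annual insolation, Germany-like location
MODULE_EFF = 0.133           # rated efficiency at 25 C
TEMP_LOSS = 0.15             # assumed loss from high operating temperature
INVERTER_EFF = 0.90          # assumed inverter + cabling efficiency
DEGRADATION = 0.005          # assumed 0.5% output loss per year
ENERGY_IN_KWH = 206_000      # embodied energy of the system

year_1 = AREA_M2 * INSOLATION_KWH_M2 * MODULE_EFF * (1 - TEMP_LOSS) * INVERTER_EFF

# Accumulate degraded annual output until the embodied energy is repaid
cumulative, year = 0.0, 0
while cumulative < ENERGY_IN_KWH:
    year += 1
    cumulative += year_1 * (1 - DEGRADATION) ** (year - 1)

print(f"year-1 output: {year_1:,.0f} kWh; energy payback in year {year}")
```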

I’m not sure if anyone has done a survey of the angles of solar panels on residential rooftops, but if the angle is 10º off its optimum value we might see, very roughly, something towards a 10% loss in output. Add in some losses for dust (pop quiz – how many people have seen residents cleaning their solar panels on the weekend?). What’s the real long-term efficiency of a typical economical consumer solar inverter? It’s easy to see the energy payback moving around significantly in real life.

Efficiency Units – g CO2e / kWh and Miles per Gallon

When considering the GHG production in generating electricity, there is a conventional unit – amount of CO2 equivalent per unit of electricity produced. This is usually grams of CO2 equivalent (note 3) per kWh (a kilowatt hour is 3.6 MJ, i.e., 1,000 J per second for 3,600 seconds).

This is a completely useless unit to quote for solar power.

Imagine, if you will, the old school (new school and old school in the US) measurement of car efficiency – miles per gallon. You buy a Ford Taurus in San Diego, California and it gets you 28 miles per gallon. You move to Portland, Maine and now it’s doing 19 miles per gallon. It’s the exact same car. Move back to San Diego and it gets 28 miles per gallon again.

You would conclude that the efficiency metric was designed by ..

I’m pretty sure my WiFi router uses just about the same energy per GBit of data regardless of whether I move to Germany, or go and live at the equator. And equally, even though it is probably designed to sit flat, if I put it on its side it will still have the same energy efficiency to within a few percent. (Otherwise energy per GBit would not be a useful efficiency metric).

This is not the case with solar panels.

With solar panels the metric you want to know is how much energy was consumed in making it and where in the world most of the production took place (especially the silicon process). Once you have that data you can consider where in the world this technology will sit, at what angle, the efficiency of the inverter that is connected and how much dust accumulates on those beautiful looking panels. And from that data you can work out the energy efficiency.

And from knowing where in the world it was produced you can work out, very approximately (especially if it was in China) how much GHGs were produced in making your panel. Although I wonder about that last point..

The key point on efficiency in case it’s not obvious (apologies for laboring the point):

  • the solar panel cost = X kWh of electricity to make – where X is a fixed amount (but hard to figure out)
  • the solar panel return = Y kWh per year of electricity – where Y is completely dependent on location and installation angle (but much easier to figure out)

The payback can never be expressed in g CO2e/kWh without stating the final location. And the GHG reduction can never be expressed without stating both the manufacturing location and the final location.

Moving the Coal-Fired Power Station

Now let’s consider that not all energy is created equally.

Let’s suppose that instead of the solar panel being produced in an energy-efficient country like Switzerland, it’s produced in China. I can find data on China’s electricity production and on its GHG emissions, but China also creates massive GHG emissions from things like cement production, so I can’t isolate the GHG intensity of its electricity production. And Chinese statistics carry more question marks than those from some other places in the world. Maybe one of our readers can provide this data?

Let’s say a GHG-conscious country is turning off efficient (“efficient” from a conventional fossil-fuel perspective) gas-fired power stations and promoting solar energy into the grid. And the solar panels are produced in China.

Now, while the energy payback stays the same, the GHG payback might move to the 20-year mark or beyond – because each kWh of “cost” came from coal-fired power stations while each kWh of return displaced energy from gas-fired power stations. Consider the converse: if solar panels are made in a GHG-efficient country and shipped to, say, Arizona (lots of sun) to displace coal-fired power, it is a much better equation. (I have no idea if Arizona gets energy from coal, but last time I was there it was very sunny.)
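As a hedged illustration of this point – the grid carbon intensities below are round-number assumptions of mine, not data from any of the papers – the GHG payback stretches well past the ~13-year energy payback when coal-heavy manufacturing displaces gas-fired generation:

```python
ENERGY_IN_KWH = 206_000        # embodied energy of the system
ANNUAL_OUT_KWH = 15_900        # annual output in a Germany-like location

# Assumed grid carbon intensities (g CO2e/kWh) - illustrative round numbers
MFG_INTENSITY = 900            # coal-heavy grid where panels are made
DISPLACED_INTENSITY = 450      # efficient gas-fired generation displaced

embodied_ghg_t = ENERGY_IN_KWH * MFG_INTENSITY / 1e6          # tonnes CO2e
saved_per_year_t = ANNUAL_OUT_KWH * DISPLACED_INTENSITY / 1e6

ghg_payback_years = embodied_ghg_t / saved_per_year_t
print(f"GHG payback: {ghg_payback_years:.0f} years")
```

Reverse the two intensities (clean manufacturing, coal displaced) and the same arithmetic gives a payback of a few years – which is the whole point about location.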

But if we ship solar panels from China to France to displace nuclear energy, I’m certain we are running a negative GHG balance.

Putting solar panels in high latitude countries and not considering the country of origin might look nice – and it certainly moves the GHG emissions off your country’s balance sheet – but it might not be as wonderful as many people believe.

It’s definitely not free.

Other Data Points

How much energy is consumed in producing the necessary parts?

This is proprietary data for many companies.

Those very large forward-thinking companies that might end up losing business if important lobby groups took exception to their business practices, or if a major government black-listed them, have wonderful transparency. A decade or so ago I was taken on a tour through one of the factories of a major pump company in Sweden. I have to say it was quite an experience. The factory workers volunteer to take the continual stream of overseas visitors on the tour and all seem passionate about many aspects including the environmental credentials of their company – “the creek water that runs through the plant is cleaner at the end than when it comes into the plant”.

Now let’s picture a solar PV company which has just built its new factory next to a new coal-fired power station in China. You are the CEO or the marketing manager. An academic researcher calls to get data on the energy efficiency of your manufacturing process. Your data tells you that you consume a lot more power than the datapoints from Siemens and other progressive companies that have been published. Do you return the call?

There must be a “supplier selection” bias given the data is proprietary and providing the data will lead to more or less sales depending on the answer.

Perhaps I am wrong and the renewables focus of countries serious about reducing GHGs means that manufacturers are only put on the approved list for subsidies and feed-in tariffs when their factory has been thoroughly energy audited by an independent group?

In a fairly recent paper, Peng et al (2013) – two of whose coauthors appear to be the authors of the paper reviewed here – noted that mono-silicon (the type used in this study) has the highest energy inputs. They review a number of studies that appear to show significantly better energy paybacks. We will probably look at that paper in a subsequent article, but I did notice a couple of interesting points.

Many of the studies referenced are papers from 15 years ago containing very limited production data (e.g. one value from one manufacturer). They comment on Knapp & Jester (2001), who show much higher values than other studies (including this one): “The results of both embodied energy and EBPT are very high, which deviate from the previous research results too much.” However, Knapp & Jester appear to have been very thorough:

This is instead a chiefly empirical endeavor, utilizing measured energy use, actual utility bills, production data and complete bill of materials to determine process energy and raw materials requirements. The materials include both direct materials, which are part of the finished product such as silicon, glass and aluminum, and indirect materials, which are used in the process but do not end up in the product such as solvents, argon, or cutting wire, many of which turn out to be significant.

All data are based on gross inputs, fully accounting for all yield losses without requiring any yield assumptions. The best available estimates for embodied energy content for these materials are combined with materials use to determine the total embodied and process energy requirements for each major step of the process..

..Excluded from the analysis are (a) energy embodied in the equipment and the facility itself, (b) energy needed to transport goods to and from the facility, (c) energy used by employees in commuting to work, and (d) decommissioning and disposal or other end-of-life energy requirements.

Perhaps Knapp & Jester got much higher results because their data was more complete? Or perhaps because their data was wrong. I’m suspicious.. and, by the way, they didn’t include the cost of building the factory in their calculations.

A long time ago I worked in the semiconductor industry, and the cost of building new plants was a lot higher than the marginal cost of making wafers and chips. That was measured in $, not kWh, so I have no idea of the fixed vs marginal kWh cost of making the silicon for solar PV cells.


One other point to consider: the GHG emissions of solar panels all occur at the start, while the “recovered” GHG emissions from displaced conventional power accrue year by year.

Solar power is not a free lunch even though it looks like one. There appears to be a lot of focus on the subject so perhaps more definitive data in the near future will enable countries to measure their decarbonizing efforts with some accuracy. If governments giving subsidies for solar power are not getting independent audits of solar PV manufacturers they should be.

In case some readers think I’m trying to do a hatchet job on solar, I’m not.

I’m collecting and analyzing data and two things are crystal clear:

  • accurate data is not easily obtained and there may be a selection bias with inefficient manufacturers not providing data into these studies
  • the upfront “investment” in GHG emissions might result in a wonderful payback in reduction of long-term emissions, but change a few assumptions, especially putting solar panels into high-latitude energy-efficient countries, and it might turn out to be a very poor GHG investment


Environmental payback time analysis of a roof-mounted building-integrated photovoltaic (BIPV) system in Hong Kong, L. Lu, H.X. Yang, Applied Energy (2010)

Review on life cycle assessment of energy payback and greenhouse gas emission of solar photovoltaic systems, Jinqing Peng, Lin Lu & Hongxing Yang, Renewable and Sustainable Energy Reviews (2013)

Empirical investigation of energy payback time for photovoltaic modules, Knapp & Jester, Solar Energy (2001)


Note 1: I have no idea if it would be a lot higher. Many people are convinced that “next generation” battery technology will allow “stand-alone” solar PV. In this future scenario solar PV will not add intermittency to the grid and will, therefore, be amazing. Whether the economics mean this is 5 years away or 50 years away, a note to the enthusiasts: check the GHGs used in the production of these (future) batteries.

Note 2: The paper didn’t explicitly give the solar cell area. I calculated it from a few different numbers they gave and it appears to be 150 m², which gives an annual average surface solar radiation of 1770 kWh/m². Consulting a contour map of SE Asia shows that this value might be correct. For the purposes of the comparison it isn’t exactly critical.

Note 3: Putting 1 tonne of methane into the atmosphere causes a different (top of atmosphere) radiation change from 1 tonne of CO2. To make life simpler, given that CO2 is the primary anthropogenic GHG, all GHGs are converted into “equivalent CO2”.

This blog is about climate science.

I wanted to take a look at Renewable Energy because it’s interesting and related to climate science in an obvious way. Information from media sources confirms my belief that 99% of what is produced by the media is rehashed press releases from various organizations with very little fact checking. (Just a note for citizens alarmed by this statement – they are still the “go to source” for the weather, footage of disasters and partly-made-up stories about celebrities).

Regular readers of this blog know that the articles and discussion so far have only been about the science – what can be proven, what evidence exists, and so on. Questions about motives, about “things people might have done”, and so on, are not of interest in the climate discussion (not for this blog). There are much better blogs for that – with much larger readerships.

Here’s an extract from About this Blog:

Opinions are often interesting and sometimes entertaining. But what do we learn from opinions? It’s more useful to understand the science behind the subject. What is this particular theory built on? How long has the theory been “established”? What lines of evidence support this theory? What evidence would falsify this theory? What do opposing theories say?
Anything else?
This blog will try and stay away from guessing motives and insulting people because of how they vote or their religious beliefs. However, this doesn’t mean we won’t use satire now and again as it can make the day more interesting.

The same principles will apply for this discussion about renewables. Our focus will be on technical and commercial aspects of renewable energy, with a focus on evidence rather than figuring it out from “motive attribution”. And wishful thinking –  wonderful though it is for reducing personal stress – will be challenged.

As always, the moderator reserves the right to remove comments that don’t meet these painful requirements.

Here’s a claim about renewables from a recent media article:

By Bloomberg New Energy Finance’s most recent calculations a new wind farm in Australia would cost $74 a megawatt hour..

..”Wind is already the cheapest, and solar PV [photovoltaic panels] will be cheaper than gas in around two years, in 2017. We project that wind will continue to decline in cost, though at a more modest rate than solar. Solar will become the dominant source in the longer term.”

I couldn’t find any evidence in the article that verified the claim. Only that it came from Bloomberg New Energy Finance and was the opposite of a radio shock jock. Generally I favor my dogs’ opinions over opinionated media people (unless it is about the necessity of an infinite supply of Schmackos starting now, right now). But I have a skeptical mindset and not knowing the wonderful people at Bloomberg I have no idea whether their claim is rock-solid accurate data, or “wishful thinking to promote their products so they can make lots of money and retire early”.

Calculating the cost of anything like this is difficult. What is the basis of the cost calculation? I don’t know if the claim in BNEF’s calculation is “accurate” – but without context it is not such a useful number. The fact that BNEF might have some vested interest in a favorable comparison over coal and gas is just something I assume.

But, like with climate science, instead of discussing motives and political stances, we will just try and figure out how the numbers stack up. We won’t be pitting coal companies (=devils or angels depending on your political beliefs) against wind turbine producers (=devils or angels depending on your political beliefs) or against green activists (=devils or angels depending on your political beliefs).

Instead we will look for data – a crazy idea and I completely understand how very unpopular it is. Luckily, I’m sure I can help people struggling with the idea to find better websites on which to comment.

Calculating the Cost

I’ve read the details of a few business plans and I’m sure that most other business plans also have the same issue – change a few parameters (=”assumptions”, often “reasonable assumptions”) and the outlook goes from amazing riches to destitution and bankruptcy.

The cost per MWh of wind energy will depend on a few factors:

  • cost of buying a wind turbine
  • land acquisition/land rental costs
  • installation cost
  • grid connection costs
  • the “backup requirement” aka “capacity credit”
  • cost of capital
  • lifetime of equipment
  • maintenance costs
  • % utilization (output energy / nameplate capacity)

And of course, in any discussion about “the future”, favorable assumptions can be made about “the next generation”. Is the calculation of $74/MWh based on what was shipped 5 years ago and its actuals, or what is suggested for a turbine purchased next year?

If you want wind to look better than gas or coal – or the converse – there are enough variables to get the result you want. I’ll be amazed if you can’t change the relative costs by a factor of 5 by playing around with what appear to be reasonable assumptions.
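As a sketch of how these variables interact, here is a toy levelized-cost calculation. Every parameter is an assumed, hypothetical value; the point is how much the answer moves when any one of them changes:

```python
# Toy levelized cost of energy (LCOE) calculation for a wind turbine.
# All parameters are illustrative assumptions, not real quotes.
capex = 2_000_000          # $ installed cost per turbine (assumed)
annual_opex = 50_000       # $ maintenance per year (assumed)
nameplate_mw = 2.0
capacity_factor = 0.30     # % utilization: output energy / nameplate capacity
lifetime_years = 20
discount_rate = 0.08       # cost of capital (assumed)

annual_mwh = nameplate_mw * capacity_factor * 8760   # hours in a year

# Present value of lifetime costs and of lifetime energy delivered
pv_costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                       for t in range(1, lifetime_years + 1))
pv_energy = sum(annual_mwh / (1 + discount_rate) ** t
                for t in range(1, lifetime_years + 1))

lcoe = pv_costs / pv_energy
print(f"LCOE ≈ ${lcoe:.0f}/MWh")
```

Try 6% instead of 8% for the cost of capital, or 0.25 instead of 0.30 for the capacity factor, and the $/MWh figure moves by double-digit percentages, which is the point about “reasonable assumptions”.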

Perhaps the data is easy to obtain. I’m sure many readers have some or all of this data to hand.

Moore’s Law and Other Industries

Most people are familiar with the now legendary statement from the 1960s about semiconductor performance doubling every 18 months. This revolution is amazing. But it’s unusual.

There are a lot of economies of scale from mass production in a factory. But in most industries the limiting cases are reached fairly quickly, after which cost reductions of a few percent a year are great results – rather than producing the same product for 1% of what it cost just 10 years before. Semiconductors are the exception.

When a product is made from steel alloys, carbon fiber composites or similar materials we can’t expect Moore’s law to kick in. On the other hand, products that rely on a combination of software, electronic components and “traditional materials” and have been produced on small scales up until now can expect major cost reductions from amortizing costs (software, custom chips, tooling, etc) and general economies of scale (purchasing power, standardizing processes, etc).

In some industries, rapid growth actually causes cost increases. If you want an experienced team to provide project management, installation and commissioning services you might find that the boom in renewables is driving those costs up, not down.

A friend of mine working for a natural gas producer in Queensland, Australia recounted the story of the cost of building a dam a few years ago. Long story short, the internal estimates ranged from $2M to $7M, but when the tenders came in from general contractors the prices were $10M to $25M. The reason was a combination of:

  • escalating contractor costs (due to the boom)
  • compliance with new government environmental regulations
  • compliance with the customer’s many policies / OH&S requirements
  • the contractual risk due to all of the above, along with the significant proliferation of contract terms (i.e., will we get sued, have we taken on liabilities we don’t understand, etc)

The point being that industry insiders – i.e., the customer – with a strong vested interest in understanding current costs was out by a factor of more than three in a traditional enterprise. This kind of inaccuracy is unusual but it can happen when the industry landscape is changing quickly.

Even if you have signed a fixed-price contract with an EPC (engineering, procurement and construction) contractor, you can only be sure this is the minimum you will be paying.

The only point I’m making is that a lot of costs are unknown even by experienced people in the field. Companies like BNEF might make some assumptions but it’s a low stress exercise when someone else will be paying the actual bills.

Intermittency & Grid Operators

We will discuss this further in future articles. This is a key issue between renewables and fossil fuel / nuclear power stations. The traditional power stations can generate electricity when it is needed. Wind and solar – mainstays of the renewable revolution – generate electricity when the sun shines and the wind blows.

As a starting point for any discussion let’s assume that storing energy is massively uneconomic. While new developments might be available “around the corner”, storing energy is very expensive. The only real mechanism is pumped hydro schemes. Of course, we can discuss this.

Grid operators have a challenge – balance demand with supply (because storage capacity is virtually zero). Demand is variable and although there is some predictability, there are unexpected changes even in the short term.

The demand curve depends on the country. For example, the UK has peak demand in the winter evenings. Wealthy hotter countries have peak demand in the summer in the middle of the day (air-conditioning).

There are two important principles:

  • Grid operators already have to deal with intermittency because conventional power stations go off-line with planned outages and with unplanned, last minute, outages
  • Renewables have a “capacity credit” that is usually less than their expected output

The first is a simple one. An example is the Sizewell B nuclear power station in the UK supplying about 1GW [fixed] out of 80GW of total grid supply. From time to time it shuts down and the grid operator gets very little notice. So grid operators already have to deal with this. They use statistical calculations to ensure excess supply during normal operation, based on an acceptable “loss of load probability”. Total electricity demand is variable and supply is continually adjusted to match that demand. Of course, the scale of intermittency from large penetration of renewables may present challenges that are difficult to deal with by comparison with current intermittency.
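For readers unfamiliar with these statistical calculations, here is a toy version of a loss-of-load probability calculation: a fleet of identical plants with independent forced-outage rates. Both the fleet size and the outage rate below are made-up assumptions:

```python
# Toy loss-of-load probability (LOLP): enumerate every combination of
# plant availability and add up the probability of the combinations
# where available capacity falls short of demand. Numbers are assumed.
from itertools import product

plants = [(1.0, 0.05)] * 10   # ten 1 GW plants, each with a 5% outage rate
demand_gw = 8.5

lolp = 0.0
for states in product([0, 1], repeat=len(plants)):   # 1 = plant available
    prob = 1.0
    capacity = 0.0
    for (size_gw, outage_rate), up in zip(plants, states):
        prob *= (1 - outage_rate) if up else outage_rate
        capacity += size_gw * up
    if capacity < demand_gw:
        lolp += prob                                  # shortfall scenario
print(f"Loss of load probability: {lolp:.4f}")
```

Real reserve planning is far more elaborate (plant sizes differ, outages correlate, demand varies), but the structure – pick a tolerable LOLP, then carry enough excess supply to achieve it – is the one described above.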

The second is the difficult one. Here’s an example from a textbook edited by Godfrey Boyle, that’s actually a collection of articles on (mainly) UK renewables:



The essence of the calculation is a probabilistic one. At small penetration levels, the energy input from wind power displaces the need for energy generation from traditional sources. But as the percentage of wind power increases, the “potential down time” causes more problems – requiring more backup generation on standby. In the calculations above, wind going from 0.5 GW to 25 GW only saves 4 GW in conventional “capacity”. This is the meaning of capacity credit – adding 25 GW of wind power (under this simulation) provides a capacity credit of only 4 GW. So you can’t remove 25 GW of conventional from the grid, you can only remove 4 GW of conventional power.

Now the calculation of capacity credit depends on the specifics of the history of wind speeds in the region. Increasing the geographical spread of wind power generation produces better results, dependent on the lower correlation of wind speeds across larger regions. Different countries get different results.
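A toy Monte Carlo can illustrate the geographical-spread effect. The model below blends a shared “weather” component with an independent local component at each site; none of it is calibrated to real wind data, and the 5th-percentile output is only a crude proxy for firm capacity:

```python
# Toy Monte Carlo: lower correlation between wind sites makes the
# low-output tail of total generation less extreme, which is why
# geographic spread improves capacity credit. All parameters assumed.
import random

def firm_output(n_sites, correlation, trials=20_000, seed=1):
    """5th-percentile of the fleet-average capacity factor."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        common = rng.random()          # shared weather draw for all sites
        site_cfs = [correlation * common + (1 - correlation) * rng.random()
                    for _ in range(n_sites)]
        totals.append(sum(site_cfs) / n_sites)
    totals.sort()
    return totals[int(0.05 * trials)]  # output exceeded 95% of the time

print("clustered sites (high correlation):", round(firm_output(10, 0.9), 3))
print("dispersed sites (low correlation): ", round(firm_output(10, 0.2), 3))
```

The dispersed fleet has a much higher “almost always available” output than the clustered one, matching the point about lower correlation of wind speeds across larger regions.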

So there’s an additional cost with wind power that someone has to pay for – which increases along with the penetration of wind power. In the immediate future this might not be a problem because perhaps the capacity already exists and is just being put on standby. However, at some stage these older plants will be at end of life and conventional plants will need to be built to provide backup.

Many calculations exist of the estimated $/MWh of providing such backup. We will dig into those in future articles. My initial impression is that there are a lot of unknowns in the real cost of backup supply, because for much potential backup supply the lifetime / maintenance impact of frequent start-stops is unclear. A lot of this comes down to thermal shock – each thermal cycle costs $X (based on the plant being designed to handle so many thousand starts before a major overhaul is needed).

The Other Side of the Equation – Conventional Power

It will also be interesting to get some data around conventional power. Right now, the cost of displacing conventional power is new investment in renewables, but keeping conventional power is not free. Every existing station has a life and will one day need to be replaced (or demand will need to be reduced). It might be a deferred cost but it will still be a cost.

$ and GHG emissions

There is a cost to adding 1GW of wind power. There is a cost to adding 1GW of solar power. There is also a GHG cost – that is, building a solar panel or a wind turbine is not energy free and must be producing GHGs in the process. It would be interesting to get some data on this also.

Conclusion – Introduction

I wrote this article because finding real data is demanding and many websites focused on the topic are advocacy-based with minimal data. Their starting point is often the insane folly and/or mendacious intent of “the other side”. The approach we will take here is to gather and analyze data.. As if the future of the world was not at stake. As if it was not a headlong rush into lunacy to try and generate most energy from renewables.. As if it was not an unbelievable sin to continue to create electricity from fossil fuels..

This approach might allow us to form conclusions from the data rather than the reverse.

Let’s see how this approach goes.

I am hoping many current (and future) readers can contribute to the discussion – with data, uncertainties, clarifications.

I’m not expecting to be able to produce “a number” for windpower or solar power. I’m hopeful that with some research, analysis and critical questions we might be able to summarize some believable range of values for the different elements of building a renewable energy supply, and also quantify the uncertainties.

Most of what I will write in future articles I don’t yet know. Perhaps someone already has a website where this project is already complete, and in Part Two I will just point readers there..


Renewable Electricity and the Grid : The Challenge of Variability, Godfrey Boyle, Earthscan (2007)

In Part Nine – Data I – Ts vs OLR we looked at the monthly surface temperature (“skin temperature”) from NCAR vs OLR measured by CERES. The slope of the data was about 2 W/m² per 1K surface temperature change. Commentators pointed out that this was really the seasonal relationship – it probably didn’t indicate anything further.

In Part Ten we looked at anomaly data: first where monthly means were removed; and then where daily means were removed. Mostly the data appeared to be a big noisy scatter plot with no slope. The main reason that I could see for this lack of relationship was that anomaly data didn’t “keep going” in one direction for more than a few days. So it’s perhaps unreasonable to expect that we would find any relationship, given that most circulation changes take time.

We haven’t yet looked at regional versions of Ts vs OLR, the main reason is I can’t yet see what we can usefully plot. A large amount of heat is exported from the tropics to the poles and so without being able to itemize the amount of heat lost from a tropical region or the amount of heat gained by a mid-latitude or polar region, what could we deduce? One solution is to look at the whole globe in totality – which is what we have done.

In this article we’ll look at the mean global annual data. We only have CERES data for complete years from 2001 to 2013 (data wasn’t available to end of the 2014 when I downloaded it).

Here are the time-series plots for surface temperature and OLR:

Global annual Ts vs year & OLR vs year 2001-2013

Figure 1

Here is the scatter plot of the above data, along with the best-fit linear interpolation:

Global annual Ts vs OLR 2001-2013

Figure 2

The calculated slope is similar to the results we obtained from the monthly data (which probably showed the seasonal relationship). This is definitely the year-to-year data, and it also gives us a slope that indicates positive feedback. The correlation is not strong, as indicated by the R² value of 0.37, but it exists.

As explained in previous posts, a change of 3.6 W/m² per 1K is a “no feedback” relationship, where a uniform 1K change in surface & atmospheric temperature causes an OLR increase of 3.6 W/m² due to increased surface and atmospheric radiation – a greater increase in OLR would be negative feedback and a smaller increase would be positive feedback (e.g. see Part Eight with the plot of OLR changes vs latitude and height, which integrated globally gives 3.6 W/m²).
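Putting the logic of that paragraph in numbers (taking 2 W/m² per K as the approximate fitted slope from the data):

```python
# The feedback bookkeeping described above, using the post's numbers.
no_feedback_response = 3.6   # W/m² per K, from the Part Eight calculation
fitted_slope = 2.0           # W/m² per K, approximate slope of Figure 2

# OLR increasing by less than the no-feedback response implies that
# feedbacks are reducing the outgoing radiation, i.e. positive feedback.
feedback = no_feedback_response - fitted_slope
print(f"implied feedback: {feedback:.1f} W/m² per K (positive)")
```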

The problem of the “no feedback” calculation is perhaps a bit more complicated and I want to dig into this calculation at some stage.

I haven’t looked at whether the result is sensitive to the date of the start of year. Next, I want to look at the changes in humidity, especially upper tropospheric water vapor, which is a key area for radiative changes. This will be a bit of work, because AIRS data comes in big files (there is a lot of data).

[I was going to post this new article not long after the last article in the series, but felt I was missing something important and needed to think about it. Instead I’ve not had any new insights and am posting for comment.]

In Part Nine – Data I, we looked at the relationship between Ts (surface temperature) and OLR (outgoing longwave radiation), for reasons all explained there.

The relationship shown there appears to be primarily the seasonal relationship, which looks like a positive feedback due to the 2 W/m² per 1K temperature increase. What about the feedback on a different timescale from the seasonal relationship?

From the 2001-2013 data, here is the monthly mean and the daily mean for both Ts and OLR:

Monthly mean & daily mean for Ts & OLR

Figure 1

If we remove the monthly mean from the data, here are those same relationships (shown in the last article as anomalies from the overall 2001-2013 mean):

OLR vs Ts - NCAR -CERES-monthlymeansremoved

Figure 2 – Click to Expand

On a lag of 1 day there is a possible relationship with a low correlation – and the rest of the lags show no relationship at all.

Of course, we have created a problem with this new dataset – as the lag increases we are “jumping boundaries”. For example, on the 7-day lag all of the Ts data in the last week of April is being compared with the OLR data in the first week of May. With slowly rising temperatures, the last week of April will be “positive temperature data”, but the first week of May will be “negative OLR data”. So we expect 1/4 of our data to show the opposite relationship.

So we can show the data with the “monthly boundary jumps removed” – which means we can only show lags of say 1-14 days (with 3% – 50% of the data cut out); and we can also show the data as anomalies from the daily mean. Both have the potential to demonstrate the feedback on shorter timescales than the seasonal cycle.
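For concreteness, here is a sketch of the kind of bookkeeping involved: subtracting each calendar month’s mean, then keeping only lagged pairs that stay inside one month. The data structures are illustrative and not the actual processing code behind these figures:

```python
# Monthly-mean anomalies, plus lagged pairing that never crosses a
# month boundary. Inputs are dicts mapping datetime.date -> value;
# this is an illustrative sketch, not the code used for the figures.
from collections import defaultdict
from datetime import date, timedelta

def monthly_anomalies(series):
    """Subtract each calendar month's mean from every daily value."""
    sums, counts = defaultdict(float), defaultdict(int)
    for d, v in series.items():
        sums[(d.year, d.month)] += v
        counts[(d.year, d.month)] += 1
    return {d: v - sums[(d.year, d.month)] / counts[(d.year, d.month)]
            for d, v in series.items()}

def lagged_pairs(ts_anom, olr_anom, lag_days):
    """Pairs (Ts[t], OLR[t+lag]) that stay within a single month."""
    lag = timedelta(days=lag_days)
    return [(ts_anom[d], olr_anom[d + lag])
            for d in ts_anom
            if (d + lag) in olr_anom
            and (d.year, d.month) == ((d + lag).year, (d + lag).month)]
```

The month-boundary filter is what cuts out 3% of the data at a 1-day lag and up to 50% at a 14-day lag, since a lag of k days discards the last k days of every month.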

First, here is the data with daily means removed:

OLR vs Ts - NCAR -CERES-dailymeansremoved

Figure 3 – Click to Expand

Second, here is the data with the monthly means removed as in figure 2, but this time ensuring that no monthly boundaries are crossed (so some of the data is removed to ensure this):

OLR vs Ts - NCAR -CERES-monthlymeansremoved-noboundary

Figure 4 – Click to Expand

So basically this demonstrates no correlation between change in daily global OLR and change in daily global temperature on less than seasonal timescales. (Or “operator error” with the creation of my anomaly data). This is excluding (because we haven’t tested it here) the very short timescale of day to night change.

This was surprising at first sight.

That is, we see global Ts increasing on a given day but we can’t distinguish any corresponding change in global OLR from random changes, at least until we get to seasonal time periods? (See graph in last article).

Then the probable reason came into view. Remember that this is anomaly data (daily global temperature with monthly mean subtracted). This bar graph demonstrates that when we are looking at anomaly data, most of the changes in global Ts are reversed the next day, or usually within a few days:

Days temperature goes in same direction

Figure 5

This means that we are unlikely to see changes in Ts causing noticeable changes in OLR unless the climate response we are looking for (humidity and cloud changes) occurs within a day or two.
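The same reversal tendency shows up even in pure noise: if the anomaly series were just uncorrelated random values, consecutive day-to-day changes would keep the same sign only about a third of the time. A quick check on synthetic data (random numbers, not the NCAR series):

```python
# How often does the day-to-day change in an anomaly series keep the
# same sign the next day? Input here is synthetic white noise, used
# only to illustrate the reversal tendency, not the real Ts data.
import random

random.seed(0)
anoms = [random.gauss(0, 1) for _ in range(1000)]    # stand-in anomalies
changes = [b - a for a, b in zip(anoms, anoms[1:])]

same_sign = sum((c1 > 0) == (c2 > 0)
                for c1, c2 in zip(changes, changes[1:]))
fraction = same_sign / (len(changes) - 1)
print(f"day-to-day changes keeping the same sign: {fraction:.0%}")
```

So a series of anomalies that mostly reverse within a day or two is exactly what noise-dominated data looks like, which is consistent with not finding an OLR response except on longer timescales.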

That’s my preliminary thinking, looking at the data – i.e., we can’t expect to see much of a relationship, and we don’t see any relationship.

One further point – explained in much more detail in the (short) series Measuring Climate Sensitivity – is that of course changes in temperature are not caused by some mechanism that is independent of radiative forcing.

That is, our measurement problem is compounded by changes in temperature being first caused by fluctuations in radiative forcing (the radiation balance) and ocean heat changes and then we are measuring the “resulting” change in the radiation balance resulting from this temperature change:

Radiation balance & ocean heat balance => Temperature change => Radiation balance & ocean heat balance

So we can’t easily distinguish the net radiation change caused by temperature changes from the radiative contribution to the original temperature changes.

I look forward to readers’ comments.

Articles in the Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses – answering some questions about Part One

Part Two – some introductory ideas about water vapor including measurements

Part Three – effects of water vapor at different heights (non-linearity issues), problems of the 3d motion of air in the water vapor problem and some calculations over a few decades

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

Part Eight – Clear Sky Comparison of Models with ERBE and CERES – a paper from Chung et al (2010) showing clear sky OLR vs temperature vs models for a number of cases

Part Nine – Data I – Ts vs OLR – data from CERES on OLR compared with surface temperature from NCAR – and what we determine

Part Ten – Data II – Ts vs OLR – more on the data

