For some countries – cold, windy ones like England – wind power appears to offer the best opportunity for displacing GHG-emitting electricity generation. In most developed countries renewable electricity generation from hydro is “tapped out” – i.e., there is no opportunity for developing further hydroelectric power.

There’s a lot of confusion about wind power. Some of this we looked at briefly in earlier articles.

Nameplate & Actual

The nameplate capacity is not what anyone (involved in the project) is expecting to get out of it.

So if you buy “10 GW” of wind farms you aren’t expecting 10 GW x 8,760 hours (thanks to DeWitt Payne for updating me on how many hours there are in a year) = 87.6 TWh of annual electricity generation. Depending on the country, the location, the turbines and the turbine height you will get an “average utilization”. In the UK that might be something like 30%, or even a little higher. So for 10 GW of wind farms, everyone (involved in the project) is expecting something like 26 TWh of annual electricity generation (10 x 8,760 x 0.3 / 1,000).
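As a quick sanity check, here is that arithmetic in a few lines of Python (the 30% utilization is the assumption from above, not a measured figure):

```python
nameplate_gw = 10.0
utilization = 0.30            # assumed "average utilization" for the UK
hours_per_year = 8760

annual_twh = nameplate_gw * hours_per_year * utilization / 1000
average_gw = annual_twh * 1000 / hours_per_year

print(f"{annual_twh:.1f} TWh per year, i.e. {average_gw:.1f} GW on average")
# -> 26.3 TWh per year, i.e. 3.0 GW on average
```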

We could say that, on average, the wind farm will produce 3 GW of power. That’s just another way of writing 26 TWh annually. So 10 GW of nameplate wind power does not need “10 GW of backup” or “7 GW of backup”. Does it need “3 GW of backup”? Let’s look at capacity credit.

Just before we do, if you are new to renewables, whenever you see statements, press releases and discussions about “X MW of wind power being added” check whether it is nameplate power or actual expected power. Often it is secondarily described in terms of TWh or GWh – this is the actual energy expected over the year from the wind farm or project.

Capacity Credit

The capacity credit is the “credit” the operator gives you for providing “capacity” when it is most in demand. Operators have peaks and troughs in demand. There are lots of ways of looking at this; here is one example from Gross et al 2006, showing the time-of-day variation of demand in different seasons in the UK. We can see winter is the time of peak demand:

Figure 1 – From Gross et al 2006

If you have a nuclear power station it probably runs 90% of the time. Some of the off-line time is planned outages for maintenance, upgrades and replacement of various items. Some is unplanned outages, where the grid operator gets 10 minutes’ notice that “sorry, Sizewell B is going off line, can’t chat now, have a great day”, taking out over 1 GW of capacity. So the capacity credit for nuclear reflects the availability, and also the fact that the plant is “dispatchable” – apart from unplanned outages it will run when you want it to run.

The grid of each country (or region within a country) is a system. Because all of the generation within most of the UK is connected together, Sizewell B doesn’t need to be backed up with its own 1 GW of coal-fired power stations. All you need is sufficient excess capacity to cope with peak demand, given the likelihood of any given plant(s) going off line.

It’s a pool of resources to cope with:

  • a varying level of demand, and
  • a certain amount of outage from any given resource

Wind is “intermittent” (likewise for solar). So you can’t dispatch it when you need it. Everyone (involved in producing power, planning power, running the grid) knows this. Everyone (“”) knows that sometimes the wind turns off.

If you add lots of wind power – let’s say a realistic 3 GW of wind, from 10 GW of nameplate capacity – the capacity credit isn’t 90% of 3 GW like you get for a nuclear power station. It is a lot smaller. This reflects the fact that at times of peak demand there might be no wind power (or almost none). However, wind does have some capacity credit.

This is a statistical calculation – for the UK, the winter quarter is used to calculate capacity credit (because it is the time of maximum demand). The value depends on the wind penetration, that is, how much energy is expected from the wind over that period. For low penetrations of wind, say 500 MW, you get full capacity credit (capacity credit = 500 MW). For higher penetrations it changes. Let’s say wind power provides 20% of total demand. Total demand averages about 40 GW in the UK, so wind power would be producing an average of 8 GW. For significant penetrations of wind power you get a low percentage of the output as capacity credit. The value is calculated from the geographical spread and statistical considerations, and it might be 10–20% of the expected wind power. So 8 GW of output power (averaged over the year) gets 0.8–1.6 GW of “capacity credit”.

This means that when calculating how much aggregate supply is available, wind power gets a tick in the box for 0.8–1.6 GW (depending on the calculation of the credit). This is true even though there are times when the wind power is zero. How can it get a capacity credit above zero when sometimes its power is zero? Because it is a statistical availability calculation. How can Sizewell B get a capacity credit when sometimes it has an unplanned outage? We can’t absolutely rely on it either.
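For readers who like to see the statistics made concrete, here is a toy Monte Carlo sketch of a capacity credit calculation. Every number in it – unit sizes, availability, the demand and wind distributions – is invented for illustration; real studies use measured wind data and its correlations with demand, which generally push the credit lower:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # Monte Carlo samples of peak-demand hours

# Hypothetical system: 70 conventional units of 1 GW, each available 90% of the time
conv = (rng.random((N, 70)) < 0.90).sum(axis=1).astype(float)  # GW available

demand = rng.normal(55, 3, N)          # hypothetical winter peak demand (GW)
lolp_no_wind = np.mean(conv < demand)  # loss-of-load probability without wind

# Hypothetical 10 GW (nameplate) wind fleet, skewed output averaging ~3 GW
wind = 10 * rng.beta(1.2, 2.8, N)      # often low, occasionally high

lolp_wind = np.mean(conv + wind < demand)

# Capacity credit: the constant ("perfect") capacity giving the same
# reliability improvement as the whole wind fleet
for credit in np.arange(0.0, 10.0, 0.05):
    if np.mean(conv + credit < demand) <= lolp_wind:
        break

print(f"LOLP without wind: {lolp_no_wind:.4f}, with wind: {lolp_wind:.4f}")
print(f"Capacity credit of the 10 GW fleet: ~{credit:.1f} GW")
```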

The point – hopefully it is clear, sorry for laboring it – is that when the wind is zero, Sizewell B and another 60 GW of capacity are probably available. (If it’s not clear, please ask; I’m sure I can paint a picture with an appropriate graph or something.)

Low Capacity Credit Doesn’t Mean Low Benefit – And What We Do About Low Capacity Credit

Let’s say the capacity credit for wind was zero, just for the sake of argument. Even then, wind still has a benefit (it has a cost as well). Its benefit comes from the fact that the marginal cost of its energy is zero (neglecting O&M costs), and that the GHG emissions from all the energy produced are zero. It has displaced GHG-emitting electricity generation.

What we do about the low capacity credit is add – or retain – GHG-emitting conventional backup. The grid operator, or the market (depending on the country in question), has the responsibility/motivation to provide that backup. Running a conventional station less often, or keeping it running at part load rather than full load, reduces its efficiency.

Let’s say we produce 70 TWh of electricity from wind (20% of the UK electricity requirement of 350 TWh). Wonderful – we have displaced 70 TWh of GHG-emitting power. Except we haven’t, quite. We have kept some GHG-emitting power stations “warmed up” or “operational at part load”, and so we might have displaced only 65 TWh or 60 TWh (or some such value), because we ran the conventional generators less efficiently than before.
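As a toy illustration (the penalty fraction here is an assumption pulled out of the air, purely to show the shape of the calculation):

```python
wind_twh = 70.0
backup_penalty = 0.10   # assumed: extra fuel burned by warmed-up / part-loaded
                        # backup, as a fraction of the wind energy delivered

displaced_twh = wind_twh * (1 - backup_penalty)
print(f"{displaced_twh:.0f} TWh displaced")  # ~63 TWh, not the naive 70 TWh
```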

We will look at the numbers in a later article.

So wind has benefit even though it is not “dispatchable”, even though sometimes at peak demand it produces zero energy.

Statistics of Wind and Forecast Time Horizons

Let’s suppose that even though wind is not “dispatchable” we had a perfect forecast of wind speeds around the region for the next 12 months. This would mean we could predict the power from the wind turbines for every hour of the day for the next 365 days.

In this imaginary case, power plants could easily be scheduled to run at the right times to cover the lack of wind power. We could make sure that major plants did not have outages during periods of prolonged low wind speeds. The efficiency of our “backup” generation would be almost as good as before wind power was introduced. So if we produced 70 TWh of wind energy we would displace just about 70 TWh of conventional GHG-emitting generation. We would also probably need less excess capacity in the system, because one area of uncertainty had been removed.

Of course we don’t have that. But at the same time, our forecast horizon is not zero.

The unexpected variability of wind changes with the time horizon we are concerned about. To put it another way: if we are getting 1.5 GW from all of our wind farms right now, the chance of it dropping to 0 GW 10 minutes from now is very small. The chance of it being 0 GW 1 hour from now is quite small. But the chance of it being 0 GW in 4 hours might be quite a bit higher.

I hope readers are impressed with the definitive precision with which I nailed the actual probabilities there.

There are many dependencies – the location of the wind farms (the geographical spread), the actual country in question and the season and time of day under consideration.

We’ve all experienced the wind in a location dropping to nothing in an instant. But as you install more turbines over a wider area, the variance of total output over a given time period reduces. A few graphs from Boyle (2007) should illuminate the subject.
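The statistical reason can be sketched in a few lines. If hour-to-hour changes at individual farms are only partially correlated, averaging across many farms shrinks the fleet’s variance – but only down to the floor set by the correlated (weather-system-wide) component. The correlation value here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_farms, n_hours, rho = 50, 10_000, 0.3   # assumed inter-farm correlation

common = rng.normal(0, 1, n_hours)             # weather shared by all farms
local = rng.normal(0, 1, (n_farms, n_hours))   # farm-specific gusts and lulls
changes = np.sqrt(rho) * common + np.sqrt(1 - rho) * local

print("std of hourly change, single farm:  ", round(float(changes[0].std()), 2))
print("std of hourly change, fleet average:", round(float(changes.mean(axis=0).std()), 2))
# The fleet's std tends to sqrt(rho): the correlated part never averages away.
```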

Here is a comparison of 1 hour changes between a single wind farm and half of Denmark:

Figure 2 – From Boyle (2007)

Here is a time-series simulation of a given 1,000 MW capacity in one location (single farm) vs the same capacity spread across the UK:

Figure 3 – From Boyle (2007)

Here is an example from the actual output of the wind power network in Germany:

Figure 4 – From Boyle (2007)

At some stage I will dig out some more recent actuals. The author of that chapter comments:

Care should be taken in drawing parallels, however, between experiences in Germany and Denmark and the situation elsewhere, such as in the UK. Wind conditions over the whole British electricity supply system should be assumed to be different unless proved otherwise. Differences in latitude and longitude, the presence of oceans, as well as the area covered by the wind power generation industry make comparisons difficult. The British wind industry, for example, has a longer north–south footprint than in Denmark, while in Germany the wind farms have a strong east–west configuration.

 

Here is an example from Gross et al (2006) of variations across 1, 2 and 4 hours:

Figure 5 – From Gross et al 2006

Here’s another breakdown of how the UK wind output varies, this time as a probability distribution:

Figure 6 – From Boyle (2007)

In another paper on the UK, Strbac et al 2007:

Standard deviations of the change in wind output over 0.5hr and 4hr time horizons were found to be 1.4% and 9.3% of the total installed wind capacity, respectively. If, for example, the installed capacity of wind generation is 10 GW (given likely locations of wind generation), standard deviations of the change in wind generation outputs were estimated to be 140 MW and 930 MW over the 0.5-h and 4-h time horizons, respectively.

What this means for a grid operator is that predictability changes with the time horizon. This matters because their job is to match supply and demand: if the wind is going to be high, fewer conventional stations need to be “warmed up”; if the wind is going to be low, more are needed. But if we knew nothing in advance – that is, if we could get anything between 0 GW and 10 GW with just 30 minutes’ notice – it would present a much bigger problem.

Closing the Gate, Spinning Reserves and Frequency

The grid operator has to match supply and demand (see note for an extended extract on how this works).

Demand varies, but must be met – except for some (typically) larger industrial customers who have agreed contracts to turn off their plant under certain conditions, such as when demand is high.

The grid operator has a demand forecast, based on things like “reviewing the past”, and enters into contracts for the hour ahead for supply. This is the case in the UK. Other countries have different rules and time periods, but the same principles apply: the grid operator “closes the gate”. To me this is not an intuitive term, because the operator still holds contracts for flexible supply and reserves – in case demand is above what was expected, or contracted plant goes offline. Gate closure simply means that the contract position is fixed for the next time period.

However, the actual problem is to meet demand, and to do this flexible plant is kept up and running, part loaded. Some load matching is done automatically, via frequency. If you increase the load on the system the frequency starts to fall. Reserve plant increases its output automatically as the frequency falls (and the converse). This is how very short term supply-demand matching takes place.
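A minimal sketch of that mechanism – a standard “droop” characteristic, where a part-loaded unit raises output in proportion to the frequency dip. The droop percentage and unit rating are illustrative, not UK-specific values:

```python
NOMINAL_HZ = 50.0
DROOP = 0.04         # 4% droop: a 4% frequency fall calls for 100% of rating
RATING_MW = 500.0    # one part-loaded reserve unit

def primary_response_mw(f_hz: float) -> float:
    """Extra output requested from the unit at system frequency f_hz."""
    dip = (NOMINAL_HZ - f_hz) / NOMINAL_HZ
    return max(0.0, min(RATING_MW, RATING_MW * dip / DROOP))

print(primary_response_mw(49.9))   # 0.2% dip -> 5% of rating = 25.0 MW
```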

So the uncertainty about wind output over the next hour is the key for the UK grid operator, and a key factor in the changing cost of reserves as wind power penetration increases. If gate closure were for the next 12 hours, it should be clear that the cost to the grid operator of matching supply and demand would increase – given that the uncertainty about wind is higher the longer the time period in question.

Whether one-hour or 12-hour gate closure makes a huge difference to the overall cost of supply is likely a very complicated question, and not one I expect we can answer easily, or at all. The UK market mechanism is built around the 1-hour gate closure, and suppliers have all created pricing models based on this.

Grid Stability – SNSP and Fault Ride-Through

System Non-Synchronous Penetration (SNSP) and fault ride-through capability are important for wind power. Basically wind power has different characteristics from existing conventional plant and has the potential to bring the grid down. We will look at the important question of what wind power does to the stability of the grid in a subsequent article.

Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases – Brief simplified discussion of Fault ride-through and System Non-Synchronous Penetration (SNSP)

References

Renewable Electricity and the Grid: The Challenge of Variability, Godfrey Boyle, Earthscan (2007) – textbook

The Costs and Impacts of Intermittency: An assessment of the evidence on the costs and impacts of intermittent generation on the British electricity network, Gross et al, UK Energy Research Centre (2006) – free research paper

Impact of wind generation on the operation and development of the UK electricity systems, Goran Strbac, Anser Shakoor, Mary Black, Danny Pudjianto & Thomas Bopp, Electric Power Systems Research (2007)

Notes

Extract from Gross et al 2006 explaining the UK balancing in a little detail – the whole document is free and well-worth spending the time to read:

The supply of electricity is unlike the supply of other goods. Electricity cannot be readily stored in large amounts and so the supply system relies on exact second-by-second matching of the power generation to the power consumption. Some demand falls into a special category and can be manipulated by being reduced or moved in time.

Most demand, and virtually all domestic demand, expects to be met at all times.

It is the supply that is adjusted to maintain the balance between supply and demand in a process known as system balancing.

There are several aspects of system balancing. In the UK system, contracts will be placed between suppliers and customers (with the electricity wholesalers buying for small customers on the basis of predicted demand) for selling half hour blocks of generation to matching blocks of consumption. These contracts can be long standing or spot contracts.

An hour ahead of time these contract positions must be notified to the system operator which in Great Britain is National Grid Electricity Transmission Limited. This hour-ahead point (some countries use as much as twenty-four hour ahead) is known as gate closure.

At gate closure the two-sided market of suppliers and consumers ceases. (National Grid becomes the only purchaser of generation capability after gate closure and its purpose in doing so is to ensure secure operation of the system.) What actually happens when the time comes to supply the contracted power will be somewhat different to the contracted positions declared at gate closure. Generators that over or under supply will be obliged to make good the difference at the end of the half hour period by selling or buying at the system sell price or system buy price. Similar rules apply to customers who under or over consume.

This is known as the balancing mechanism and the charges as balancing system charges. This resolves the contractual issues of being out-of-balance but not the technical problems.

If more power is consumed than generated then all of the generators (which are synchronised such that they all spin at the same speed) will begin to slow down. Similarly, if the generated power exceeds consumption then the speed will increase. The generator speeds are related to the system frequency. Although the system is described as operating at 50 Hz, in reality it operates in a narrow range of frequency centred on 50 Hz. It is National Grid’s responsibility to maintain this frequency using “primary response” plant (defined below). This plant will increase or decrease its power output so that supply follows demand and the frequency remains in its allowed band. The cost of running the primary response plant can be recovered from the balancing charges levied on those demand or supply customers who did not exactly meet their contracted positions. It is possible that a generator or load meets its contract position by consuming the right amount of energy over the half hour period but within that period its power varied about the correct average value. Thus the contract is satisfied but the technical issue of second-by-second system balancing remains..

..Operating reserve is generation capability that is put in place following gate closure to ensure that differences in generation and consumption can be corrected. The task falls first to primary response.

This is largely made up of generating plant that is able to run at much less than its rated power and is able to very quickly increase or decrease its power generation in response to changes in system frequency. Small differences between predicted and actual demand are presently the main factor that requires the provision of primary response. There can also be very large but infrequent factors that need primary response such as a fault at a large power station suddenly removing some generation or an unpredicted event on TV changing domestic consumption patterns.

The primary response plant will respond to these large events but will not then be in a position to respond to another event unless the secondary response plant comes in to deal with the first problem and allow the primary response plant to resume its normal condition of readiness. Primary response is a mixture of measures. Some generating plant can be configured to automatically respond to changes in frequency. In addition some loads naturally respond to frequency and other loads can be disconnected (shed) according to prior agreement with the customers concerned in response to frequency changes.

Secondary response is normally instructed in what actions to take by the system operator and will have been contracted ahead by the system operator. The secondary reserve might be formed of open-cycle gas-turbine power stations that can start and synchronise to the system in minutes. In the past in the UK and presently in other parts of the world, the term spinning reserve has been used to describe a generator that is spinning and ready at very short notice to contribute power to the system. Spinning reserve is one example of what in this report is called primary response. Primary response also includes the demand side actions noted in discussing system frequency..

In Part I we had a brief look at the question of intermittency – renewable energy is mostly not “dispatchable”, that is, you can’t choose when it is available. Sometimes wind energy is there at the right time, but sometimes when energy demand is the highest, wind energy is not available.

The statistical availability depends on the renewable source and the country using it. For example, solar is a pretty bad solution for England where the sun is a marvel to behold on those few blessed days it comes out (we all still remember 1976 when it was more than one day in a row), but not such a bad solution in Texas or Arizona where the peak solar output often arrives on days when peak electricity demand hits – hot summer days when everyone turns on their air-conditioning.

The question of how often the renewable source is available is an important one, but it is a statistical question.

Lots of confusion surrounds the topic. A brief summary of reality:

  1. The wind does always blow “somewhere”, but if we consider places connected to the grid of the country in question the wind will often not be blowing anywhere, or if it is “blowing” the output of the wind turbines will be a fraction of what is needed. The same applies to solar. (We will look at details of the statistics in later articles).
  2. The fact that at some times of peak demand there will be little or no wind or solar power doesn’t mean it provides no benefit – you simply need to “back up” the wind / solar with “dispatchable” plant, i.e. currently conventional plant. If you are running on wind “some of the time” you are displacing a conventional plant and saving GHG emissions, even if at “other times” you are running on conventional power. A wind farm doesn’t need “a dedicated backup” – that is the wrong way to think about it; instead there needs to be sufficient “dispatchable” resource somewhere in the grid, available for use when intermittent sources are not running.
  3. The costs and benefits are the key and need to be calculated.

However, the problem of intermittency depends on many factors including the penetration of renewables. That is, if you produce 1% of the region’s electricity from renewables the intermittency problem is insignificant. If you produce 20% it is significant and needs attention. If you produce 40% from renewables you might have a difficult problem. (We’ll have a look at Denmark at some stage).

Remember (or learn) that grid operators already have to deal with intermittency – power plants have planned and, even worse, unplanned outages. Demand moves around, sometimes in unexpected ways. Grid operators have to match supply and demand, otherwise the outcome is a bad one. So – to some extent – they already have to deal with this conundrum.

What do grid operators think about the problem of integrating intermittent renewables, i.e., wind and solar into the grid? It’s always instructive to get the perspectives of people who do the actual work – in this case, of balancing supply and demand every day.

Here’s an interesting (free) paper: The intermittency of wind, solar, and renewable electricity generators: Technical barrier or rhetorical excuse? Benjamin K. Sovacool. As always I recommend reading the paper for yourself. Here is the abstract:

A consensus has long existed within the electric utility sector of the United States that renewable electricity generators such as wind and solar are unreliable and intermittent to a degree that they will never be able to contribute significantly to electric utility supply or provide baseload power. This paper asks three interconnected questions:

  1. What do energy experts really think about renewables in the United States?
  2. To what degree are conventional baseload units reliable?
  3. Is intermittency a justifiable reason to reject renewable electricity resources?

To provide at least a few answers, the author conducted 62 formal, semi-structured interviews at 45 different institutions including electric utilities, regulatory agencies, interest groups, energy systems manufacturers, nonprofit organizations, energy consulting firms, universities, national laboratories, and state institutions in the United States.

In addition, an extensive literature review of government reports, technical briefs, and journal articles was conducted to understand how other countries have dealt with (or failed to deal with) the intermittent nature of renewable resources around the world. It was concluded that the intermittency of renewables can be predicted, managed, and mitigated, and that the current technical barriers are mainly due to the social, political, and practical inertia of the traditional electricity generation system.

Many comments and opinions from grid operators are provided in this interesting paper. Here is one from California:

Some system operators state that the intermittence of some renewable technologies greatly complicates forecasting. David Hawkins of the California Independent Systems Operator (ISO) notes that:

Wind, for instance, can be forecasted and has predictable patterns during some periods of the year. California uses wind as an energy resource but it has a low capacity factor for meeting summer peak-loads. The total summer peak-load is 45,000 MW of load, but in January daily peak-loads are 29,000 MW, meaning that 16,000 MW of our system load is weather sensitive. In the winter and spring months, big storms come into California which creates dramatic changes in wind. We have seen ramps as large as 800 MW of wind energy increases in 30 min, which can be quite challenging“.

..A report from the California ISO found that relying on wind energy excessively complicated each of the five types of forecasts. As the study concluded, ‘‘although wind generator output can be forecast a day in advance, forecast errors of 20–50% are not uncommon’’

And a little later:

For instance, California Energy Commissioner Arthur Rosenfeld comments that:

Germany had to build a huge reserve margin (close to 50 percent) to back up its wind. People show lots of pictures of wind turbines in Germany, yet you never see the standby power plants in the picture. This is precisely why utilities fear wind: the cost per kWh of wind on the grid looks good only without the provision of large margins of standby power“.

Thomas Grahame, a senior researcher at the U.S. Department of Energy’s Office of Fossil Fuels, comments that:

‘‘when intermittent sources become a substantial part of the electricity generated in a region, the ability to integrate the resource into the grid becomes considerably more complex and expensive. It might require the use of electricity storage technologies, which will add to cost. Additionally, new transmission lines will also be needed to bring the new power to market. Both of these add to the cost’’

The author looks at issues surrounding conventional unplanned outages, at the risks and costs involved in the long cycle of building a new plant plus getting it online – versus the rapid deployment opportunities with wind and solar.

I’m aware of various studies that show that up to 20% wind is manageable on a grid, but that above that level issues may arise (e.g. Gross et al., 2006). There are, of course, large numbers of studies with many different findings – my recommendation for placing any study in context is to first ask “what percentage of renewable penetration was this study considering?” (There are many other questions as well – change the circumstances and assumptions and your answers are different.)

The author of this paper is more convinced that any issues are minor and the evidence all points in one direction:

Perhaps incongruously, no less than nine studies show that the variability of renewables becomes easier to manage the more they are deployed (not the other way around, as some utilities suggest). In one study conducted by the Imperial College of London, researchers assessed the impact that large penetration rates (i.e., above 20 percent) of renewable energy would have on the power system in the United Kingdom. The study found that the benefits of integrating renewables would far exceed their costs, and that ‘‘intermittent generation need not compromise electricity system reliability at any level of penetration foreseeable in Britain over the next 20 years.’’ Let me repeat this conclusion for emphasis: renewable energy technologies can be integrated at any level of foreseeable penetration without compromising grid stability or system reliability.

Unfortunately, there was no reference provided for this.

Claiming that the variability of renewable energy technologies means that the costs of managing them are too great has no factual basis in light of the operating experience of renewables in Denmark, Germany, the United Kingdom, Canada, and a host of renewable energy sites in the United States.

As I commented earlier, I recommend that readers interested in the subject read the whole paper instead of just my extracts. It’s an interesting and easy read.

I can’t agree that the author has conclusively, or even tentatively, demonstrated that wind & solar (intermittent renewables) can be integrated into a grid to any arbitrary penetration level.

In fact most of the evidence cited in his paper is at penetration levels of 20% or less. Germany is cited because the country “is seeking to generate 100 percent of its electricity from renewables by 2030”, which doesn’t quite stand as evidence (and it would be uncharitable to comment on the current coal-fired power station boom in Germany). Denmark I would like to look at in a later article – is it a special case, or has it demonstrated the naysayers all to be wrong? We will see.

The penetration level is the key, combined with the technology and the country in question. It’s a statistical question. Conceptually it is not very difficult: analyze meteorological data and/or actuals for wind and solar power generation in the region in question over a sufficiently long time, and produce data in the format required for different penetration levels (a toy sketch of such statistics follows the list):

  • minimums at times of peak demand
  • how long output from X MW of capacity stays below Y% of that capacity, how often this occurs, and how it correlates with peak demand times
  • ..and so on
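Here is a toy sketch of producing such statistics, using a synthetic hourly series in place of the real meteorological data (the distributions are invented; only the shape of the analysis matters):

```python
import numpy as np

rng = np.random.default_rng(2)
hours, capacity_mw = 8760, 10_000

output = capacity_mw * rng.beta(1.2, 2.8, hours)    # synthetic hourly wind MW
peak_hours = rng.random(hours) < 0.05               # flag: top-demand hours

threshold = 0.10 * capacity_mw
below = output < threshold

print("min output during peak-demand hours:", round(float(output[peak_hours].min())))
print("fraction of hours below 10% of capacity:", round(float(below.mean()), 3))

# longest consecutive spell below the threshold
longest = current = 0
for b in below:
    current = current + 1 if b else 0
    longest = max(longest, current)
print("longest spell below 10% of capacity:", longest, "hours")
```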

This does mean – it should be obvious – that each region and country will get different answers with different technologies. Linking together different regions with sufficient redundant transmission capacity is not trivial; neither is “adding sufficient storage”.

If the solution to the problem is an un-costed redundant transmission line, we need to ask how much it will cost. The answer might be surprisingly high to many readers. If the solution to the problem is “next-generation storage” then the question is “will your solution work without this next-generation storage and what specification & cost are required?”

Of course, I would like to suggest another perspective to keep in mind with the discussion on renewables: the sunk cost of the existing power generation, transmission and distribution network is extremely high, and more than a century of incremental improvement and dispersion of knowledge and practical experience has led us to today – with obviously much lower marginal costs of using and expanding conventional power. But, we are where we are. What I hope to shed some light on in this series is what renewables actually cost, what benefits they bring and what practical difficulties exist in expanding renewables.

The author concludes:

Conventional power systems suffer variability and reliability problems, just to a different degree than renewables. Conventional power plants operating on coal, natural gas, and uranium are subject to an immense amount of variability related to construction costs, short-term supply and demand imbalances, long term supply and demand fluctuations, growing volatility in the price of fuels, and unplanned outages.

Contrary to proclamations stating otherwise, the more renewables that get deployed, the more – not less – stable the system becomes. Wind- and solar-produced power is very effective when used in large numbers in geographically spaced locations (so the law of averages yields a relative constant supply).

The issue, therefore, is not one of variability or intermittency per se, but how such variability and intermittency can best be managed, predicted, and mitigated.

Given the preponderance of evidence referenced here in favor of integrating renewables, utility and operator objections to them may be less about technical limitations and more about tradition, familiarity, and arranging social and political order.

The work and culture of people employed in the electricity industry promote ‘‘business as usual’’ and tend to culminate in dedicated constituencies that may resist change.

Managers of the system obviously prefer to maintain their domain and, while they may seek increased efficiencies and profits, they do not want to see the introduction of new and disruptive ‘‘radical’’ technologies that may reduce their control over the system.

In essence, the current ‘‘technical’’ barriers to large-scale integration of wind, solar, and other renewables may not be technical at all, and more about the social, political, and practical inertia of the traditional electricity generation system.

I’ve never met a grid operator, but I have worked with many people in technical disciplines in a variety of fields – operations, production, maintenance, technical support, engineering and design – including critical infrastructure: process plants, energy and telecommunications networks, both private and municipal. You get a mix of personality types. Faced with a new challenge, some relish the opportunity (more skills, more employability, promotion and pay opportunities, just the chance to learn and do something new). Others are reluctant and resist.

The author of the paper didn’t have so many doubts about this subject – other studies have concluded it will all work fine so the current grid operators are trapped in the past.

If I asked lots of people in the field doing the actual job about the technical feasibility of a new idea, and they unanimously said it would be a real problem, I would be concerned.

I would be interested to know why grid operators in the US that the author interviewed are resistant to intermittent renewables. Perhaps they understand the problem better than the author. Perhaps they don’t. It’s hard to know. The evidence Sovacool brings forward includes the fact that grid operators currently have to deal with unplanned outages. I suspect they are aware of this problem more keenly than Sovacool because it is their current challenge.

Perhaps US grid operators think there are no real technical challenges but expect that no one will pay for the standby generation required. Or they have an idea what the system upgrade costs are and just expect that this is a cost too high to bear. It’s not clear from this paper. I did peruse his PhD thesis that this paper was drawn from but didn’t get a lot more enlightenment.

However, it’s an interesting paper to get some background on the US grid.

References

The intermittency of wind, solar, and renewable electricity generators: Technical barrier or rhetorical excuse? Benjamin K. Sovacool, Utilities Policy (2009)

[Later note, Sep 2015, it’s clear – as can be seen in the later comments that follow the article – there is a difference between a number of papers that cannot be explained by ‘improved efficiencies in manufacturing’ or ‘improved solar-electricity conversion efficiencies’. The discrepancies are literally one group making a large mistake and taking “energy input” to be electricity input rather than fuel to put into power stations to create electricity – or the reverse. I suspect that the paper I highlight below is making the mistake, in which case this article is out by a factor of 3 against solar being a free lunch. In due course, I will try to fight through all the papers again to get to the bottom of it. I also have not been able to confirm that any of the papers really account for building all the new factories that manufacture the solar panels (instead perhaps they are just considering the marginal electricity use to make each solar cell).]

There are lots of studies of the energy and GHG input into production of solar panels. I’ve read some and wanted to highlight one to look at some of the uncertainties.

Lu & Yang (2010) looked at the energy required to make, transport and install a (nominal) 22 kW solar PV system on a roof in Hong Kong – and what it produced in return. Here is the specification of the module (the system had 125 modules):

From Lu & Yang (2010) – module specification

For the system’s energy efficiency, the average energy efficiency of a Sunny Boy inverter is assumed to be 94%, and other system losses are assumed to be 5%.

This is a grid-connected solar panel – that is, it is a solar panel with an inverter to produce the consumer a.c. voltage, and excess power is fed into the grid. If it had the expensive option of battery storage so it was self-contained, the energy input (to manufacture) would be higher (note 1).

For stand-alone (non-rooftop) systems the energy used in producing the structure becomes greater.

Here’s the pie chart of the estimated energy consumed in different elements of the process:

From Lu & Yang (2010)

A big part of the energy is consumed in producing the silicon, with a not insignificant amount for slicing it into wafers. BOS = “balance of system” and we see this is also important. This is the mechanical structure and the inverter, cabling, etc.

The total energy per square meter:

  • silicon purification and processing – 666 kWh
  • slicing process – 120 kWh
  • fabricating PV modules – 190 kWh
  • rooftop supporting structure – 200 kWh
  • production of inverters – 33 kWh
  • other energy used in system operation and maintenance, electronic components, cables and miscellaneous – 125 kWh

Transportation energy use turned out pretty small as might be expected (and is ignored in the total).

Therefore, the total energy consumed in producing and installing the 22 kW grid-connected PV system is 206,000 kWh, with 29% from BOS, and 71% from PV modules.

What does it produce? Unfortunately the data for the period is calculated, not measured, due to issues with the building management system (the plan was to measure the electrical production; it appears only some data points were gathered).

Now there are a few points that have an impact on solar energy production. This isn’t comprehensive and is not from their paper:

  • Solar cells’ rated values are taken at 25°C, but when sunlight is on a solar cell, i.e., when it’s working, it can be running at a temperature of up to 50°C. The loss due to temperature is maybe 12–15% (I am not clear how accurate this number is).
  • Degradation per year is between 0.5% and 1% depending on the type of silicon used (I don’t know how reliable these numbers are 15 years out).
  • Dust reduces energy production. It’s kind of obvious, but unless someone is out there washing the panels on a regular basis you have some extra, unaccounted losses.
  • Inverter quality.

Obviously we need to calculate what the output will be. Most locations, and Hong Kong is no exception, have a pretty well-known solar radiation (W/m²) at the surface. The angle of the solar cells has a very significant impact. This installation was at 22.5° – close to the best angle of 30° for maximizing solar absorption.

Lu & Yang calculate:

For the 22 kW roof-mounted PV system, facing south with a tilted angle of 22.5, the annual solar radiation received by the PV array is 266,174 kWh using the weather data from 1996 to 2000, and the annual energy output (AC electricity) is 28,154 kWh. The average efficiency of the PV modules on an annual basis is 10.6%, and the rated standard efficiency of the PV modules from manufacturer is 13.3%. The difference can be partly due to the actual higher cell operating temperature.

The energy output of the PV system could be significantly affected by the orientations of the PV modules. Therefore, different orientations of PV arrays and the corresponding annual energy output are investigated for a similar size PV system in Hong Kong, as given in Table 3. Obviously, for the same size PV system, the energy output could be totally different if the PV modules are installed with different orientations or inclined angles. If the 22 kW PV system is installed on vertical south-facing facade, the system power output is decreased by 45.1% compared that of the case study.

So the energy used will be returned in approximately 7.3 years.

Energy in = 206 MWh. Energy out = 28 MWh per year.

Location, Location

Let’s say we put that same array on a rooftop in Germany, the poster child for solar takeup. The annual solar radiation received by the PV array would be about 1,000 kWh per m² – about 60% of the value in Hong Kong (note 2).

Energy in = 206 MWh. Energy out in Germany = 15.8 MWh per year (13 years payback).

I did a quick calculation using 13.3% module efficiency (rated performance at 25°C), a 15% loss due to the high temperature of the module in direct sunlight (when it is producing most of its electricity), an inverter & cabling efficiency of 90%, and a 0.5% loss of solar efficiency per year. Imagine no losses from dust. Here is the year-by-year production – assuming 1,000 kWh/m² of annual solar radiation and 150 m² of PV cells:

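(The original post showed the result as a spreadsheet; the sketch below reproduces the same arithmetic from the assumptions just listed.)

```python
area_m2 = 150.0
radiation_kwh_m2 = 1000.0      # assumed annual solar radiation, Germany-like site
module_eff = 0.133             # rated efficiency at 25 °C
temp_loss = 0.15               # hot-module derating
inverter_eff = 0.90            # inverter & cabling
degradation = 0.005            # 0.5% output loss per year
energy_in_kwh = 206_000        # embodied energy, from Lu & Yang (2010)

cumulative, year = 0.0, 0
while cumulative < energy_in_kwh:
    year += 1
    out = (area_m2 * radiation_kwh_m2 * module_eff
           * (1 - temp_loss) * inverter_eff * (1 - degradation) ** (year - 1))
    cumulative += out
    print(f"year {year:2d}: {out/1000:5.1f} MWh, cumulative {cumulative/1000:6.1f} MWh")
# -> cumulative output passes 206 MWh during year 14
```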

Here we get to energy payback at the end of year 14.

I’m not sure if anyone has done a survey of the angle of solar panels placed on residential rooftops, but if the angle is 10° off its optimum value we will see, very roughly, something towards a 10% loss in output. Add in some losses for dust (pop quiz – how many people have seen residents cleaning their solar panels on the weekend?). What’s the real long-term efficiency of a typical economical consumer solar inverter? It’s easy to see the energy payback moving around significantly in real life.

Efficiency Units – g CO2e / kWh and Miles per Gallon

When considering the GHG production in generating electricity, there is a conventional unit – the amount of CO2 equivalent per unit of electricity produced. This is usually grams of CO2 equivalent (note 3) per kWh (a kilowatt-hour is 3.6 MJ, i.e., 1,000 J per second for 3,600 seconds).

This is a completely useless unit to quote for solar power.

Imagine, if you will, the old school (new school and old school in the US) measurement of car efficiency – miles per gallon. You buy a Ford Taurus in San Diego, California and it gets you 28 miles per gallon. You move to Portland, Maine and now it’s doing 19 miles per gallon. It’s the exact same car. Move back to San Diego and it gets 28 miles per gallon again.

You would conclude that the efficiency metric was designed by ..

I’m pretty sure my WiFi router uses just about the same energy per GBit of data regardless of whether I move to Germany, or go and live at the equator. And equally, even though it is probably designed to sit flat, if I put it on its side it will still have the same energy efficiency to within a few percent. (Otherwise energy per GBit would not be a useful efficiency metric).

This is not the case with solar panels.

With solar panels the metric you want to know is how much energy was consumed in making it and where in the world most of the production took place (especially the silicon process). Once you have that data you can consider where in the world this technology will sit, at what angle, the efficiency of the inverter that is connected and how much dust accumulates on those beautiful looking panels. And from that data you can work out the energy efficiency.

And from knowing where in the world it was produced you can work out, very approximately (especially if it was in China) how much GHGs were produced in making your panel. Although I wonder about that last point..

The key point on efficiency in case it’s not obvious (apologies for laboring the point):

  • the solar panel cost = X kWh of electricity to make – where X is a fixed amount (but hard to figure out)
  • the solar panel return = Y kWh per year of electricity – where Y is completely dependent on location and installed angle (but much easier to figure out)

The payback can never be expressed as g CO2e/kWh without stating the final location. And the GHG reduction can never be expressed without stating the manufacturing location and the final location.

Moving the Coal-Fired Power Station

Now let’s consider that all energy is not created equally.

Let’s suppose that instead of the solar panel being produced in an energy-efficient country like Switzerland, it’s produced in China. I can find data on electricity production and on GHG emissions, but China also creates massive GHG emissions from things like cement production, so I can’t calculate the GHG intensity of its electricity production. And Chinese statistics have more question marks than those from some other places in the world. Maybe one of our readers can provide this data?

Let’s say a GHG-conscious country is turning off efficient (“efficient” from a conventional fossil-fuel perspective) gas-fired power stations and promoting solar energy into the grid. And the solar panels are produced in China.

Now, while the energy payback stays the same, the GHG payback might move to the 20-year mark or beyond – because each kWh of “cost” came from coal-fired power stations while each kWh of return displaced energy from gas-fired power stations. Consider the converse: if we have solar panels made in a GHG-efficient country and shipped to, say, Arizona (lots of sun) to displace coal-fired power, it is a much better equation. (I have no idea if Arizona gets energy from coal, but last time I was there it was very sunny.)
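To make the argument concrete, here is a hedged sketch with placeholder carbon intensities (real grid figures vary widely by country and year, which is exactly the point):

```python
embodied_kwh = 206_000        # electricity to make + install the system (as above)
annual_out_kwh = 15_800       # Germany-like annual yield (as above)

g_per_kwh_manufacture = 900.0  # assumed coal-heavy grid where the panel is made
g_per_kwh_displaced = 400.0    # assumed gas-fired generation it displaces

ghg_cost_t = embodied_kwh * g_per_kwh_manufacture / 1e6      # tonnes CO2e up front
ghg_saved_t = annual_out_kwh * g_per_kwh_displaced / 1e6     # tonnes CO2e per year

print(f"GHG payback: {ghg_cost_t / ghg_saved_t:.0f} years")  # ~29 years here
```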

But if we ship solar panels from China to France to displace nuclear energy, I’m certain we are running a negative GHG balance.

Putting solar panels in high latitude countries and not considering the country of origin might look nice – and it certainly moves the GHG emissions off your country’s balance sheet – but it might not be as wonderful as many people believe.

It’s definitely not free.

Other Data Points

How much energy is consumed in producing the necessary parts?

This is proprietary data for many companies.

Those very large forward-thinking companies that might end up losing business if important lobby groups took exception to their business practices, or if a major government black-listed them, have wonderful transparency. A decade or so ago I was taken on a tour through one of the factories of a major pump company in Sweden. I have to say it was quite an experience. The factory workers volunteer to take the continual stream of overseas visitors on the tour and all seem passionate about many aspects including the environmental credentials of their company – “the creek water that runs through the plant is cleaner at the end than when it comes into the plant”.

Now let’s picture a solar PV company which has just built its new factory next to a new coal-fired power station in China. You are the CEO or the marketing manager. An academic researcher calls to get data on the energy efficiency of your manufacturing process. Your data tells you that you consume a lot more power than the datapoints from Siemens and other progressive companies that have been published. Do you return the call?

There must be a “supplier selection” bias given the data is proprietary and providing the data will lead to more or less sales depending on the answer.

Perhaps I am wrong and the renewables focus of countries serious about reducing GHGs means that manufacturers are only put on the approved list for subsidies and feed-in tariffs when their factory has been thoroughly energy audited by an independent group?

In a fairly recent paper, Peng et al (2013) – two of whose co-authors appear to be the authors of the paper reviewed here – noted that mono-silicon (the type used in this study) has the highest energy inputs. They review a number of studies that appear to show significantly better energy paybacks. We will probably look at that paper in a subsequent article, but I did notice a couple of interesting points.

Many of the referenced studies are papers from 15 years ago which contain very limited production data (e.g. one value from one manufacturer). They comment on Knapp & Jester (2001), who show much higher values than other studies (including this one): “The results of both embodied energy and EBPT are very high, which deviate from the previous research results too much.” However, Knapp & Jester appeared to be very thorough:

This is instead a chiefly empirical endeavor, utilizing measured energy use, actual utility bills, production data and complete bill of materials to determine process energy and raw materials requirements. The materials include both direct materials, which are part of the finished product such as silicon, glass and aluminum, and indirect materials, which are used in the process but do not end up in the product such as solvents, argon, or cutting wire, many of which turn out to be significant.

All data are based on gross inputs, fully accounting for all yield losses without requiring any yield assumptions. The best available estimates for embodied energy content for these materials are combined with materials use to determine the total embodied and process energy requirements for each major step of the process..

..Excluded from the analysis are (a) energy embodied in the equipment and the facility itself, (b) energy needed to transport goods to and from the facility, (c) energy used by employees in commuting to work, and (d) decommissioning and disposal or other end-of-life energy requirements.

Perhaps Knapp & Jester got much higher results because their data was more complete? Perhaps they got much higher results because their data was wrong. I’m suspicious.. and by the way they didn’t include the cost of building the factory in their calculations.

A long time ago I worked in the semiconductor industry, and the cost of building new plants was a lot higher than the marginal cost of making wafers and chips. That was measured in $, not kWh, so I have no idea of the fixed vs marginal kWh cost of making semiconductors for solar PV cells.

Conclusion

One other point to consider: the GHG emissions of solar panels all occur at the start. The “recovered” GHG emissions of displaced conventional power accrue year by year.

Solar power is not a free lunch even though it looks like one. There appears to be a lot of focus on the subject so perhaps more definitive data in the near future will enable countries to measure their decarbonizing efforts with some accuracy. If governments giving subsidies for solar power are not getting independent audits of solar PV manufacturers they should be.

In case some readers think I’m trying to do a hatchet job on solar, I’m not.

I’m collecting and analyzing data and two things are crystal clear:

  • accurate data is not easily obtained and there may be a selection bias with inefficient manufacturers not providing data into these studies
  • the upfront “investment” in GHG emissions might result in a wonderful payback in reduction of long-term emissions, but change a few assumptions, especially putting solar panels into high-latitude energy-efficient countries, and it might turn out to be a very poor GHG investment

References

Environmental payback time analysis of a roof-mounted building-integrated photovoltaic (BIPV) system in Hong Kong, L. Lu, H.X. Yang, Applied Energy (2010)

Review on life cycle assessment of energy payback and greenhouse gas emission of solar photovoltaic systems, Jinqing Peng, Lin Lu & Hongxing Yang, Renewable and Sustainable Energy Reviews (2013)

Empirical investigation of energy payback time for photovoltaic modules, Knapp & Jester, Solar Energy (2001)

Notes

Note 1: I have no idea if it would be a lot higher. Many people are convinced that “next generation” battery technology will allow “stand-alone” solar PV. In this future scenario solar PV will not add intermittency to the grid and will, therefore, be amazing. Whether the economics mean this is 5 years away or 50, a note to the enthusiasts: check the GHG emissions from producing these (future) batteries.

Note 2: The paper didn’t explicitly give the solar cell area. I calculated it from a few different numbers they gave and it appears to be 150 m², which gives an annual average surface solar radiation of 1,770 kWh/m². Consulting a contour map of SE Asia shows that this value might be correct. For the purposes of the comparison it isn’t exactly critical.

Note 3: Putting 1 tonne of methane into the atmosphere causes a different (top-of-atmosphere) radiation change from 1 tonne of CO2. To make life simpler, given that CO2 is the primary anthropogenic GHG, all GHGs are converted into “equivalent CO2”.

This blog is about climate science.

I wanted to take a look at Renewable Energy because it’s interesting and related to climate science in an obvious way. Information from media sources confirms my belief that 99% of what is produced by the media is rehashed press releases from various organizations with very little fact checking. (Just a note for citizens alarmed by this statement – they are still the “go to source” for the weather, footage of disasters and partly-made-up stories about celebrities).

Regular readers of this blog know that the articles and discussion so far have only been about the science – what can be proven, what evidence exists, and so on. Questions about motives, about “things people might have done”, and so on, are not of interest in the climate discussion (not for this blog). There are much better blogs for that – with much larger readerships.

Here’s an extract from About this Blog:

Opinions
Opinions are often interesting and sometimes entertaining. But what do we learn from opinions? It’s more useful to understand the science behind the subject. What is this particular theory built on? How long has the theory been “established”? What lines of evidence support this theory? What evidence would falsify this theory? What do opposing theories say?
Anything else?
This blog will try and stay away from guessing motives and insulting people because of how they vote or their religious beliefs. However, this doesn’t mean we won’t use satire now and again as it can make the day more interesting.

The same principles will apply for this discussion about renewables. Our focus will be on the technical and commercial aspects of renewable energy, with an emphasis on evidence rather than “motive attribution”. And wishful thinking – wonderful though it is for reducing personal stress – will be challenged.

As always, the moderator reserves the right to remove comments that don’t meet these painful requirements.

Here’s a claim about renewables from a recent media article:

By Bloomberg New Energy Finance’s most recent calculations a new wind farm in Australia would cost $74 a megawatt hour..

..”Wind is already the cheapest, and solar PV [photovoltaic panels] will be cheaper than gas in around two years, in 2017. We project that wind will continue to decline in cost, though at a more modest rate than solar. Solar will become the dominant source in the longer term.”

I couldn’t find any evidence in the article that verified the claim. Only that it came from Bloomberg New Energy Finance and was the opposite of a radio shock jock. Generally I favor my dogs’ opinions over opinionated media people (unless it is about the necessity of an infinite supply of Schmackos starting now, right now). But I have a skeptical mindset and not knowing the wonderful people at Bloomberg I have no idea whether their claim is rock-solid accurate data, or “wishful thinking to promote their products so they can make lots of money and retire early”.

Calculating the cost of anything like this is difficult. What is the basis of the cost calculation? I don’t know whether BNEF’s calculation is “accurate” – but without context it is not such a useful number. That BNEF might have some vested interest in a favorable comparison over coal and gas is just something I assume.

But, like with climate science, instead of discussing motives and political stances, we will just try and figure out how the numbers stack up. We won’t be pitting coal companies (=devils or angels depending on your political beliefs) against wind turbine producers (=devils or angels depending on your political beliefs) or against green activists (=devils or angels depending on your political beliefs).

Instead we will look for data – a crazy idea and I completely understand how very unpopular it is. Luckily, I’m sure I can help people struggling with the idea to find better websites on which to comment.

Calculating the Cost

I’ve read the details of a few business plans and I’m sure that most other business plans also have the same issue – change a few parameters (=”assumptions”, often “reasonable assumptions”) and the outlook goes from amazing riches to destitution and bankruptcy.

The cost per MWh of wind energy will depend on a few factors:

  • cost of buying a wind turbine
  • land acquisition/land rental costs
  • installation cost
  • grid connection costs
  • the “backup requirement” aka “capacity credit”
  • cost of capital
  • lifetime of equipment
  • maintenance costs
  • % utilization (energy actually generated / energy at continuous nameplate output)

And of course, in any discussion about “the future”, favorable assumptions can be made about “the next generation”. Is the calculation of $74/MWh based on what was shipped 5 years ago and its actual performance, or on what is suggested for a turbine purchased next year?

If you want wind to look better than gas or coal – or the converse – there are enough variables to get the result you want. I’ll be amazed if you can’t change the relative costs by a factor of 5 by playing around with what appear to be reasonable assumptions.
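To make the point concrete, here is a minimal levelized-cost sketch in Python. Every number in it is hypothetical – chosen only to show how far apparently reasonable assumptions can move the result; it is certainly not BNEF’s method:

```python
# Minimal levelized cost of energy (LCOE) sketch - all inputs are
# hypothetical illustrations, not real project data.

def lcoe(capex_per_kw, om_per_kw_yr, lifetime_yrs, discount_rate, capacity_factor):
    """$/MWh: annualized capital cost plus O&M, divided by annual energy per kW."""
    # Capital recovery factor converts upfront capex into an equivalent annual cost
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yrs
           / ((1 + discount_rate) ** lifetime_yrs - 1))
    annual_cost_per_kw = capex_per_kw * crf + om_per_kw_yr   # $/kW/yr
    annual_mwh_per_kw = 8760 * capacity_factor / 1000        # MWh/kW/yr
    return annual_cost_per_kw / annual_mwh_per_kw

# The same wind farm under two sets of "reasonable assumptions":
print(lcoe(1400, 30, 25, 0.05, 0.40))   # optimistic:  ~ $37/MWh
print(lcoe(2200, 50, 15, 0.10, 0.25))   # pessimistic: ~ $155/MWh
```

With just these five inputs the answer moves by a factor of four – before we have touched grid connection, land or backup costs.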

Perhaps the data is easy to obtain. I’m sure many readers have some or all of this data to hand.

Moore’s Law and Other Industries

Most people are familiar with the now legendary statement from the 1960s about semiconductor performance doubling every 18 months. This revolution is amazing. But it’s unusual.

There are a lot of economies of scale from mass production in a factory. But in most industries the limits are reached fairly quickly, after which cost reductions of a few percent a year are great results – rather than producing the same product for 1% of what it cost just 10 years before. Semiconductors are the exception.

When a product is made from steel alloys, carbon fiber composites or similar materials we can’t expect Moore’s law to kick in. On the other hand, products that rely on a combination of software, electronic components and “traditional materials” and have been produced on small scales up until now can expect major cost reductions from amortizing costs (software, custom chips, tooling, etc) and general economies of scale (purchasing power, standardizing processes, etc).

In some industries, rapid growth actually causes cost increases. If you want an experienced team to provide project management, installation and commissioning services you might find that the boom in renewables is driving those costs up, not down.

A friend of mine working for a natural gas producer in Queensland, Australia recounted the story of the cost of building a dam a few years ago. Long story short, the internal estimates ranged from $2M to $7M, but when the tenders came in from general contractors the prices were $10M to $25M. The reason was a combination of:

  • escalating contractor costs (due to the boom)
  • compliance with new government environmental regulations
  • compliance with the customer’s many policies / OH&S requirements
  • the contractual risk due to all of the above, along with the significant proliferation of contract terms (i.e., will we get sued, have we taken on liabilities we don’t understand, etc)

The point being that an industry insider – i.e., the customer – with a strong vested interest in understanding current costs was out by a factor of more than three on a traditional piece of work. This kind of inaccuracy is unusual, but it can happen when the industry landscape is changing quickly.

Even if you have signed a fixed-price contract with an EPC (engineering, procurement and construction) contractor, you can only be sure this is the minimum you will be paying.

The only point I’m making is that a lot of costs are unknown even by experienced people in the field. Companies like BNEF might make some assumptions, but it’s a low-stress exercise when someone else will be paying the actual bills.

Intermittency & Grid Operators

We will discuss this further in future articles. This is a key difference between renewables and fossil fuel / nuclear power stations. Traditional power stations can generate energy when it is needed. Wind and solar – mainstays of the renewable revolution – generate energy when the sun shines and the wind blows.

As a starting point for any discussion let’s assume that storing energy is massively uneconomic. While new developments might be “around the corner”, storing energy today is very expensive; the only mechanism deployed at any real scale is pumped hydro. Of course, we can discuss this.

Grid operators have a challenge – balance demand with supply (because storage capacity is virtually zero). Demand is variable and although there is some predictability, there are unexpected changes even in the short term.

The demand curve depends on the country. For example, the UK has peak demand in the winter evenings. Wealthy hotter countries have peak demand in the summer in the middle of the day (air-conditioning).

There are two important principles:

  • Grid operators already have to deal with intermittency because conventional power stations go off-line with planned outages and with unplanned, last minute, outages
  • Renewables have a “capacity credit” that is usually less than their expected output

The first is a simple one. An example is the Sizewell B nuclear power station in the UK, supplying about 1GW out of 80GW of total grid supply. From time to time it shuts down and the grid operator gets very little notice, so grid operators already have to deal with this. They use statistical calculations to ensure sufficient excess supply during normal operation, based on an acceptable “loss of load probability”. Total electricity demand is variable and supply is continually adjusted to match it. Of course, the scale of intermittency from large penetration of renewables may present challenges that are difficult to deal with by comparison with current intermittency.
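To illustrate the kind of statistical calculation involved, here is a minimal Monte Carlo sketch – the fleet size, availability and demand figures are all hypothetical, not real grid data:

```python
import random

# Hypothetical fleet: 100 conventional units of 1 GW each, 90% available,
# serving an 85 GW peak demand. The loss of load probability (LOLP) is the
# chance that the capacity actually available falls short of demand.
random.seed(1)
N_UNITS, AVAILABILITY, PEAK_DEMAND_GW, TRIALS = 100, 0.90, 85.0, 100_000

shortfalls = 0
for _ in range(TRIALS):
    available_gw = sum(1.0 for _ in range(N_UNITS) if random.random() < AVAILABILITY)
    if available_gw < PEAK_DEMAND_GW:
        shortfalls += 1

print(f"LOLP ~ {shortfalls / TRIALS:.3f}")   # a few percent with these numbers
```

A real operator runs this in reverse: fix an acceptable LOLP, then work out how much excess capacity the fleet needs.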

The second is the difficult one. Here’s an example from Renewable Electricity and the Grid, edited by Godfrey Boyle – actually a collection of articles on (mainly) UK renewables:

Godfrey-p19

The essence of the calculation is probabilistic. At small penetration levels, the energy input from wind power displaces the need for energy generation from traditional sources. But as the percentage of wind power increases, the “potential down time” causes more problems – requiring more backup generation on standby. In the calculations above, wind going from 0.5 GW to 25 GW only saves 4 GW in conventional “capacity”. This is the meaning of capacity credit – adding 25 GW of wind power (under this simulation) provides a capacity credit of only 4 GW. So you can’t remove 25 GW of conventional capacity from the grid, only 4 GW.

Now the calculation of capacity credit depends on the specifics of the history of wind speeds in the region. Increasing the geographical spread of wind power generation produces better results, because wind speeds are less correlated across larger regions. Different countries get different results.
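The same Monte Carlo idea gives a sketch of capacity credit – the wind model below is deliberately crude (a single correlated draw for the whole fleet) and every number is hypothetical: add the wind fleet, then find how much extra demand can be served at the original loss-of-load probability.

```python
import random

random.seed(1)
N_UNITS, AVAILABILITY, DEMAND_GW, TRIALS = 100, 0.90, 85.0, 50_000

def conventional_gw():
    return sum(1.0 for _ in range(N_UNITS) if random.random() < AVAILABILITY)

def wind_gw(nameplate_gw):
    # Crude stand-in for "sometimes the wind turns off": one draw for the
    # whole fleet, skewed low, mean ~1/3 of nameplate
    return nameplate_gw * random.random() ** 2

def lolp(wind_nameplate_gw, extra_demand_gw):
    shortfalls = sum(
        1 for _ in range(TRIALS)
        if conventional_gw() + wind_gw(wind_nameplate_gw)
           < DEMAND_GW + extra_demand_gw)
    return shortfalls / TRIALS

base = lolp(0.0, 0.0)                     # reliability before any wind is added
for extra in range(26):                   # capacity credit = extra GW of demand
    if lolp(25.0, float(extra)) > base:   # servable at the same LOLP
        print(f"capacity credit of 25 GW wind ~ {extra - 1} GW")
        break
```

With these made-up numbers the credit comes out at only a few GW – far below the 25 GW nameplate – for the same qualitative reason as in the Boyle figure above: at some peaks the wind fleet contributes almost nothing.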

So there’s an additional cost with wind power that someone has to pay for – which increases along with the penetration of wind power. In the immediate future this might not be a problem because perhaps the capacity already exists and is just being put on standby. However, at some stage these older plants will be at end of life and conventional plants will need to be built to provide backup.

Many calculations exist of the estimated $/MWh cost of providing such backup. We will dig into those in future articles. My initial impression is that there are a lot of unknowns in the real cost of backup supply, because for much potential backup supply the lifetime / maintenance impact of frequent start-stops is unclear. A lot of this comes down to thermal stress – each thermal cycle costs $X, based on how many thousand starts the plant was designed to handle before a major overhaul is needed.

The Other Side of the Equation – Conventional Power

It will also be interesting to get some data around conventional power. Right now, the cost of displacing conventional power is new investment in renewables, but keeping conventional power is not free either. Every existing station has a finite life and will one day need to be replaced (or demand will need to be reduced). It might be a deferred cost but it is still a cost.

$ and GHG emissions

There is a cost to adding 1GW of wind power. There is a cost to adding 1GW of solar power. There is also a GHG cost – building a solar panel or a wind turbine is not energy-free and produces GHGs in the process. It would be interesting to get some data on this as well.

Conclusion – Introduction

I wrote this article because finding real data is demanding and many websites focused on the topic are advocacy-based with minimal data. Their starting point is often the insane folly and/or mendacious intent of “the other side”. The approach we will take here is to gather and analyze data.. As if the future of the world was not at stake. As if it was not a headlong rush into lunacy to try and generate most energy from renewables.. As if it was not an unbelievable sin to continue to create electricity from fossil fuels..

This approach might allow us to form conclusions from the data rather than the reverse.

Let’s see how this approach goes.

I am hoping many current (and future) readers can contribute to the discussion – with data, uncertainties, clarifications.

I’m not expecting to be able to produce “a number” for windpower or solar power. I’m hopeful that with some research, analysis and critical questions we might be able to summarize some believable range of values for the different elements of building a renewable energy supply, and also quantify the uncertainties.

Most of what I will write in future articles I don’t yet know. Perhaps someone already has a website where this project is complete, and my Part Two will just point readers there..

References

Renewable Electricity and the Grid: The Challenge of Variability, Godfrey Boyle (ed.), Earthscan (2007)

In Part Nine – Data I – Ts vs OLR we looked at the monthly surface temperature (“skin temperature”) from NCAR vs OLR measured by CERES. The slope of the data was about 2 W/m² per 1K surface temperature change. Commentators pointed out that this was really the seasonal relationship – it probably didn’t indicate anything further.

In Part Ten we looked at anomaly data: first where monthly means were removed; and then where daily means were removed. Mostly the data appeared to be a big noisy scatter plot with no slope. The main reason that I could see for this lack of relationship was that anomaly data didn’t “keep going” in one direction for more than a few days. So it’s perhaps unreasonable to expect that we would find any relationship, given that most circulation changes take time.

We haven’t yet looked at regional versions of Ts vs OLR, mainly because I can’t yet see what we could usefully plot. A large amount of heat is exported from the tropics to the poles, so without being able to itemize the amount of heat lost from a tropical region or gained by a mid-latitude or polar region, what could we deduce? One solution is to look at the whole globe in totality – which is what we have done.

In this article we’ll look at the mean global annual data. We only have CERES data for complete years from 2001 to 2013 (data wasn’t available to the end of 2014 when I downloaded it).

Here are the time-series plots for surface temperature and OLR:

Global annual Ts vs year & OLR  vs year 2001-2013

Figure 1

Here is the scatter plot of the above data, along with the best-fit regression line:

Global annual Ts vs OLR 2001-2013

Figure 2

The calculated slope is similar to the results we obtained from the monthly data (which probably showed the seasonal relationship). This is definitely year-to-year data, and it gives us a slope that indicates positive feedback. The correlation is not strong, as indicated by the R² value of 0.37, but it exists.

As explained in previous posts, a change of 3.6 W/m² per 1K is a “no feedback” relationship, where a uniform 1K change in surface & atmospheric temperature causes an OLR increase of 3.6 W/m² due to increased surface and atmospheric radiation – a greater increase in OLR would be negative feedback and a smaller increase would be positive feedback (e.g. see Part Eight with the plot of OLR changes vs latitude and height, which integrated globally gives 3.6 W/m²).
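As a rough sanity check on that figure (a grey-body sketch, not the latitude-and-height integration from Part Eight): differentiating the Stefan-Boltzmann law at the effective emission temperature Te ≈ 255 K gives

```latex
\frac{d(\mathrm{OLR})}{dT} = \frac{d}{dT}\left(\sigma T^{4}\right) = 4\sigma T_e^{3}
\approx 4 \times 5.67\times 10^{-8} \times 255^{3} \approx 3.8\ \mathrm{W\,m^{-2}\,K^{-1}}
```

which is in the same ballpark as the 3.6 W/m² per 1K from the proper calculation.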

The “no feedback” calculation is perhaps a bit more complicated than this, and I want to dig into it at some stage.

I haven’t looked at whether the result is sensitive to where the start of each year is placed. Next, I want to look at the changes in humidity, especially upper tropospheric water vapor, which is a key area for radiative changes. This will be a bit of work, because AIRS data comes in big files (there is a lot of data).

[I was going to post this new article not long after the last article in the series, but felt I was missing something important and needed to think about it. Instead I’ve not had any new insights and am posting for comment.]

In Part Nine – Data I, we looked at the relationship between Ts (surface temperature) and OLR (outgoing longwave radiation), for reasons all explained there.

The relationship shown there appears to be primarily the seasonal relationship, and it looks like a positive feedback due to the 2 W/m² per 1K temperature increase. What about the feedback on timescales other than seasonal?

From the 2001-2013 data, here is the monthly mean and the daily mean for both Ts and OLR:

Monthly mean & daily mean for Ts & OLR

Figure 1

If we remove the monthly mean from the data, here are those same relationships (shown in the last article as anomalies from the overall 2001-2013 mean):

OLR vs Ts - NCAR -CERES-monthlymeansremoved

Figure 2 – Click to Expand

On a lag of 1 day there is a possible relationship with a low correlation – and the rest of the lags show no relationship at all.

Of course, we have created a problem with this new dataset – as the lag increases we are “jumping boundaries”. For example, on the 7-day lag all of the Ts data in the last week of April is being compared with the OLR data in the first week of May. With slowly rising temperatures, the last week of April will be “positive temperature data”, but the first week of May will be “negative OLR data”. So we expect roughly 1/4 of our data (7 days out of each ~30-day month) to show the opposite relationship.

So we can show the data with the “monthly boundary jumps” removed – which means we can only show lags of, say, 1-14 days (with 3% – 50% of the data cut out); and we can also show the data as anomalies from the daily mean. Both have the potential to demonstrate the feedback on shorter timescales than the seasonal cycle.

First, here is the data with daily means removed:

OLR vs Ts - NCAR -CERES-dailymeansremoved

Figure 3 – Click to Expand

Second, here is the data with the monthly means removed as in figure 2, but this time ensuring that no monthly boundaries are crossed (so some of the data is removed to ensure this):

OLR vs Ts - NCAR -CERES-monthlymeansremoved-noboundary

Figure 4 – Click to Expand
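(For anyone wanting to reproduce figures 2-4, here is a minimal sketch of the anomaly and lag construction, assuming the daily global means are already in pandas Series indexed by date – the variable and function names are mine.)

```python
import numpy as np
import pandas as pd

def monthly_anomaly(series):
    """Subtract each calendar month's mean (per year) from the daily data."""
    grouped = series.groupby([series.index.year, series.index.month])
    return series - grouped.transform("mean")

def lagged_slope(ts_anom, olr_anom, lag_days, within_month_only=True):
    """Regression slope of OLR(t + lag) on Ts(t), optionally dropping pairs
    that straddle a month boundary (the 'boundary jump' problem)."""
    olr_lagged = olr_anom.shift(-lag_days)   # aligns OLR(t + lag) with Ts(t)
    pairs = pd.DataFrame({"ts": ts_anom, "olr": olr_lagged}).dropna()
    if within_month_only:
        t = pairs.index
        same_month = t.month == (t + pd.Timedelta(days=lag_days)).month
        pairs = pairs[same_month]
    slope, _ = np.polyfit(pairs["ts"], pairs["olr"], 1)
    return slope, pairs["ts"].corr(pairs["olr"]) ** 2   # slope and R-squared
```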

So basically this demonstrates no correlation between changes in daily global OLR and changes in daily global temperature on less-than-seasonal timescales. (Or “operator error” in the creation of my anomaly data.) This excludes (because we haven’t tested it here) the very short timescale of day-to-night change.

This was surprising at first sight.

That is, we see global Ts increasing on a given day but we can’t distinguish any corresponding change in global OLR from random changes, at least until we get to seasonal time periods? (See graph in last article).

Then the probable reason came into view. Remember that this is anomaly data (daily global temperature with the monthly mean subtracted). This bar graph demonstrates that, in the anomaly data, most changes in global Ts are reversed the next day, or at least within a few days:

Days temperature goes in same direction

Figure 5

This means that we are unlikely to see changes in Ts causing noticeable changes in OLR unless the climate response we are looking for (humidity and cloud changes) occurs within a day or two.
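(Counting the runs behind that bar chart takes only a few lines – a sketch assuming a daily anomaly series as above; the names are mine.)

```python
import numpy as np

def run_lengths(anom):
    """How many consecutive days the anomaly keeps moving the same way
    (rising day after day, or falling day after day)."""
    signs = np.sign(np.diff(np.asarray(anom)))   # +1 rising day, -1 falling day
    runs, current = [], 1
    for prev, cur in zip(signs[:-1], signs[1:]):
        if cur == prev and cur != 0:
            current += 1                         # run continues
        else:
            runs.append(current)                 # run broken: record and reset
            current = 1
    runs.append(current)
    return np.array(runs)

# np.bincount(run_lengths(ts_anom)) then gives the bar-chart counts:
# if most runs are only 1-2 days long, changes usually reverse quickly
```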

That’s my preliminary thinking, looking at the data – i.e., we can’t expect to see much of a relationship, and we don’t see any relationship.

One further point – explained in much more detail in the (short) series Measuring Climate Sensitivity – is that of course changes in temperature are not caused by some mechanism that is independent of radiative forcing.

That is, our measurement problem is compounded: changes in temperature are first caused by fluctuations in radiative forcing (the radiation balance) and ocean heat changes, and then we measure the change in the radiation balance that results from this temperature change:

Radiation balance & ocean heat balance => Temperature change => Radiation balance & ocean heat balance

So we can’t easily distinguish the net radiation change caused by temperature changes from the radiative contribution to the original temperature changes.

I look forward to readers’ comments.

Articles in the Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses – answering some questions about Part One

Part Two – some introductory ideas about water vapor including measurements

Part Three – effects of water vapor at different heights (non-linearity issues), problems of the 3d motion of air in the water vapor problem and some calculations over a few decades

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

Part Eight – Clear Sky Comparison of Models with ERBE and CERES – a paper from Chung et al (2010) showing clear sky OLR vs temperature vs models for a number of cases

Part Nine – Data I – Ts vs OLR – data from CERES on OLR compared with surface temperature from NCAR – and what we determine

Part Ten – Data II – Ts vs OLR – more on the data

In the last article we looked at a paper which tried to unravel – for clear sky only – how the OLR (outgoing longwave radiation) changed with surface temperature. It did the comparison by region, by season and from year to year.

The key point for new readers to understand – why are we interested in how OLR changes with surface temperature? The concept is not so difficult. The practical analysis presents more problems.

Let’s review the concept – and for more background please read at least the start of the last article: if we increase the surface temperature, perhaps due to increases in GHGs (but it could be for any reason), what happens to outgoing longwave radiation? Obviously, we expect OLR to increase. The real question is: by how much?

If there is no feedback then OLR should increase by about 3.6 W/m² for every 1K in surface temperature (these values are global averages):

  • If there is positive feedback, perhaps due to more humidity, then we expect OLR to increase by less than 3.6 W/m² – think “not enough heat got out to get things back to normal”
  • If there is negative feedback, then we expect OLR to increase by more than 3.6 W/m². In the paper we reviewed in the last article the authors found about 2 W/m² per 1K increase – a positive feedback, but were only considering clear sky areas

One reader asked about an outlier point on the regression slope and whether it affected the result. This motivated me to do something that has been on my list for a while now – get “all of the data” and analyse it. This way, we can review it and answer questions ourselves – as in the Visualizing Atmospheric Radiation series, where we created an atmospheric radiation model (first-principles physics) and used the detailed line-by-line absorption data from the HITRAN database to calculate how this change and that change affected the surface downward radiation (“back radiation”) and the top-of-atmosphere OLR.

With the raw surface temperature, OLR and humidity data “in hand” we can ask whatever questions we like and answer these questions ourselves..

NCAR reanalysis, CERES and AIRS

CERES and AIRS – satellite instruments – are explained in CERES, AIRS, Outgoing Longwave Radiation & El Nino.

CERES measures total OLR in a 1ºx 1º grid on a daily basis.

AIRS has a “hyper-spectral” instrument, which means it looks at lots of frequency channels. The intensity of radiation at these many wavelengths can be converted, via calculation, into measurements of atmospheric temperature at different heights, water vapor concentration at different heights, CO2 concentration, and concentration of various other GHGs. Additionally, AIRS calculates total OLR (it doesn’t measure it – i.e., it doesn’t have a measurement device covering 4μm – 100μm). It also measures parameters like “skin temperature” in some locations and calculates the same in other locations.

For the purposes of this article, I haven’t yet dug into the “how” and the reliability of surface AIRS measurements. The main point to note about satellites is they sit at the “top of atmosphere” and their ability to measure stuff near the surface depends on clever ideas and is often subverted by factors including clouds and surface emissivity. (AIRS has microwave instruments specifically to independently measure surface temperature even in cloudy conditions, because of this problem).

NCAR is a “reanalysis product”. It is not measurement, but it is “informed by measurement”. It is part measurement, part model. Where there is reliable data measurement over a good portion of the globe the reanalysis is usually pretty reliable – only being suspect at the times when new measurement systems come on line (so trends/comparisons over long time periods are problematic). Where there is little reliable measurement the reanalysis depends on the model (using other parameters to allow calculation of the missing parameters).

Some more explanation in Water Vapor Trends under the sub-heading Reanalysis – or Filling in the Blanks.

For surface temperature measurements reanalysis is not subverted by models too much. However, the mainstream surface temperature series are surely better than NCAR – I know that there is an army of “climate interested people” who follow this subject very closely. (I am not in that group).

I used NCAR because it is simple to download and extract. And I expect – but haven’t yet verified – that it will be quite close to the various mainstream surface temperature series. If someone is interested and can provide daily global temperature from another surface temperature series as an Excel, csv, .nc – or pretty much any data format – we can run the same analysis.

For those interested, see note 1 on accessing the data.

Results – Global Averages

For our starting point in this article I decided to look at global averages from 2001 to 2013 inclusive (data from CERES not yet available for the whole of 2014). This was after:

  • looking at daily AIRS data
  • creating and comparing NCAR over 8 days with AIRS 8-day averages for surface skin temperature and surface air temperature
  • creating and comparing AIRS over 8-days with CERES for TOA OLR

More on those points in later articles.

The global relationship between surface temperature and OLR is our primary interest – for the purpose of determining feedbacks. Then we want to figure out some detail about why it occurs. I am especially interested in the AIRS data because it is the only global measurement of upper tropospheric water vapor (UTWV) – and UTWV, along with clouds, is the key factor in the question of feedback: how OLR changes with surface temperature. For now, we will look at the simple relationship between surface temperature (“skin temperature”) and OLR.

Here is the data, shown as an anomaly from the global mean values over the period Jan 1st, 2001 to Dec 31st, 2013. Each graph represents a different lag – how does global OLR (CERES) change with global surface temperature (NCAR) on a lag of 1 day, 7 days, 14 days and so on:

OLR vs Ts - NCAR -CERES

Figure 1 – Click to Expand

The slope gives the “apparent feedback” and the R² simply reflects how much of the variation is explained by the linear fit. This last value is easily estimated just by looking at each graph.

For reference, here is the timeseries data, as anomalies, with the temperature anomaly multiplied by a factor of 3 so its magnitude is similar to the OLR anomaly:

OLR from CERES vs Ts from NCAR as timeseries

Figure 2 – Click to Expand

Note on the calculation – I used the daily data to calculate a global mean value (area-weighted) and calculated one mean value over the whole time period then subtracted it from every daily data value to obtain an anomaly for each day. Obviously we would get the same slope and R² without using anomaly data (just a different intercept on the axes).

For reference, mean OLR = 238.9 W/m², mean Ts = 288.0 K.
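In code, the global mean and anomaly steps described in the note above look something like this (a sketch assuming the gridded data is already a [time, lat, lon] numpy array; the names are mine):

```python
import numpy as np

def global_mean_series(field, lats_deg):
    """Area-weighted global mean of a [time, lat, lon] array.

    Grid cells shrink towards the poles, so each latitude band is
    weighted by the cosine of its latitude."""
    weights = np.cos(np.deg2rad(lats_deg))        # one weight per latitude band
    zonal_mean = field.mean(axis=2)               # average over longitude
    return (zonal_mean * weights).sum(axis=1) / weights.sum()

# daily_ts = global_mean_series(skt, lats)        # e.g. NCAR skin temperature
# ts_anom  = daily_ts - daily_ts.mean()           # anomaly from the 2001-2013 mean
```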

My first question – before even producing the graphs – was whether a lag graph shows the change in OLR due to a change in Ts or due to a mixture of many effects. That is, what is the interpretation of the graphs?

The second question – what is the “right lag” to use? We don’t expect an instant response when we are looking for feedbacks:

  • The OLR through the window region will of course respond instantly to surface temperature change
  • The OLR as a result of changing humidity will depend upon how long it takes for more evaporated surface water to move into the mid- to upper-troposphere
  • The OLR as a result of changing atmospheric temperature, in turn caused by changing surface temperature, will depend upon the mixture of convection and radiative cooling

To say we know the right answer in advance presupposes that we fully understand atmospheric dynamics. This is the question we are asking, so we can’t presuppose anything. But at least we can suggest that something in the realm of a few days to a few months is the most likely candidate for a reasonable lag.

But the idea that there is one constant feedback and one constant lag is an idea that might well be fatally flawed, despite being seductively simple. (A little more on that in note 3).

And that is one of the problems of this topic. Non-linear dynamics means non-linear results – a subject I find hard to describe in simple words. But let’s say – changes in OLR from changes in surface temperature might be “spread over” multiple time scales and be different at different times. (I have half-written an article trying to explain this idea in words, hopefully more on that sometime soon).

But for the purpose of this article I only wanted to present the simple results – for discussion and for more analysis to follow in subsequent articles.

Articles in the Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses – answering some questions about Part One

Part Two – some introductory ideas about water vapor including measurements

Part Three – effects of water vapor at different heights (non-linearity issues), problems of the 3d motion of air in the water vapor problem and some calculations over a few decades

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

Part Eight – Clear Sky Comparison of Models with ERBE and CERES – a paper from Chung et al (2010) showing clear sky OLR vs temperature vs models for a number of cases

Part Nine – Data I – Ts vs OLR – data from CERES on OLR compared with surface temperature from NCAR – and what we determine

Part Ten – Data II – Ts vs OLR – more on the data

References

Wielicki, B. A., B. R. Barkstrom, E. F. Harrison, R. B. Lee III, G. L. Smith, and J. E. Cooper, 1996: Clouds and the Earth’s Radiant Energy System (CERES): An Earth Observing System Experiment, Bull. Amer. Meteor. Soc., 77, 853-868   – free paper

Kalnay et al., The NCEP/NCAR 40-year reanalysis project, Bull. Amer. Meteor. Soc., 77, 437-470, 1996 – free paper

NCEP Reanalysis data provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their Web site at http://www.esrl.noaa.gov/psd/

Notes

Note 1: Boring Detail about Extracting Data

On the plus side, unlike many science journals, the data is freely available. Credit to the organizations that manage this data for their efforts in this regard, which includes visualization software and various ways of extracting data from their sites. However, you can still expect to spend a lot of time figuring out what files you want, where they are, downloading them, and then extracting the data from them. (Many traps for the unwary).

NCAR – data in .nc files, each parameter as a daily value (or 4x daily) in a separate annual .nc file on an (approx) 2.5º x 2.5º grid (actually T62 gaussian grid).

Data via ftp – ftp.cdc.noaa.gov. See http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.surface.html.

You get lat, long, and time in the file as well as the parameter. Care needed to navigate to the right folder because the filenames are the same for the 4x daily and the daily data.

NCAR uses latest-version .nc files (which Matlab circa 2010 would not open; I had to update to the latest Matlab version – many hours wasted trying to work out the reason for failure).

CERES – data in .nc files, you select the data you want and the time period but it has to be a less than 2G file and you get a file to download. I downloaded daily OLR data for each annual period. Data in a 1ºx 1º grid. CERES are using older version .nc so there should be no problem opening.

Data from http://ceres-tool.larc.nasa.gov/ord-tool/srbavg

AIRS – data in .hdf files, in daily, 8-day average, or monthly average. The data is “ascending” = daytime, “descending” = nighttime plus some other products. Daily data doesn’t give global coverage (some gaps). 8-day average does but there are some missing values due to quality issues. Data in a 1ºx 1º grid. I used v6 data.

Data access page – http://disc.sci.gsfc.nasa.gov/datacollection/AIRX3STD_V006.html?AIRX3STD&#tabs-1.

Data via ftp.

HDF is not trivial to open up. The AIRS team have helpfully provided a Matlab tool to extract data which helped me. I think I still spent many hours figuring out how to extract what I needed.

Files Sizes – it’s a lot of data:

NCAR files that I downloaded (skin temperature) are only 12MB per annual file.

CERES files with only 2 parameters are 190MB per annual file.

AIRS files as 8-day averages (or daily data) are 400MB per file.

Also the grid for each is different. Lat from S-pole to N-pole in CERES, the reverse for AIRS and NCAR. Long from 0.5º to 359.5º in CERES but -179.5 to 179.5 in AIRS. (Note for any Matlab people, it won’t regrid, say using interp2, unless the grid runs from lowest number to highest number).
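The same issue appears in Python with, for example, scipy’s RegularGridInterpolator, which also wants monotonic coordinate axes. A minimal sketch of the re-ordering fix (the array and function names are mine):

```python
import numpy as np

def make_axes_ascending(lats, lons, field):
    """Flip a [lat, lon] field so both coordinate axes increase -
    most interpolation routines require monotonically increasing grids."""
    if lats[0] > lats[-1]:                 # e.g. N-pole-first order
        lats, field = lats[::-1], field[::-1, :]
    if lons[0] > lons[-1]:
        lons, field = lons[::-1], field[:, ::-1]
    return lats, lons, field

# Converting CERES longitudes (0.5 to 359.5) to the AIRS convention
# (-179.5 to 179.5) also needs a wrap-and-sort:
# lons = np.where(lons > 180, lons - 360, lons)
# order = np.argsort(lons)
# lons, field = lons[order], field[:, order]
```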

Note 2: Checking data – because I plan on using the daily 1ºx1º grid data from CERES and NCAR, I used it to create the daily global averages. As a check I downloaded the global monthly averages from CERES and compared. There is a discrepancy, which averages at 0.1 W/m².

Here is the difference by month:

CERES-Monthly-discrepancy-by-month

Figure 3 – Click to expand

And a scatter plot by month of year, showing some systematic bias:

CERES-Monthly-discrepance-scatter-plot

Figure 4

As yet, I haven’t dug any deeper to find if this is documented – for example, is there a correction applied to the daily data product in monthly means? is there an issue with the daily data? or, more likely, have I %&^ed up somewhere?

Note 3: Extract from Measuring Climate Sensitivity – Part One:

Linear Feedback Relationship?

One of the biggest problems with the idea of climate sensitivity, λ, is the idea that it exists as a constant value.

From Cloud Feedbacks in the Climate System: A Critical Review, Stephens, Journal of Climate (2005):

The relationship between global-mean radiative forcing and global-mean climate response (temperature) is of intrinsic interest in its own right. A number of recent studies, for example, discuss some of the broad limitations of (1) and describe procedures for using it to estimate Q from GCM experiments (Hansen et al. 1997; Joshi et al. 2003; Gregory et al. 2004) and even procedures for estimating from observations (Gregory et al. 2002).

While we cannot necessarily dismiss the value of (1) and related interpretation out of hand, the global response, as will become apparent in section 9, is the accumulated result of complex regional responses that appear to be controlled by more local-scale processes that vary in space and time.

If we are to assume gross time–space averages to represent the effects of these processes, then the assumptions inherent to (1) certainly require a much more careful level of justification than has been given. At this time it is unclear as to the specific value of a global-mean sensitivity as a measure of feedback other than providing a compact and convenient measure of model-to-model differences to a fixed climate forcing (e.g., Fig. 1).

[Emphasis added and where the reference to “(1)” is to the linear relationship between global temperature and global radiation].

If, for example, λ is actually a function of location, season & phase of ENSO.. then clearly measuring overall climate response is a more difficult challenge.