
Archive for 2017

I generally try and avoid the media as much as possible (although the 2016 Circus did suck me in) but it’s still impossible to miss claims like the following:

Climate change is already causing worsening storms, floods and droughts

Before looking at predictions for the future I thought it was worth reviewing this claim, seeing as it is so prevalent and is presented as being the current consensus of climate science.

Droughts

SREX 2012, p. 171:

There is medium confidence that since the 1950s some regions of the world have experienced more intense and longer droughts (e.g., southern Europe, west Africa) but also opposite trends exist in other regions (e.g., central North America, northwestern Australia).

The report cites Sheffield and Wood 2008, who show graphs of a variety of drought metrics from around the world over the last 50 years – click to enlarge:

Figure 1 – From Sheffield & Wood 2008 – Click to enlarge

The results above were calculated from models driven by the available meteorological data. According to their analysis, some places have experienced more droughts and others fewer. Because the results are model-based, we can expect that other researchers may produce different results.

AR5, published a year after SREX, says, chapter 2, p. 214-215:

Because drought is a complex variable and can at best be incompletely represented by commonly used drought indices, discrepancies in the interpretation of changes can result. For example, Sheffield and Wood (2008) found decreasing trends in the duration, intensity and severity of drought globally. Conversely, Dai (2011a,b) found a general global increase in drought, although with substantial regional variation and individual events dominating trend signatures in some regions (e.g., the 1970s prolonged Sahel drought and the 1930s drought in the USA and Canadian Prairies). Studies subsequent to these continue to provide somewhat different conclusions on trends in global droughts and/ or dryness since the middle of the 20th century (Sheffield et al., 2012; Dai, 2013; Donat et al., 2013c; van der Schrier et al., 2013)..

..In summary, the current assessment concludes that there is not enough evidence at present to suggest more than low confidence in a global-scale observed trend in drought or dryness (lack of rainfall) since the middle of the 20th century, owing to lack of direct observations, geographical inconsistencies in the trends, and dependencies of inferred trends on the index choice.

Based on updated studies, AR4 conclusions regarding global increasing trends in drought since the 1970s were probably overstated.

The paper by Dai is Drought under global warming: a review, A Dai, Climate Change (2011) – for some reason I am unable to access it.

A later paper in Nature, Trenberth et al 2013 (including both Sheffield and Dai as co-authors) said:

Two recent papers looked at the question of whether large-scale drought has been increasing under climate change. A study in Nature by Sheffield et al entitled ‘Little change in global drought over the past 60 years’ was published at almost the same time that ‘Increasing drought under global warming in observations and models’ by Dai appeared in Nature Climate Change (published online in August 2012). How can two research groups arrive at such seemingly contradictory conclusions?

Another later paper on droughts, Orlowsky & Seneviratne 2013, likewise shows overwhelming evidence of more droughts – click to enlarge:

Figure 2 – From Orlowsky & Seneviratne 2013 – Click to enlarge

Floods

SREX 2012, p. 177:

Overall, there is low confidence (due to limited evidence) that anthropogenic climate change has affected the magnitude and frequency of floods, though it has detectably influenced several components of the hydrological cycle, such as precipitation and snowmelt, that may impact flood trends. The assessment of causes behind the changes in floods is inherently complex and difficult.

AR5, Chapter 2, p. 214:

AR5 WGII assesses floods in regional detail accounting for the fact that trends in floods are strongly influenced by changes in river management (see also Section 2.5.2). Although the most evident flood trends appear to be in northern high latitudes, where observed warming trends have been largest, in some regions no evidence of a trend in extreme flooding has been found, for example, over Russia based on daily river discharge (Shiklomanov et al., 2007).

Other studies for Europe (Hannaford and Marsh, 2008; Renard et al., 2008; Petrow and Merz, 2009; Stahl et al., 2010) and Asia (Jiang et al., 2008; Delgado et al., 2010) show evidence for upward, downward or no trend in the magnitude and frequency of floods, so that there is currently no clear and widespread evidence for observed changes in flooding except for the earlier spring flow in snow-dominated regions (Seneviratne et al., 2012).

In summary, there continues to be a lack of evidence and thus low confidence regarding the sign or trend in the magnitude and/or frequency of floods on a global scale.

[Note: the text in the bottom line cited says: “..regarding the sign of trend in the magnitude..” which I assume is a typo, so I changed “of” to “or”]

Storms

SREX, p. 159:

Detection of trends in tropical cyclone metrics such as frequency, intensity, and duration remains a significant challenge..

..Natural variability combined with uncertainties in the historical data makes it difficult to detect trends in tropical cyclone activity. There have been no significant trends observed in global tropical cyclone frequency records, including over the present 40-year period of satellite observations (e.g., Webster et al., 2005). Regional trends in tropical cyclone frequency have been identified in the North Atlantic, but the fidelity of these trends is debated (Holland and Webster, 2007; Landsea, 2007; Mann et al., 2007a). Different methods for estimating undercounts in the earlier part of the North Atlantic tropical cyclone record provide mixed conclusions (Chang and Guo, 2007; Mann et al., 2007b; Kunkel et al., 2008; Vecchi and Knutson, 2008).

Regional trends have not been detected in other oceans (Chan and Xu, 2009; Kubota and Chan, 2009; Callaghan and Power, 2011). It thus remains uncertain whether any observed increases in tropical cyclone frequency on time scales longer than about 40 years are robust, after accounting for past changes in observing capabilities (Knutson et al., 2010)..

..Time series of power dissipation, an aggregate compound of tropical cyclone frequency, duration, and intensity that measures total energy consumption by tropical cyclones, show upward trends in the North Atlantic and weaker upward trends in the western North Pacific over the past 25 years (Emanuel, 2007), but interpretation of longer-term trends in this quantity is again constrained by data quality concerns.

The variability and trend of power dissipation can be related to SST and other local factors such as tropopause temperature and vertical wind shear (Emanuel, 2007), but it is a current topic of debate whether local SST or the difference between local SST and mean tropical SST is the more physically relevant metric (Swanson, 2008).

The distinction is an important one when making projections of changes in power dissipation based on projections of SST changes, particularly in the tropical Atlantic where SST has been increasing more rapidly than in the tropics as a whole (Vecchi et al., 2008). Accumulated cyclone energy, which is an integrated metric analogous to power dissipation, has been declining globally since reaching a high point in 2005, and is presently at a 40- year low point (Maue, 2009). The present period of quiescence, as well as the period of heightened activity leading up to the high point in 2005, does not clearly represent substantial departures from past variability (Maue, 2009)..

..The present assessment regarding observed trends in tropical cyclone activity is essentially identical to the WMO assessment (Knutson et al., 2010): there is low confidence that any observed long-term (i.e., 40 years or more) increases in tropical cyclone activity are robust, after accounting for past changes in observing capabilities.

AR5, Chapter 2, p. 216:

AR4 concluded that it was likely that an increasing trend had occurred in intense tropical cyclone activity since 1970 in some regions but that there was no clear trend in the annual numbers of tropical cyclones. Subsequent assessments, including SREX and more recent literature indicate that it is difficult to draw firm conclusions with respect to the confidence levels associated with observed trends prior to the satellite era and in ocean basins outside of the North Atlantic.

Lots more tropical storms:

Figure 3 – From AR5, WG I

Note that a more important metric than “how many?” is “how severe?” or a combination of both.

And for extra-tropical storms (i.e. outside the tropics), SREX p. 166:

In summary it is likely that there has been a poleward shift in the main Northern and Southern Hemisphere extratropical storm tracks during the last 50 years. There is medium confidence in an anthropogenic influence on this observed poleward shift. It has not formally been attributed.

There is low confidence in past changes in regional intensity.

And AR5, chapter 2, p. 217 & 220:

Some studies show an increase in intensity and number of extreme Atlantic cyclones (Paciorek et al., 2002; Lehmann et al., 2011) while others show opposite trends in eastern Pacific and North America (Gulev et al., 2001). Comparisons between studies are hampered because of the sensitivities in identification schemes and/ or different definitions for extreme cyclones (Ulbrich et al., 2009; Neu et al., 2012). The fidelity of research findings also rests largely with the underlying reanalyses products that are used..

..In summary, confidence in large scale changes in the intensity of extreme extratropical cyclones since 1900 is low. There is also low confidence for a clear trend in storminess proxies over the last century due to inconsistencies between studies or lack of long-term data in some parts of the world (particularly in the SH). Likewise, confidence in trends in extreme winds is low, owing to quality and consistency issues with analysed data.

Discussion

The IPCC SREX and AR5 reports were published in 2012 and 2013 respectively. New research published since then will have analyzed the same data, possibly reaching different conclusions. When you have large decadal variability in poorly observed data, with a small or non-existent trend, different groups will inevitably reach different conclusions about those trends. And if you focus on specific regions you can demonstrate a clear and unmistakable trend.

If you are looking for a soundbite just pick the right region.

The last 100 years have seen global warming. As this blog has made clear from the physics, more GHGs (all other things remaining equal) result in more warming. What proportion of the warming over the last 100 years is intrinsic climate variability vs anthropogenic GHGs, I have no idea.

The last century has seen no clear globally averaged change in floods, droughts or storms – as best as we can tell with very incomplete observing systems. Of course, some regions have definitely seen more, and some regions have definitely seen less. Whether this is different from the period from 1800-1900 or from 1700-1800 no one knows. Perhaps floods, droughts and tropical storms increased globally from 1700-1900. Perhaps they decreased. Perhaps the last 100 years have seen more variability. Perhaps not. (And in recognition of Poe’s law, I note that a few statements within the article presenting graphs did say the opposite of the graphs presented).

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

References

SREX = Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation Special Report, IPCC (2012)

Observations: Atmosphere and Surface. Chapter 2 of Working Group I to AR5, DL Hartmann et al (2013)

Global Trends and Variability in Soil Moisture and Drought Characteristics, 1950–2000, from Observation-Driven Simulations of the Terrestrial Hydrologic Cycle, Justin Sheffield & Eric Wood, Journal of Climate (2008) – free paper

Global warming and changes in drought, Kevin E Trenberth et al, Nature (2013) – free paper

Elusive drought: uncertainty in observed trends and short- and long-term CMIP5 projections, B Orlowsky & SI Seneviratne, Hydrology and Earth System Sciences (2013) – free paper


In Impacts – II – GHG Emissions Projections: SRES and RCP we looked at projections of emissions under various scenarios with the resulting CO2 (and other GHG) concentrations and resulting radiative forcing.

Why do we need these scenarios? Because even if climate models were perfect and could accurately calculate the temperature 100 years from now, we wouldn’t know how much “anthropogenic CO2” (and other GHGs) would have been emitted by that time. The scenarios allow climate modelers to produce temperature (and other climate variable) projections on the basis of each of these scenarios.

The IPCC AR5 (fifth assessment report) from 2013 says (chapter 12, p. 1031):

Global mean temperatures will continue to rise over the 21st century if greenhouse gas (GHG) emissions continue unabated.

Under the assumptions of the concentration-driven RCPs, global mean surface temperatures for 2081–2100, relative to 1986–2005 will likely be in the 5 to 95% range of the CMIP5 models:

  • 0.3°C to 1.7°C (RCP2.6)
  • 1.1°C to 2.6°C (RCP4.5)
  • 1.4°C to 3.1°C (RCP6.0)
  • 2.6°C to 4.8°C (RCP8.5)

Global temperatures averaged over the period 2081–2100 are projected to likely exceed 1.5°C above 1850-1900 for RCP4.5, RCP6.0 and RCP8.5 (high confidence), are likely to exceed 2°C above 1850-1900 for RCP6.0 and RCP8.5 (high confidence) and are more likely than not to exceed 2°C for RCP4.5 (medium confidence). Temperature change above 2°C under RCP2.6 is unlikely (medium confidence). Warming above 4°C by 2081–2100 is unlikely in all RCPs (high confidence) except for RCP8.5, where it is about as likely as not (medium confidence).

I commented in Part II that RCP8.5 seemed to be a scenario that didn’t match up with the last 40-50 years of development. Of course, the various scenario developers give their caveats, for example, Riahi et al 2007:

Given the large number of variables and their interdependencies, we are of the opinion that it is impossible to assign objective likelihoods or probabilities to emissions scenarios. We have also not attempted to assign any subjective likelihoods to the scenarios either. The purpose of the scenarios presented in this Special Issue is, rather, to span the range of uncertainty without an assessment of likely, preferable, or desirable future developments..

Readers should exercise their own judgment on the plausibility of above scenario ‘storylines’..

To me RCP6.0 seems a more likely future (compared with RCP8.5) in a world that doesn’t make any significant attempt to tackle CO2 emissions. That is, no major change in climate policy compared with today’s world, but similar economic and population development (note 1).

Here is the graph of projected temperature anomalies for the different scenarios:

Figure 1 – From AR5, chapter 12

That graph is hard to read for 2100, so here is the table of corresponding data. I highlighted RCP6.0 in 2100 – you can click to enlarge the table:

From AR5, chapter 12, Table 12.2

Figure 2 – Click to expand

Probabilities and Lists

The table above gives a “1 std deviation” range and a 5%–95% range. The graph (which has the same source data) has shading to indicate the 5%–95% range of models for each RCP scenario.

These have no relation to real probability distributions. That is, the 5–95% range for RCP6.0 doesn’t equate to: “it is 90% probable that the average temperature for 2081–2100 will be 1.4–3.1ºC higher than the 1986–2005 average”.
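To make the distinction concrete, here is a minimal sketch (with made-up anomaly values, not actual CMIP5 output) of what that 5%–95% shading is computed from – percentiles of whichever models happened to take part:

```python
import numpy as np

# Made-up end-of-century anomalies (deg C) from a hypothetical ensemble of models
anomalies = np.array([1.4, 1.7, 1.9, 2.1, 2.2, 2.4, 2.6, 2.9, 3.1])

# The "5-95%" shading is just the spread of the self-selected ensemble members
lo, hi = np.percentile(anomalies, [5, 95])
print(f"model spread: {lo:.1f} to {hi:.1f} C")  # a model range, not a 90% probability interval
```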

A number of climate models are used to produce simulations and the results from these “ensembles” are sometimes pressed into “probability service”. For some concept background on ensembles read Ensemble Forecasting.

Here is IPCC AR5 chapter 12:

Ensembles like CMIP5 do not represent a systematically sampled family of models but rely on self-selection by the modelling groups.

This opportunistic nature of MMEs [multi-model ensembles] has been discussed, for example, in Tebaldi and Knutti (2007) and Knutti et al. (2010a). These ensembles are therefore not designed to explore uncertainty in a coordinated manner, and the range of their results cannot be straightforwardly interpreted as an exhaustive range of plausible outcomes, even if some studies have shown how they appear to behave as well calibrated probabilistic forecasts for some large-scale quantities. Other studies have argued instead that the tail of distributions is by construction undersampled.

In general, the difficulty in producing quantitative estimates of uncertainty based on multiple model output originates in their peculiarities as a statistical sample, neither random nor systematic, with possible dependencies among the members and of spurious nature, that is, often counting among their members models with different degrees of complexities (different number of processes explicitly represented or parameterized) even within the category of general circulation models..

..In summary, there does not exist at present a single agreed on and robust formal methodology to deliver uncertainty quantification estimates of future changes in all climate variables. As a consequence, in this chapter, statements using the calibrated uncertainty language are a result of the expert judgement of the authors, combining assessed literature results with an evaluation of models demonstrated ability (or lack thereof) in simulating the relevant processes (see Chapter 9) and model consensus (or lack thereof) over future projections. In some cases when a significant relation is detected between model performance and reliability of its future projections, some models (or a particular parametric configuration) may be excluded but in general it remains an open research question to find significant connections of this kind that justify some form of weighting across the ensemble of models and produce aggregated future projections that are significantly different from straightforward one model–one vote ensemble results. Therefore, most of the analyses performed for this chapter make use of all available models in the ensembles, with equal weight given to each of them unless otherwise stated.

And from one of the papers cited in that section of chapter 12, Jackson et al 2008:

In global climate models (GCMs), unresolved physical processes are included through simplified representations referred to as parameterizations.

Parameterizations typically contain one or more adjustable phenomenological parameters. Parameter values can be estimated directly from theory or observations or by “tuning” the models by comparing model simulations to the climate record. Because of the large number of parameters in comprehensive GCMs, a thorough tuning effort that includes interactions between multiple parameters can be very computationally expensive. Models may have compensating errors, where errors in one parameterization compensate for errors in other parameterizations to produce a realistic climate simulation (Wang 2007; Golaz et al. 2007; Min et al. 2007; Murphy et al. 2007).

The risk is that, when moving to a new climate regime (e.g., increased greenhouse gases), the errors may no longer compensate. This leads to uncertainty in climate change predictions. The known range of uncertainty of many parameters allows a wide variance of the resulting simulated climate (Murphy et al. 2004; Stainforth et al. 2005; M. Collins et al. 2006). The persistent scatter in the sensitivities of models from different modeling groups, despite the effort represented by the approximately four generations of modeling improvements, suggests that uncertainty in climate prediction may depend on underconstrained details and that we should not expect convergence anytime soon.

Stainforth et al 2005 (referenced in the quote above) tried much larger ensembles of coarser resolution climate models, and was discussed in the comments of Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes. Rowlands et al 2012 is similar in approach and was discussed in Natural Variability and Chaos – Five – Why Should Observations match Models?

The way I read the IPCC reports and various papers, the projections are clearly not a probability distribution. Yet the data inevitably gets used as a de facto probability distribution.

Conclusion

“All models are wrong but some are useful”, as George Box said – actually in quite an unrelated field (i.e., not climate). But it’s a good saying.

Many people who describe themselves as “lukewarmers” believe that climate sensitivity as characterized by the IPCC is too high and the real climate has a lower sensitivity. I have no idea.

Models may be wrong, but I don’t have an alternative model to provide. And therefore, given that they represent climate better than any current alternative, their results are useful.

We can’t currently create a real probability distribution from a set of temperature prediction results (assuming a given emissions scenario).

How useful is it to know that under a scenario like RCP6.0 the average global temperature increase in 2100 has been simulated as variously 1ºC, 2ºC, 3ºC, 4ºC? (note, I haven’t checked the CMIP5 simulations to get each value). And the tropics will vary less, land more? As we dig into more details we will attempt to look at how reliable regional and seasonal temperature anomalies might be compared with the overall number. Likewise rainfall and other important climate values.

I do find it useful to keep the idea of a set of possible numbers with no probability assigned. Then at some stage we can say something like, “if this RCP scenario turns out to be correct and the global average surface temperature actually increases by 3ºC by 2100, we know the following are reasonable assumptions … but we currently can’t make any predictions about these other values..”

References

Long-term Climate Change: Projections, Commitments and Irreversibility, M Collins et al (2013) – In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change

Scenarios of long-term socio-economic and environmental development under climate stabilization, Keywan Riahi et al, Technological Forecasting & Social Change (2007) – free paper

Error Reduction and Convergence in Climate Prediction, Charles S Jackson et al, Journal of Climate (2008) – free paper

Notes

Note 1: As explored a little in the last article, RCP6.0 does include some changes to climate policy but it seems they are not major. I believe a very useful scenario for exploring impact assessments would be the population and development path of RCP6.0 (let’s call it RCP6.0A) without any climate policies.

For reasons of “scenario parsimony” this interesting pathway escapes attention.


In Part II we looked at various scenarios for emissions. One important determinant is how world population will change through this century, so I thought the topic worth digging into a little.

Here is Lutz, Sanderson & Scherbov, Nature (2001):

The median value of our projections reaches a peak around 2070 at 9.0 billion people and then slowly decreases. In 2100, the median value of our projections is 8.4 billion people with the 80 per cent prediction interval bounded by 5.6 and 12.1 billion.

Figure 1 – From Lutz 2001 – Click to enlarge

This paper is behind a paywall but Lutz references the 1996 book he edited for assumptions, which is freely available (link below).

In it the authors comment, p. 22:

Some users clearly want population figures for the year 2100 and beyond. Should the demographer disappoint such expectations and leave it to others with less expertise to produce them? The answer given in this study is no. But as discussed below, we make a clear distinction between what we call projections up to 2030-2050 and everything beyond that time, which we term extensions for illustrative purposes.

[Emphasis added]

And then p.32:

Sanderson (1995) shows that it is impossible to produce “objective” confidence ranges for future population projections. Subjective confidence intervals are the best we can ever attain because assumptions are always involved.

Here are some more recent views.

Gerland et al 2014 – Gerland is from the Population Division of the UN:

The United Nations recently released population projections based on data until 2012 and a Bayesian probabilistic methodology. Analysis of these data reveals that, contrary to previous literature, world population is unlikely to stop growing this century. There is an 80% probability that world population, now 7.2 billion, will increase to between 9.6 and 12.3 billion in 2100. This uncertainty is much smaller than the range from the traditional UN high and low variants. Much of the increase is expected to happen in Africa, in part due to higher fertility and a recent slowdown in the pace of fertility decline..

..Among the most robust empirical findings in the literature on fertility transitions are that higher contraceptive use and higher female education are associated with faster fertility decline. These suggest that the projected rapid population growth could be moderated by greater investments in family planning programs to satisfy the unmet need for contraception, and in girls’ education. It should be noted, however, that the UN projections are based on an implicit assumption of a continuation of existing policies, but an intensification of current investments would be required for faster changes to occur

Wolfgang Lutz & Samir KC (2010). Lutz seems popular in this field:

The total size of the world population is likely to increase from its current 7 billion to 8–10 billion by 2050. This uncertainty is because of unknown future fertility and mortality trends in different parts of the world. But the young age structure of the population and the fact that in much of Africa and Western Asia, fertility is still very high makes an increase by at least one more billion almost certain. Virtually, all the increase will happen in the developing world. For the second half of the century, population stabilization and the onset of a decline are likely..

Although the paper focuses only on the period to 2050 rather than 2100, it does include a graph of probabilistic projections to 2100, and it has some interesting commentary on how different forecasting groups deal with uncertainty, the huge role women’s education plays in reducing fertility, and many other stories, for example:

The Demographic and Health Survey for Ethiopia, for instance, shows that women without any formal education have on average six children, whereas those with secondary education have only two (see http://www.measuredhs.com). Significant differentials can be found in most populations of all cultural traditions. Only in a few modern societies does the strongly negative association give way to a U-shaped pattern in which the most educated women have a somewhat higher fertility than those with intermediate education. But globally, the education differentials are so pervasive that education may well be called the single most important observable source of population heterogeneity after age and sex (Lutz et al. 1999). There are good reasons to assume that during the course of a demographic transition the fact that higher education leads to lower fertility is a true causal mechanism, where education facilitates better access to and information about family planning and most importantly leads to a change in attitude in which ‘quantity’ of children is replaced by ‘quality’, i.e. couples want to have fewer children with better life chances..

Lee 2011, another very interesting paper, makes this comment:

The U.N. projections assume that fertility will slowly converge toward replacement level (2.1 births per woman) by 2100

Lutz’s book had a similar hint that many demographers assume societies en masse will somehow converge towards a steady state. Lee also comments that probability treatments for “low”, “medium” and “high” scenarios are not very realistic, because the methods used assume a correlation between different countries that isn’t true in practice. Lutz makes similar points. Here is Lee:

Special issues arise in constructing consistent probability intervals for individual countries, for regions, and for the world, because account must be taken of the positive or negative correlations among the country forecast errors within regions and across regions. Since error correlation is typically positive but less than 1.0, country errors tend to cancel under aggregation, and the proportional error bounds for the world population are far narrower than for individual countries. The NRC study (20) found that the average absolute country error was 21% while the average global error was only 3%. When the High, Medium and Low scenario approach is used, there is no cancellation of error under aggregation, so the probability coverage at different levels of aggregation cannot be handled consistently. An ongoing research collaboration between the U.N. Population Division and a team led by Raftery is developing new and very promising statistical methods for handling uncertainty in future forecasts.
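Lee’s point about error cancellation can be illustrated with a small simulation – a sketch with made-up numbers (the 21% per-country error is the NRC figure quoted above; the correlation value is assumed, and the smaller it is, the more the country errors cancel):

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_draws = 100, 20000
country_err, rho = 0.21, 0.05  # 21% per-country error (NRC figure); rho is an assumed correlation

# Build correlated proportional errors from a shared component plus idiosyncratic noise
shared = rng.standard_normal((n_draws, 1))
idio = rng.standard_normal((n_draws, n_countries))
errors = country_err * (np.sqrt(rho) * shared + np.sqrt(1 - rho) * idio)

# Equal-sized countries for simplicity: the world error is the mean of country errors
world_err = errors.mean(axis=1)
print(errors.std())    # ~0.21 per country
print(world_err.std()) # much smaller: positive-but-imperfect correlation lets errors cancel
```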

And then on UN projections:

One might quibble with this or that assumption, but the UN projections have had an impressive record of success in the past, particularly at the global level, and I expect that to continue in the future. To a remarkable degree, the UN has sought out expert advice and experimented with cutting edge forecasting techniques, while maintaining consistency in projections. But in forecasting, errors are inevitable, and sound decision making requires that the likelihood of errors be taken into account. In this respect, there is much room for improvement in the UN projections and indeed in all projections by government statistical offices.

This comment looks like an oblique academic gentle slapping around (disguised as praise), but it’s hard to tell.

Conclusion

I don’t have a conclusion. I thought it would be interesting to find some demographic experts and show their views on future population trends. The future is always hard to predict – although in demography the next 20 years are usually easy to predict, short of global plagues and famines.

It does seem hard to have much idea about the population in 2100, but the difference between a population of 8bn and 11bn will have a large impact on CO2 emissions (without very significant CO2 mitigation policies).
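One way to make that point concrete is the Kaya identity, which decomposes emissions into population × GDP per capita × energy intensity × carbon intensity. Here is a minimal sketch with illustrative, made-up values for the non-population factors – only the relative comparison matters:

```python
def kaya_emissions(population, gdp_per_capita, energy_per_gdp, co2_per_energy):
    # Kaya identity: CO2 = P * (GDP/P) * (E/GDP) * (CO2/E)
    return population * gdp_per_capita * energy_per_gdp * co2_per_energy

# Illustrative values held fixed; only population varies between the two cases
low = kaya_emissions(8e9, 20_000, 5.0, 0.06)
high = kaya_emissions(11e9, 20_000, 5.0, 0.06)
print(high / low)  # ~1.38: all else equal, 11bn people emit ~38% more than 8bn
```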

References

The end of world population growth, Wolfgang Lutz, Warren Sanderson & Sergei Scherbov, Nature (2001) – paywall paper

The future population of the world – what can we assume?, edited Wolfgang Lutz, Earthscan Publications (1996) – freely available book

World Population Stabilization Unlikely This Century, Patrick Gerland et al, Science (2014) – free paper

Dimensions of global population projections: what do we know about future population trends and structures? Wolfgang Lutz & Samir KC, Phil. Trans. R. Soc. B (2010)

The Outlook for Population Growth, Ronald Lee, Science (2011) – free paper


In one of the iconic climate model tests, CO2 is doubled from a pre-industrial level of 280ppm to 560ppm “overnight” and we find the new steady state surface temperature. The change in CO2 is an input to the climate model, also known as a “forcing” because it is from outside. That is, humans create more CO2 from generating electricity, driving automobiles and other activities – this affects the climate and the climate responds.

These experiments with simple climate models were first done with 1d radiative-convective models in the 1960s. For example, Manabe & Wetherald 1967 found a 2.3ºC surface temperature increase with constant relative humidity and 1.3ºC with constant absolute humidity (and for many reasons constant relative humidity seems more likely to be closer to reality than constant absolute humidity).

In other experiments, especially more recently, more complex GCMs simulate 100 years with the CO2 concentration being gradually increased, in line with projections of future emissions – and we see what happens to temperature over time.

There are also other GHGs (“greenhouse” gases / radiatively-active gases) in the atmosphere that are changing due to human activity – especially methane (CH4) and nitrous oxide (N2O). And of course, the most important GHG is water vapor, but changes in water vapor concentration are a climate feedback – that is, changes in water vapor result from temperature (and circulation) changes.

And there are aerosols, some internally generated within the climate and others emitted by human activity. These also affect the climate in a number of ways.

We don’t know what future anthropogenic emissions will be. What will humans do? Build lots more coal-fired power stations to meet the energy demand of the future? Run the entire world’s power grid from wind and solar by 2040? Finally invent practical nuclear fusion? How many people will there be?

So for this we need some scenarios of future human activity (note 1).

Scenarios – SRES and RCP

SRES was published in 2000:

In response to a 1994 evaluation of the earlier IPCC IS92 emissions scenarios, the 1996 Plenary of the IPCC requested this Special Report on Emissions Scenarios (SRES) (see Appendix I for the Terms of Reference). This report was accepted by the Working Group III (WGIII) plenary session in March 2000. The long-term nature and uncertainty of climate change and its driving forces require scenarios that extend to the end of the 21st century. This Report describes the new scenarios and how they were developed.

The SRES scenarios cover a wide range of the main driving forces of future emissions, from demographic to technological and economic developments. As required by the Terms of Reference, none of the scenarios in the set includes any future policies that explicitly address climate change, although all scenarios necessarily encompass various policies of other types.

The set of SRES emissions scenarios is based on an extensive assessment of the literature, six alternative modeling approaches, and an “open process” that solicited wide participation and feedback from many groups and individuals. The SRES scenarios include the range of emissions of all relevant species of greenhouse gases (GHGs) and sulfur and their driving forces..

..A set of scenarios was developed to represent the range of driving forces and emissions in the scenario literature so as to reflect current understanding and knowledge about underlying uncertainties. They exclude only outlying “surprise” or “disaster” scenarios in the literature. Any scenario necessarily includes subjective elements and is open to various interpretations. Preferences for the scenarios presented here vary among users. No judgment is offered in this Report as to the preference for any of the scenarios and they are not assigned probabilities of occurrence, neither must they be interpreted as policy recommendations..

..By 2100 the world will have changed in ways that are difficult to imagine – as difficult as it would have been at the end of the 19th century to imagine the changes of the 100 years since. Each storyline assumes a distinctly different direction for future developments, such that the four storylines differ in increasingly irreversible ways. Together they describe divergent futures that encompass a significant portion of the underlying uncertainties in the main driving forces. They cover a wide range of key “future” characteristics such as demographic change, economic development, and technological change. For this reason, their plausibility or feasibility should not be considered solely on the basis of an extrapolation of current economic, technological, and social trends.

The RCPs were in part a new version of the same idea as SRES, published in 2011. My understanding is that the Representative Concentration Pathways were built around final values of radiative forcing in 2100 that were considered in the modeling literature, and you can see this in the name of each RCP.

From A special issue on the RCPs, van Vuuren et al (2011):

By design, the RCPs, as a set, cover the range of radiative forcing levels examined in the open literature and contain relevant information for climate model runs.

[Emphasis added]

From The representative concentration pathways: an overview, van Vuuren et al (2011):

This paper summarizes the development process and main characteristics of the Representative Concentration Pathways (RCPs), a set of four new pathways developed for the climate modeling community as a basis for long-term and near-term modeling experiments.

The four RCPs together span the range of year 2100 radiative forcing values found in the open literature, i.e. from 2.6 to 8.5 W/m². The RCPs are the product of an innovative collaboration between integrated assessment modelers, climate modelers, terrestrial ecosystem modelers and emission inventory experts. The resulting product forms a comprehensive data set with high spatial and sectoral resolutions for the period extending to 2100..

..The RCPs are named according to radiative forcing target level for 2100. The radiative forcing estimates are based on the forcing of greenhouse gases and other forcing agents. The four selected RCPs were considered to be representative of the literature, and included one mitigation scenario leading to a very low forcing level (RCP2.6), two medium stabilization scenarios (RCP4.5/RCP6) and one very high baseline emission scenarios (RCP8.5).

Here are some graphs from the RCP introduction paper:

Population and GDP scenarios:


Figure 1 – Click to expand

I was surprised by the population graph for RCP 8.5 and 6 (similar scenarios are generated in SRES). From reading various sources (but not diving into any detailed literature) I understood that the consensus was for population to peak mid-century at around 9bn people and then reduce back to something like 7-8bn people by the end of the century. This is because all countries that have experienced rising incomes have significantly reduced average fertility rates.

Here is Angus Deaton, in his fascinating and accessible book on The Great Escape, as he calls it (that is, our escape from poverty and poor health):

In Africa in 1950, each woman could expect to give birth to 6.6 children; by 2000, that number had fallen to 5.1, and the UN estimates that it is 4.4 today. In Asia as well as in Latin America and the Caribbean, the decline has been even larger, from 6 children to just over 2..

The annual rate of growth of the world’s population, which reached 2.2% in 1960, was only half of that in 2011.

The GDP graph on the right (above) lacks a definition. From the other papers covering the scenarios I understand it to be total world GDP in US$ trillions (at 2000 values, i.e. adjusted for inflation), although the numbers don’t seem to align exactly.

Energy consumption for the different scenarios:

Figure 2 – Click to expand

Annual emissions:

Figure 3 – Click to expand

Resulting concentrations in the atmosphere for CO2, CH4 (methane) and N2O (nitrous oxide):


Figure 4 – Click to expand

Radiative forcing (for explanation of this term, see for example Wonderland, Radiative Forcing and the Rate of Inflation):


Figure 5 – Click to expand

We can see from this figure (fig 5, their fig 10) that the RCP numbers refer to the expected radiative forcing in 2100 – so RCP8.5, often known as the “business as usual” scenario, has a radiative forcing in 2100, compared to pre-industrial values, of 8.5 W/m², and RCP6 a radiative forcing in 2100 of 6 W/m².

We can also see from the figure on the right that increases in CO2 cause almost all of the increase from current values. For example, only RCP8.5 has a higher methane (CH4) forcing than today.
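As a rough way to translate the RCP names into concentrations, we can invert the commonly used simplified expression for CO2 radiative forcing, ΔF = 5.35 ln(C/C₀) from Myhre et al 1998, treating all of each pathway’s forcing as if it were CO2 – a sketch, since in reality part of the forcing comes from other agents:

```python
import math

def co2_equivalent_ppm(forcing_wm2, c0_ppm=280.0):
    """Invert dF = 5.35 * ln(C / C0) to get a CO2-equivalent concentration."""
    return c0_ppm * math.exp(forcing_wm2 / 5.35)

for rcp in (2.6, 4.5, 6.0, 8.5):
    print(f"RCP{rcp}: ~{co2_equivalent_ppm(rcp):.0f} ppm CO2-equivalent in 2100")
# RCP6.0 comes out around 860 ppm, close to the 855 ppm CO2e quoted further down for RCP6
```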

Business as usual – RCP 8.5 or RCP 6?

I’ve seen RCP8.5 described as “business as usual” but it seems quite an unlikely scenario. Perhaps we need to dive into this scenario more in another article. In the meantime, part of the description from Riahi et al (2011):

The scenario’s storyline describes a heterogeneous world with continuously increasing global population, resulting in a global population of 12 billion by 2100. Per capita income growth is slow and both internationally as well as regionally there is only little convergence between high and low income countries. Global GDP reaches around 250 trillion US2005$ in 2100.

The slow economic development also implies little progress in terms of efficiency. Combined with the high population growth, this leads to high energy demands. Still, international trade in energy and technology is limited and overall rates of technological progress is modest. The inherent emphasis on greater self-sufficiency of individual countries and regions assumed in the scenario implies a reliance on domestically available resources. Resource availability is not necessarily a constraint but easily accessible conventional oil and gas become relatively scarce in comparison to more difficult to harvest unconventional fuels like tar sands or oil shale.

Given the overall slow rate of technological improvements in low-carbon technologies, the future energy system moves toward coal-intensive technology choices with high GHG emissions. Environmental concerns in the A2 world are locally strong, especially in high and medium income regions. Food security is also a major concern, especially in low-income regions and agricultural productivity increases to feed a steadily increasing population.

Compared to the broader integrated assessment literature, the RCP8.5 represents thus a scenario with high global population and intermediate development in terms of total GDP (Fig. 4).

Per capita income, however, stays at comparatively low levels of about 20,000 US $2005 in the long term (2100), which is considerably below the median of the scenario literature. Another important characteristic of the RCP8.5 scenario is its relatively slow improvement in primary energy intensity of 0.5% per year over the course of the century. This trend reflects the storyline assumption of slow technological change. Energy intensity improvement rates are thus well below historical average (about 1% per year between 1940 and 2000). Compared to the scenario literature RCP8.5 depicts thus a relatively conservative business as usual case with low income, high population and high energy demand due to only modest improvements in energy intensity.
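Those energy-intensity assumptions compound over the century, which is worth making concrete. A quick check of the two improvement rates quoted above (0.5%/yr in RCP8.5 vs the ~1%/yr historical average):

```python
# Energy intensity remaining after 100 years at each improvement rate
rcp85 = 0.995 ** 100      # ~0.61 of today's intensity
historical = 0.99 ** 100  # ~0.37 of today's intensity
print(rcp85 / historical) # ~1.66: the slow-progress assumption leaves intensity ~66% higher by 2100
```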

When I heard the term “business as usual” I’m sure I wasn’t alone in understanding it like this: the world carries on without adopting serious CO2 limiting policies. That is, no international agreements on CO2 reductions, no carbon pricing, etc. And the world continues on its current trajectory of growth and development. When you look at the last 40 years, it has been quite amazing. Why would growth slow, population not follow the pathway it has followed in all countries that have seen rising prosperity, and why would technological innovation and adoption slow? It would be interesting to see a “business as usual” scenario for emissions, CO2 concentrations and radiative forcing that had a better fit to the name.

RCP 6 seems to be a closer fit than RCP 8.5 to the name “business as usual”.

RCP6 is a climate-policy intervention scenario. That is, without explicit policies designed to reduce emissions, radiative forcing would exceed 6.0 W/m² in the year 2100.

However, the degree of GHG emissions mitigation required over the period 2010 to 2060 is small, particularly compared to RCP4.5 and RCP2.6, but also compared to emissions mitigation requirement subsequent to 2060 in RCP6 (Van Vuuren et al., 2011). The IPCC Fourth Assessment Report classified stabilization scenarios into six categories as shown in Table 1. RCP6 scenario falls into the border between the fifth category and the sixth category.

Its global mean long-term, steady-state equilibrium temperature could be expected to rise 4.9° centigrade, assuming a climate sensitivity of 3.0 and its CO2 equivalent concentration could be 855 ppm (Metz et al. 2007).
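That 4.9°C figure can be roughly reproduced from the quoted numbers, assuming the usual logarithmic dependence of forcing on concentration (equilibrium warming ≈ sensitivity × doublings of CO2e over the 280 ppm pre-industrial level) – a back-of-envelope sketch, not the method behind the cited estimate:

```python
import math

sensitivity = 3.0                        # C per doubling of CO2, as quoted
doublings = math.log2(855 / 280.0)       # 855 ppm CO2e vs 280 ppm pre-industrial
print(round(sensitivity * doublings, 1)) # ~4.8 C, consistent with the quoted 4.9 C
```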

Some of the background to RCP 8.5 assumptions is in an earlier paper also by the same lead author – Riahi et al 2007, another freely accessible paper (reference below) which is worth a read, for example:

The task ahead of anticipating the possible developments over a time frame as ‘ridiculously’ long as a century is wrought with difficulties. Particularly, readers of this Journal will have sympathy for the difficulties in trying to capture social and technological changes over such a long time frame. One wonders how Arrhenius’ scenario of the world in 1996 would have looked, perhaps filled with just more of the same of his time—geopolitically, socially, and technologically. Would he have considered that 100 years later:

  • backward and colonially exploited China would be in the process of surpassing the UK’s economic output, eventually even that of all of Europe or the USA?
  • the existence of a highly productive economy within a social welfare state in his home country Sweden would elevate the rural and urban poor to unimaginable levels of personal affluence, consumption, and free time?
  • the complete obsolescence of the dominant technology cluster of the day – coal-fired steam engines?

How he would have factored in the possibility of the emergence of new technologies, especially in view of Lord Kelvin’s sobering ‘conclusion’ of 1895 that “heavier-than-air flying machines are impossible”?

Note on Comments

The Etiquette and About this Blog both explain the commenting policy in this blog. I noted briefly in the Introduction that of course questions about 100 years from now mean some small relaxation of the policy. But, in a large number of previous articles, we have discussed the “greenhouse” effect (just about to death) and so people who question it are welcome to find a relevant article and comment there – for example, The “Greenhouse” Effect Explained in Simple Terms which has many links to related articles. Questions on climate sensitivity, natural variation, and likelihood of projected future temperatures due to emissions are, of course, all still fair game in this series.

But I’ll just delete comments that question the existence of the greenhouse effect. Draconian, no doubt.

References

Emissions Scenarios, IPCC (2000) – free report

A special issue on the RCPs, Detlef P van Vuuren et al, Climatic Change (2011) – free paper

The representative concentration pathways: an overview, Detlef P van Vuuren et al, Climatic Change (2011) – free paper

RCP4.5: a pathway for stabilization of radiative forcing by 2100, Allison M. Thomson et al, Climatic Change (2011) – free paper

An emission pathway for stabilization at 6 Wm−2 radiative forcing,  Toshihiko Masui et al, Climatic Change (2011) – free paper

RCP 8.5—A scenario of comparatively high greenhouse gas emissions, Keywan Riahi et al, Climatic Change (2011) – free paper

Scenarios of long-term socio-economic and environmental development under climate stabilization, Keywan Riahi et al, Technological Forecasting & Social Change (2007) – free paper

Thermal equilibrium of the atmosphere with a given distribution of relative humidity, S Manabe, RT Wetherald, Journal of the Atmospheric Sciences (1967) – free paper

The Great Escape, Health, Wealth and the Origins of Inequality, Angus Deaton, Princeton University Press (2013) – book

Notes

Note 1: Even if we knew future anthropogenic emissions accurately it wouldn’t give us the whole picture. The climate has sources and sinks for CO2 and methane and there is some uncertainty about them, especially how well they will operate in the future. That is, anthropogenic emissions are modified by the feedback of sources and sinks for these emissions.


A long time ago, in About this Blog I wrote:

Opinions
Opinions are often interesting and sometimes entertaining. But what do we learn from opinions? It’s more useful to understand the science behind the subject. What is this particular theory built on? How long has the theory been “established”? What lines of evidence support this theory? What evidence would falsify this theory? What do opposing theories say?

Now I would like to look at impacts of climate change. And so opinions and value judgements are inevitable.

In physics we can say something like “95% of radiation at 667 cm⁻¹ is absorbed within 1m at the surface because of the absorption properties of CO2” and be judged true or false. It’s a number. It’s an equation. And therefore the result is falsifiable – the essence of science. Perhaps in some cases all the data is not in, or the formula is not yet clear, but this can be noted and accepted. There is evidence in favor or against, or a mix of evidence.
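As an aside on that example: in Beer-Lambert terms, “95% absorbed within 1m” corresponds to an optical depth of about 3 per metre at that wavenumber – a one-line check, not a line-by-line radiative calculation:

```python
import math

# Transmitted fraction through optical depth tau: T = exp(-tau)
tau = -math.log(0.05)  # 95% absorbed -> 5% transmitted
print(round(tau, 2))   # ~3.0 per metre at 667 cm^-1
```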

As we build equations into complex climate models, judgements become unavoidable. For example, “convection is modeled as a sub-grid parameterization therefore..”. Where the conclusion following “therefore” is the judgement. We could call it an opinion. We could call it an expert opinion. We could call it science if the result is falsifiable. But it starts to get a bit more “blurry” – at some point we move from a region of settled science to a region of less-settled science.

And once we consider the impacts in 2100 it seems that certainty and falsifiability must be abandoned. “Blurry” is the best case.


Less than a year ago, listening to America and the New Global Economy by Timothy Taylor (via audible.com), I remember him saying something like “the economic cost of climate change was all lumped into a fat tail – if the temperature change was on the higher side”. Sorry for my inaccurate memory (a downside of audible.com vs a real book). Well, it sparked my interest in another part of the climate journey.

I’ve been reading IPCC Working Group II (wgII) – some of the “TAR” (= third assessment report) from 2001 for background and AR5, the latest IPCC report from 2014. Some of the impacts also show up in Working Group I which is about the physical climate science, and the IPCC Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation from 2012, known as SREX (Special Report on Extremes). These are all available at the IPCC website.

The first chapter of the TAR, Working Group II says:

The world community faces many risks from climate change. Clearly it is important to understand the nature of those risks, where natural and human systems are likely to be most vulnerable, and what may be achieved by adaptive responses. To understand better the potential impacts and associated dangers of global climate change, Working Group II of the Intergovernmental Panel on Climate Change (IPCC) offers this Third Assessment Report (TAR) on the state of knowledge concerning the sensitivity, adaptability, and vulnerability of physical, ecological, and social systems to climate change.

A couple of common complaints in the blogosphere that I’ve noticed are:

  • “all the impacts are supposed to be negative but there are a lot of positives from warming”
  • “CO2 will increase plant growth so we’ll be better off”

Within the field of papers and IPCC reports it’s clear that CO2 increasing plant growth is not ignored. Likewise, there are expected to be winners and losers (often, but definitely not exclusively, geographically distributed), even though the IPCC summarizes the expected overall effect as negative.

Of course, there is a highly entertaining field of “recycled press releases about the imminent catastrophe of climate change” which I’m sure ignores any positives or tradeoffs. Even in what could charitably be called “respected media outlets” there seem to be few correspondents with basic scientific literacy. Not even the ability to add up the numbers on an electricity bill or distinguish between the press release of a company planning to get wonderful results in 2025 vs today’s reality.

Anyway, entertaining as it is to shoot fish in a barrel, we will try to stay away from discussing newsotainment and stay with the scientific literature and IPCC assessments. Inevitably, we’ll stray a little.

I haven’t tried to do a comprehensive summary of the issues believed to impact humanity, but here are some:

  • sea level rise
  • heatwaves
  • droughts
  • floods
  • more powerful cyclones and storms
  • food production
  • ocean acidification
  • extinction of animal and plant species
  • more pests (added, thanks Tom, corrected thanks DeWitt)
  • disease (added, thanks Tom)

Possibly I’ve missed some.

Covering the subject is not easy but it’s an interesting field.


In Planck, Stefan-Boltzmann, Kirchhoff and LTE one of our commenters asked a question about emissivity. The first part of that article is worth reading as a primer in the basics for this article. I don’t want to repeat all the basics, except to say that if a body is a “black body” it emits radiation according to a simple formula. This is the maximum that any body can emit. In practice, a body will emit less.

The ratio between the actual emission and the black body emission is the emissivity. It has a value between 0 and 1.

The question that this article tries to help readers understand is the origin and use of the emissivity term in the Stefan-Boltzmann equation:

E = ε’σT⁴

where E = total flux, ε’ = “effective emissivity” (a value between 0 and 1), σ = the Stefan-Boltzmann constant (5.67×10⁻⁸ W/m²K⁴) and T = temperature in Kelvin (i.e., absolute temperature).

The term ε’ in the Stefan-Boltzmann equation is not really a constant, but it is often treated as one in articles related to climate. Is this valid? Not valid? Why is it not a constant?

There is a constant material property called emissivity, but it is a function of wavelength. For example, if we found that the emissivity of a body at 10.15 μm was 0.55 then this would be the same regardless of whether the body was in Antarctica (around 233K = -40ºC), the tropics (around 303K = 30ºC) or at the temperature of the sun’s surface (5800K). How do we know this? From experimental work over more than a century.

Hopefully some graphs will illuminate the difference between emissivity the material property (that doesn’t change), and the “effective emissivity” (that does change) we find in the Stefan-Boltzmann equation. In each graph you can see:

  • (top) the blackbody curve
  • (middle) the emissivity of this fictional material as a function of wavelength
  • (bottom) the actual emitted radiation due to the emissivity – and a calculation of the “effective emissivity”.

The calculation of “effective emissivity” = total actual emitted radiation / total blackbody emitted radiation (note 1).
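Here is a minimal numerical sketch of that calculation. The step-function spectral emissivity below is made up (it is not the fictional material plotted in the figures, so the numbers differ), but it shows the mechanism: the material property ε(λ) is fixed, while the “effective emissivity” changes with temperature as the blackbody curve slides across the spectrum:

```python
import numpy as np

H_PLANCK, C_LIGHT, K_B = 6.626e-34, 2.998e8, 1.381e-23

def planck_flux(wl, T):
    """Hemispheric blackbody spectral flux (W/m^2 per metre of wavelength) at temperature T."""
    with np.errstate(over="ignore"):  # exp overflows at very short wavelengths; flux -> 0 there
        return (2 * np.pi * H_PLANCK * C_LIGHT**2 / wl**5) / np.expm1(H_PLANCK * C_LIGHT / (wl * K_B * T))

wl = np.linspace(0.01e-6, 50e-6, 500_000)  # 0.01-50 um grid, as in note 1
dwl = wl[1] - wl[0]

def emissivity(wl):
    """Made-up material property: 0.8 below 10 um, 0.3 above - fixed, regardless of T."""
    return np.where(wl < 10e-6, 0.8, 0.3)

def effective_emissivity(T):
    B = planck_flux(wl, T)
    return np.sum(emissivity(wl) * B) / np.sum(B)

# Sanity check: total blackbody flux at 288K over this range is ~376 W/m^2 (note 1), vs sigma*T^4 = 390
print(round(np.sum(planck_flux(wl, 288)) * dwl))

for T in (288, 300, 400, 500, 5800):
    print(T, round(effective_emissivity(T), 2))  # same material, different effective emissivity
```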

At 288K – effective emissivity = 0.49

At 300K – effective emissivity = 0.49

At 400K – effective emissivity = 0.44

At 500K – effective emissivity = 0.35

At 5800K, that is solar surface temperature – effective emissivity = 0.00 (note the scale on the bottom graph is completely different from the scale of the top graph)

Hopefully this helps people trying to understand what emissivity really relates to in the Stefan-Boltzmann equation. It is not a constant except in rare cases. Treating it as a constant over a range of temperatures is a reasonable approximation (depending on the accuracy you want), but change the temperature “too much” and your “effective emissivity” can change massively.

As always with approximations and useful formulas, you need to understand the basis behind them to know when you can and can’t use them.

Any questions, just ask in the comments.

Note 1 – The flux was calculated for the wavelength range of 0.01 μm to 50 μm. If you use the Stefan-Boltzmann equation for 288K you will get E = 5.67×10⁻⁸ × 288⁴ ≈ 390 W/m². The reason my graph shows 376 W/m² is that I don’t include the wavelength range from 50 μm to infinity. It doesn’t change the practical results you see.
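As a quick check on note 1, the short sketch below integrates the blackbody curve at 288K over 0.01–50 μm and compares the result with the full Stefan-Boltzmann value. (The variable names are mine; the numbers are the ones quoted in the note.)

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)
SIGMA = 5.67e-8                            # Stefan-Boltzmann constant, W/m^2 K^4
T = 288.0

lam = np.linspace(0.01e-6, 50e-6, 200_001)   # 0.01-50 um, as in the note
with np.errstate(over="ignore"):             # exp overflows at very short wavelengths
    M = 2 * np.pi * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

band_flux = np.sum(0.5 * (M[1:] + M[:-1]) * np.diff(lam))  # trapezoidal integral

print(f"integral over 0.01-50 um: {band_flux:.0f} W/m^2")    # ~376 W/m^2
print(f"sigma * T^4             : {SIGMA * T**4:.0f} W/m^2") # ~390 W/m^2
```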

Read Full Post »

About 100 years ago I wrote Renewables XVII – Demand Management 1 and promised to examine the subject more in a subsequent article. As with many of my blog promises (“non-core promises”) I have failed to do anything in what could be even charitably described as a “timely manner”. I got diverted by my startup.

However, in a roundabout way I came across some articles that help illuminate the energy subject better than I could. While travelling I listened via audible.com to two great books by Timothy Taylor – America and the New Global Economy and A History of the U.S. Economy in the 20th Century. It turns out that Timothy Taylor is the editor of the Journal of Economic Perspectives (and also writes a blog – the Conversable Economist – which is great quality). This journal has recently made its articles open access back to the dawn of time and I downloaded a few years of the journal.

Digressing on my digression, in one of those two books Taylor made an interesting comment about economists’ views on climate change, which sparked my interest in studying the IPCC working groups 2 & 3 – impacts and mitigation. Possibly some articles to come in that arena, but no campaign promises. It’s a big subject.

The Journal of Economic Perspectives, Volume 26, Number 1, Winter 2012 contains a number of articles on energy, including Creating a Smarter U.S. Electricity Grid, Paul L. Joskow. I recommend reading the whole paper – well-written and accessible. He comments on some of the papers that I had already discovered. A few selected comments:

Smart grid investment on the high voltage network has only a limited ability to increase the effective capacity of transmission networks. A large increase in transmission capacity, especially if it involves accessing generating capacity at new locations remote from load centers, requires building new physical transmission capacity. However, building major new transmission lines is extremely difficult. The U.S. transmission system was not built to facilitate large movements between interconnected control areas or over long distances; rather, it was built to balance supply and demand reliably within individual utility (or holding company) service areas. While the capacity of interconnections have expanded over time, the bulk of the price differences in Table 1 are due to the fact that there is insufficient transmission capacity to move large amounts of power from, for example, Chicago to New York City. The regulatory process that determines how high voltage transmission capacity (and smart grid investments in the transmission network) is sited and paid for in regulated transmission prices is of byzantine complexity..

The U.S. Department of Energy has supported about 70 smart grid projects involving local distribution systems on a roughly 50/50 cost sharing basis, with details available at 〈http://www.smartgrid.gov/recovery_act/tracking_deployment/distribution〉. However, a full transformation of local distribution systems will take many years and a lot of capital investment. Are the benefits likely to exceed the costs? In the only comprehensive and publicly available effort at cost–benefit analysis in this area, the Electric Power Research Institute (2011a) estimates that deployment (to about 55 percent of distribution feeders) would cost between $120–$170 billion, and claims that the benefits in terms of greater reliability of the electricity supply would be about $600 billion (both in net present value). Unfortunately, I found the benefit analyses to be speculative and impossible to reproduce given the information made available in EPRI’s report..

And on demand management programs’ impacts on peak demand:

The idea of moving from time-invariant electricity prices to “peak-load” pricing where prices are more closely tied to variations in marginal cost has been around for at least 50 years..

A large number of U.S. utilities began offering time-of-use and interruptible pricing options for large commercial and industrial customers during the 1980s, either as a pilot program or as an option. More recently, a number of states have introduced pilot programs for residential (household) consumers that install smart meters of various kinds, charge prices that vary with wholesale prices, and observe demand..

Faruqui and Sergici (2010) summarize the results of 15 earlier studies of various forms of dynamic pricing, including time-of-use pricing, peak pricing, and real-time pricing.. Faruqui (2011) summarizes the reduction in peak load from 109 dynamic pricing studies, including those that use time-of-use pricing, peak pricing, and full real-time pricing, and finds that higher peak period prices always lead to a reduction in peak demand. However, the reported price responses across these studies vary by an order of magnitude, and the factors that lead to the variability of responses have been subject to very limited analysis..

Accordingly, it seems to me that a sensible deployment strategy is to combine a long-run plan for rolling out smart-grid investments with well-designed pilots and experiments. Using randomized trials of smart grid technology and pricing, with a robust set of treatments and the “rest of the distribution grid” as the control, would allow much more confidence in estimates of demand response, meter and grid costs, reliability and power quality benefits, and other key outcomes. For example, Faruqui’s (2011b) report on the peak-period price responses for 109 pilot programs displays responses between 5 to 50 percent of peak demand. An order-of-magnitude difference in measured price responses is just not good enough to do convincing cost–benefit analyses, especially with the other issues noted above. In turn, the information that emerges from these studies could be used to make mid-course corrections in the deployment strategy. Given the large investments contemplated in smart meters and complementary investments, along with the diverse uncertainties that we now face, rushing to deploy a particular set of technologies as quickly as possible is in my view a mistake.

What I observed from reading a lot of papers back when I had promised a follow-up article (on demand management) was lots of fluff and a small amount of substance. As Joskow says, there is a wide range of potential outcomes, and not much in the way of large-scale data from which to draw real conclusions.

In that same journal issue linked above you can also read other papers, including: Prospects for Nuclear Power, Lucas W. Davis; The Private and Public Economics of Renewable Electricity Generation, Severin Borenstein. Both of these papers are excellent.

Reading the Joskow paper in JEP I thought his name was familiar and it turns out I already had three of his papers. From one of them:

This paper makes a very simple point regarding the proper methods for comparing the economic value of intermittent generating technologies (e.g. wind and solar) with the economic value of traditional dispatchable generating technologies (e.g. CCGT, coal, nuclear). I show that the prevailing approach that relies on comparisons of the “levelized cost” per MWh supplied by different generating technologies, or any other measure of total life-cycle production costs per MWh supplied, is seriously flawed..

[Emphasis added]

For people interested in understanding the subject of energy vs CO2 emissions, these are valuable and relatively easy-to-read papers.
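Joskow’s point about levelized cost is easy to see with a toy example. In the sketch below, all numbers are invented for illustration: two plants deliver the same energy over a stylized day, so a levelized-cost-per-MWh comparison would treat them identically, yet the market value of what they sell differs because one produces nothing during the high-price peak hours.

```python
import numpy as np

# Stylized 24-hour day: low off-peak price, high peak price (invented numbers)
price = np.array([20.0] * 16 + [60.0] * 8)   # $/MWh

dispatchable = np.full(24, 1.0)              # 1 MW flat all day = 24 MWh
wind = np.array([1.5] * 16 + [0.0] * 8)      # also 24 MWh, but none at peak

for name, output in (("dispatchable", dispatchable), ("wind", wind)):
    energy = output.sum()                    # MWh delivered
    revenue = (price * output).sum()         # $ earned at market prices
    print(f"{name:12s} {energy:.0f} MWh, value = ${revenue / energy:.2f}/MWh")
```

Equal energy and (by assumption) equal levelized cost, but the dispatchable plant earns about $33/MWh against the wind plant’s $20/MWh – which is why Joskow argues that levelized cost alone cannot rank intermittent against dispatchable technologies.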

Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases

Renewables VI – Report says.. 100% Renewables by 2030 or 2050

Renewables VII – Feasibility and Reality – Geothermal example

Renewables VIII – Transmission Costs And Outsourcing Renewable Generation

Renewables IX – Onshore Wind Costs

Renewables X – Nationalism vs Inter-Nationalism

Renewables XI – Cost of Gas Plants vs Wind Farms

Renewables XII – Windpower as Baseload and SuperGrids

Renewables XIII – One of Wind’s Hidden Costs

Renewables XIV – Minimized Cost of 99.9% Renewable Study

Renewables XV – Offshore Wind Costs

Renewables XVI – JP Morgan advises

Renewables XVII – Demand Management 1

Renewables XVIII – Demand Management & Levelized Cost

Renewables XIX – Behind the Executive Summary and Reality vs Dreams

Read Full Post »
