
In Impacts – II – GHG Emissions Projections: SRES and RCP we looked at projections of emissions under various scenarios with the resulting CO2 (and other GHG) concentrations and resulting radiative forcing.

Why do we need these scenarios? Because even if climate models were perfect and could accurately calculate the temperature 100 years from now, we wouldn’t know how much “anthropogenic CO2” (and other GHGs) would have been emitted by that time. The scenarios allow climate modelers to produce temperature (and other climate variable) projections on the basis of each of these scenarios.

The IPCC AR5 (fifth assessment report) from 2013 says (chapter 12, p. 1031):

Global mean temperatures will continue to rise over the 21st century if greenhouse gas (GHG) emissions continue unabated.

Under the assumptions of the concentration-driven RCPs, global mean surface temperatures for 2081–2100, relative to 1986–2005 will likely be in the 5 to 95% range of the CMIP5 models:

  • 0.3°C to 1.7°C (RCP2.6)
  • 1.1°C to 2.6°C (RCP4.5)
  • 1.4°C to 3.1°C (RCP6.0)
  • 2.6°C to 4.8°C (RCP8.5)

Global temperatures averaged over the period 2081–2100 are projected to likely exceed 1.5°C above 1850-1900 for RCP4.5, RCP6.0 and RCP8.5 (high confidence), are likely to exceed 2°C above 1850-1900 for RCP6.0 and RCP8.5 (high confidence) and are more likely than not to exceed 2°C for RCP4.5 (medium confidence). Temperature change above 2°C under RCP2.6 is unlikely (medium confidence). Warming above 4°C by 2081–2100 is unlikely in all RCPs (high confidence) except for RCP8.5, where it is about as likely as not (medium confidence).

I commented in Part II that RCP8.5 seemed to be a scenario that didn’t match up with the last 40-50 years of development. Of course, the various scenario developers give their caveats, for example, Riahi et al 2007:

Given the large number of variables and their interdependencies, we are of the opinion that it is impossible to assign objective likelihoods or probabilities to emissions scenarios. We have also not attempted to assign any subjective likelihoods to the scenarios either. The purpose of the scenarios presented in this Special Issue is, rather, to span the range of uncertainty without an assessment of likely, preferable, or desirable future developments..

Readers should exercise their own judgment on the plausibility of above scenario ‘storylines’..

To me RCP6.0 seems a more likely future (compared with RCP8.5) in a world that doesn't make any significant attempt to tackle CO2 emissions. That is, no major change in climate policy compared with today's world, but similar economic and population development (note 1).

Here is the graph of projected temperature anomalies for the different scenarios:

From AR5, chapter 12

Figure 1

That graph is hard to make out for 2100, so here is the table of corresponding data. I highlighted RCP6.0 in 2100 – you can click to enlarge the table:

Figure 2 – AR5, chapter 12, Table 12.2 – Click to expand

Probabilities and Lists

The table above has a "1 standard deviation" column and a 5%-95% range. The graph (which has the same source data) has shading to indicate the 5%-95% range of models for each RCP scenario.

These are not real probability distributions. That is, the 5-95% range for RCP6.0 doesn't equate to: "there is a 90% probability that the average temperature in 2081-2100 will be 1.4-3.1ºC higher than the 1986-2005 average".

A number of climate models are used to produce simulations and the results from these “ensembles” are sometimes pressed into “probability service”. For some concept background on ensembles read Ensemble Forecasting.
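To make the mechanics concrete, here is a minimal sketch (in Python, with invented warming values rather than actual CMIP5 output) of how such a model range is computed. The percentiles simply summarize the spread of whichever models happen to be in the ensemble:

```python
import numpy as np

# Hypothetical end-of-century warming values, one per model, for a single
# scenario. These numbers are made up for illustration, not CMIP5 results.
warming_2081_2100 = np.array([1.6, 1.9, 2.1, 2.2, 2.4, 2.5, 2.7, 2.9, 3.0, 3.3])

mean = warming_2081_2100.mean()
p05, p95 = np.percentile(warming_2081_2100, [5, 95])
print(f"ensemble mean = {mean:.1f} C, 5-95% of models = {p05:.1f} to {p95:.1f} C")

# The band describes the spread of the models we happen to have (an "ensemble
# of opportunity"), not a calibrated 90% probability for the real climate.
```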

Here is IPCC AR5 chapter 12:

Ensembles like CMIP5 do not represent a systematically sampled family of models but rely on self-selection by the modelling groups.

This opportunistic nature of MMEs [multi-model ensembles] has been discussed, for example, in Tebaldi and Knutti (2007) and Knutti et al. (2010a). These ensembles are therefore not designed to explore uncertainty in a coordinated manner, and the range of their results cannot be straightforwardly interpreted as an exhaustive range of plausible outcomes, even if some studies have shown how they appear to behave as well calibrated probabilistic forecasts for some large-scale quantities. Other studies have argued instead that the tail of distributions is by construction undersampled.

In general, the difficulty in producing quantitative estimates of uncertainty based on multiple model output originates in their peculiarities as a statistical sample, neither random nor systematic, with possible dependencies among the members and of spurious nature, that is, often counting among their members models with different degrees of complexities (different number of processes explicitly represented or parameterized) even within the category of general circulation models..

..In summary, there does not exist at present a single agreed on and robust formal methodology to deliver uncertainty quantification estimates of future changes in all climate variables. As a consequence, in this chapter, statements using the calibrated uncertainty language are a result of the expert judgement of the authors, combining assessed literature results with an evaluation of models demonstrated ability (or lack thereof) in simulating the relevant processes (see Chapter 9) and model consensus (or lack thereof) over future projections. In some cases when a significant relation is detected between model performance and reliability of its future projections, some models (or a particular parametric configuration) may be excluded but in general it remains an open research question to find significant connections of this kind that justify some form of weighting across the ensemble of models and produce aggregated future projections that are significantly different from straightforward one model–one vote ensemble results. Therefore, most of the analyses performed for this chapter make use of all available models in the ensembles, with equal weight given to each of them unless otherwise stated.

And from one of the papers cited in that section of chapter 12, Jackson et al 2008:

In global climate models (GCMs), unresolved physical processes are included through simplified representations referred to as parameterizations.

Parameterizations typically contain one or more adjustable phenomenological parameters. Parameter values can be estimated directly from theory or observations or by “tuning” the models by comparing model simulations to the climate record. Because of the large number of parameters in comprehensive GCMs, a thorough tuning effort that includes interactions between multiple parameters can be very computationally expensive. Models may have compensating errors, where errors in one parameterization compensate for errors in other parameterizations to produce a realistic climate simulation (Wang 2007; Golaz et al. 2007; Min et al. 2007; Murphy et al. 2007).

The risk is that, when moving to a new climate regime (e.g., increased greenhouse gases), the errors may no longer compensate. This leads to uncertainty in climate change predictions. The known range of uncertainty of many parameters allows a wide variance of the resulting simulated climate (Murphy et al. 2004; Stainforth et al. 2005; M. Collins et al. 2006). The persistent scatter in the sensitivities of models from different modeling groups, despite the effort represented by the approximately four generations of modeling improvements, suggests that uncertainty in climate prediction may depend on underconstrained details and that we should not expect convergence anytime soon.
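The compensating-errors risk can be illustrated with a toy calculation – entirely hypothetical numbers and functions, not any real parameterization scheme. Several parameter pairs reproduce the same present-day target equally well, yet respond very differently once the forcing changes:

```python
def present_day_albedo(a, b):
    # toy "tuning target": many (a, b) pairs reproduce the observed value of 0.30
    return 0.15 * a + 0.15 * b

def warming_2xco2(a, b, forcing=3.7):
    # toy feedback parameter (W/m2 per K) that depends on a and b differently
    feedback = 1.0 + 0.8 * a - 0.5 * b
    return forcing / feedback

for a, b in [(1.0, 1.0), (0.5, 1.5), (1.5, 0.5)]:
    print(f"a={a}, b={b}: present-day albedo = {present_day_albedo(a, b):.2f}, "
          f"dT(2xCO2) = {warming_2xco2(a, b):.2f} K")
```

All three parameter sets look equally good against the present-day target; the spread only appears in the new regime, which is exactly the risk described above.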

Stainforth et al 2005 (referenced in the quote above) tried much larger ensembles of coarser resolution climate models, and was discussed in the comments of Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes. Rowlands et al 2012 is similar in approach and was discussed in Natural Variability and Chaos – Five – Why Should Observations match Models?

The way I read the IPCC reports and various papers is that clearly the projections are not a probability distribution. Then the data inevitably gets used as a de facto probability distribution.

Conclusion

“All models are wrong but some are useful”, as George Box said – actually in quite an unrelated field (i.e., not climate). But it’s a good saying.

Many people who describe themselves as “lukewarmers” believe that climate sensitivity as characterized by the IPCC is too high and the real climate has a lower sensitivity. I have no idea.

Models may be wrong, but I don’t have an alternative model to provide. And therefore, given that they represent climate better than any current alternative, their results are useful.

We can’t currently create a real probability distribution from a set of temperature prediction results (assuming a given emissions scenario).

How useful is it to know that under a scenario like RCP6.0 the average global temperature increase in 2100 has been simulated as variously 1ºC, 2ºC, 3ºC, 4ºC? (Note, I haven’t checked the CMIP5 simulations to get each value.) And the tropics will vary less, land more? As we dig into more details we will attempt to look at how reliable regional and seasonal temperature anomalies might be compared with the overall number. Likewise for rainfall and other important climate variables.

I do find it useful to keep the idea of a set of possible numbers with no probability assigned. Then at some stage we can say something like, “if this RCP scenario turns out to be correct and the global average surface temperature actually increases by 3ºC by 2100, we know the following are reasonable assumptions … but we currently can’t make any predictions about these other values”.

References

Long-term Climate Change: Projections, Commitments and Irreversibility, M Collins et al (2013) – In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change

Scenarios of long-term socio-economic and environmental development under climate stabilization, Keywan Riahi et al, Technological Forecasting & Social Change (2007) – free paper

Error Reduction and Convergence in Climate Prediction, Charles S Jackson et al, Journal of Climate (2008) – free paper

Notes

Note 1: As explored a little in the last article, RCP6.0 does include some changes to climate policy but it seems they are not major. I believe a very useful scenario for exploring impact assessments would be the population and development path of RCP6.0 (let’s call it RCP6.0A) without any climate policies.

For reasons of “scenario parsimony” this interesting pathway escapes attention.


In Part II we looked at various scenarios for emissions. One important determinant is how the world population will change through this century, and after a few comments on that topic it seemed worth digging a little deeper.

Here is Lutz, Sanderson & Scherbov, Nature (2001):

The median value of our projections reaches a peak around 2070 at 9.0 billion people and then slowly decreases. In 2100, the median value of our projections is 8.4 billion people with the 80 per cent prediction interval bounded by 5.6 and 12.1 billion.

From Lutz 2001

Figure 1 – Click to enlarge

This paper is behind a paywall but Lutz references the 1996 book he edited for assumptions, which is freely available (link below).

In it the authors comment, p. 22:

Some users clearly want population figures for the year 2100 and beyond. Should the demographer disappoint such expectations and leave it to others with less expertise to produce them? The answer given in this study is no. But as discussed below, we make a clear distinction between what we call projections up to 2030-2050 and everything beyond that time, which we term extensions for illustrative purposes.

[Emphasis added]

And then p.32:

Sanderson (1995) shows that it is impossible to produce “objective” confidence ranges for future population projections. Subjective confidence intervals are the best we can ever attain because assumptions are always involved.

Here are some more recent views.

Gerland et al 2014 – Gerland is from the Population Division of the UN:

The United Nations recently released population projections based on data until 2012 and a Bayesian probabilistic methodology. Analysis of these data reveals that, contrary to previous literature, world population is unlikely to stop growing this century. There is an 80% probability that world population, now 7.2 billion, will increase to between 9.6 and 12.3 billion in 2100. This uncertainty is much smaller than the range from the traditional UN high and low variants. Much of the increase is expected to happen in Africa, in part due to higher fertility and a recent slowdown in the pace of fertility decline..

..Among the most robust empirical findings in the literature on fertility transitions are that higher contraceptive use and higher female education are associated with faster fertility decline. These suggest that the projected rapid population growth could be moderated by greater investments in family planning programs to satisfy the unmet need for contraception, and in girls’ education. It should be noted, however, that the UN projections are based on an implicit assumption of a continuation of existing policies, but an intensification of current investments would be required for faster changes to occur

Wolfgang Lutz & Samir KC (2010). Lutz seems popular in this field:

The total size of the world population is likely to increase from its current 7 billion to 8–10 billion by 2050. This uncertainty is because of unknown future fertility and mortality trends in different parts of the world. But the young age structure of the population and the fact that in much of Africa and Western Asia, fertility is still very high makes an increase by at least one more billion almost certain. Virtually, all the increase will happen in the developing world. For the second half of the century, population stabilization and the onset of a decline are likely..

Although the paper focuses only up to 2050 rather than 2100, it does include a graph of probabilistic expectations to 2100, and has some interesting commentary on how different forecasting groups deal with uncertainty, how women’s education plays a huge role in reducing fertility, and many other stories, for example:

The Demographic and Health Survey for Ethiopia, for instance, shows that women without any formal education have on average six children, whereas those with secondary education have only two (see http://www.measuredhs.com). Significant differentials can be found in most populations of all cultural traditions. Only in a few modern societies does the strongly negative association give way to a U-shaped pattern in which the most educated women have a somewhat higher fertility than those with intermediate education. But globally, the education differentials are so pervasive that education may well be called the single most important observable source of population heterogeneity after age and sex (Lutz et al. 1999). There are good reasons to assume that during the course of a demographic transition the fact that higher education leads to lower fertility is a true causal mechanism, where education facilitates better access to and information about family planning and most importantly leads to a change in attitude in which ‘quantity’ of children is replaced by ‘quality’, i.e. couples want to have fewer children with better life chances..

Lee 2011, another very interesting paper, makes this comment:

The U.N. projections assume that fertility will slowly converge toward replacement level (2.1 births per woman) by 2100

Lutz’s book had a similar hint that many demographers assume that somehow societies en masse will converge towards a steady state. Lee also comments that probability treatments for “low”, “medium” and “high” scenarios are not very realistic, because the methods used assume a correlation between different countries that doesn’t hold in practice. Lutz makes similar points. Here is Lee:

Special issues arise in constructing consistent probability intervals for individual countries, for regions, and for the world, because account must be taken of the positive or negative correlations among the country forecast errors within regions and across regions. Since error correlation is typically positive but less than 1.0, country errors tend to cancel under aggregation, and the proportional error bounds for the world population are far narrower than for individual countries. The NRC study (20) found that the average absolute country error was 21% while the average global error was only 3%. When the High, Medium and Low scenario approach is used, there is no cancellation of error under aggregation, so the probability coverage at different levels of aggregation cannot be handled consistently. An ongoing research collaboration between the U.N. Population Division and a team led by Raftery is developing new and very promising statistical methods for handling uncertainty in future forecasts.
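Lee's aggregation point can be shown with a quick simulation (illustrative assumptions only: equal-sized countries, a single common error correlation, and a 21% country-level error – not the NRC study's actual data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_sims, sigma = 150, 5000, 0.21   # 21% relative error per country

for rho in (0.0, 0.2, 0.5, 0.9):
    # correlation matrix: 1 on the diagonal, rho everywhere else
    corr = (1 - rho) * np.eye(n_countries) + rho
    errors = rng.multivariate_normal(np.zeros(n_countries), sigma**2 * corr, size=n_sims)
    world_error = errors.mean(axis=1)          # equal-sized countries for simplicity
    print(f"rho={rho:.1f}: typical country error ~{sigma:.0%}, "
          f"typical world error ~{np.abs(world_error).mean():.1%}")
```

With independent errors the world total is far more accurate than any single country; as the correlation rises towards 1 the cancellation disappears.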

And then on UN projections:

One might quibble with this or that assumption, but the UN projections have had an impressive record of success in the past, particularly at the global level, and I expect that to continue in the future. To a remarkable degree, the UN has sought out expert advice and experimented with cutting edge forecasting techniques, while maintaining consistency in projections. But in forecasting, errors are inevitable, and sound decision making requires that the likelihood of errors be taken into account. In this respect, there is much room for improvement in the UN projections and indeed in all projections by government statistical offices.

This comment looks like gentle, oblique academic criticism disguised as praise, but it’s hard to tell.

Conclusion

I don’t have a conclusion. I thought it would be interesting to find some demographic experts and show their views on future population trends. The future is always hard to predict – although in demography the next 20 years are usually easy to predict, short of global plagues and famines.

It does seem hard to have much idea about the population in 2100, but the difference between a population of 8bn and 11bn will have a large impact on CO2 emissions (without very significant CO2 mitigation policies).

References

The end of world population growth, Wolfgang Lutz, Warren Sanderson & Sergei Scherbov, Nature (2001) – paywall paper

The future population of the world – what can we assume?, edited Wolfgang Lutz, Earthscan Publications (1996) – freely available book

World Population Stabilization Unlikely This Century, Patrick Gerland et al, Science (2014) – free paper

Dimensions of global population projections: what do we know about future population trends and structures? Wolfgang Lutz & Samir KC, Phil. Trans. R. Soc. B (2010)

The Outlook for Population Growth, Ronald Lee, Science (2011) – free paper

In one of the iconic climate model tests, CO2 is doubled from a pre-industrial level of 280ppm to 560ppm “overnight” and we find the new steady state surface temperature. The change in CO2 is an input to the climate model, also known as a “forcing” because it is from outside. That is, humans create more CO2 from generating electricity, driving automobiles and other activities – this affects the climate and the climate responds.

These experiments with simple climate models were first done with 1d radiative-convective models in the 1960s. For example, Manabe & Wetherald 1967 found a 2.3ºC surface temperature increase with constant relative humidity and 1.3ºC with constant absolute humidity (and for many reasons constant relative humidity seems more likely to be closer to reality than constant absolute humidity).

In other experiments, especially more recently, more complex GCMs simulate 100 years with the CO2 concentration being gradually increased, in line with projections about future emissions – and we see what happens to temperature with time.

There are also other GHGs (“greenhouse” gases / radiatively-active gases) in the atmosphere that are changing due to human activity – especially methane (CH4) and nitrous oxide (N2O). And of course, the most important GHG is water vapor, but changes in water vapor concentration are a climate feedback – that is, changes in water vapor result from temperature (and circulation) changes.

And there are aerosols, some internally generated within the climate and others emitted by human activity. These also affect the climate in a number of ways.

We don’t know what future anthropogenic emissions will be. What will humans do? Build lots more coal-fired power stations to meet the energy demand of the future? Run the entire world’s power grid from wind and solar by 2040? Finally invent practical nuclear fusion? How many people will there be?

So for this we need some scenarios of future human activity (note 1).

Scenarios – SRES and RCP

SRES was published in 2000:

In response to a 1994 evaluation of the earlier IPCC IS92 emissions scenarios, the 1996 Plenary of the IPCC requested this Special Report on Emissions Scenarios (SRES) (see Appendix I for the Terms of Reference). This report was accepted by the Working Group III (WGIII) plenary session in March 2000. The long-term nature and uncertainty of climate change and its driving forces require scenarios that extend to the end of the 21st century. This Report describes the new scenarios and how they were developed.

The SRES scenarios cover a wide range of the main driving forces of future emissions, from demographic to technological and economic developments. As required by the Terms of Reference, none of the scenarios in the set includes any future policies that explicitly address climate change, although all scenarios necessarily encompass various policies of other types.

The set of SRES emissions scenarios is based on an extensive assessment of the literature, six alternative modeling approaches, and an “open process” that solicited wide participation and feedback from many groups and individuals. The SRES scenarios include the range of emissions of all relevant species of greenhouse gases (GHGs) and sulfur and their driving forces..

..A set of scenarios was developed to represent the range of driving forces and emissions in the scenario literature so as to reflect current understanding and knowledge about underlying uncertainties. They exclude only outlying “surprise” or “disaster” scenarios in the literature. Any scenario necessarily includes subjective elements and is open to various interpretations. Preferences for the scenarios presented here vary among users. No judgment is offered in this Report as to the preference for any of the scenarios and they are not assigned probabilities of occurrence, neither must they be interpreted as policy recommendations..

..By 2100 the world will have changed in ways that are difficult to imagine – as difficult as it would have been at the end of the 19th century to imagine the changes of the 100 years since. Each storyline assumes a distinctly different direction for future developments, such that the four storylines differ in increasingly irreversible ways. Together they describe divergent futures that encompass a significant portion of the underlying uncertainties in the main driving forces. They cover a wide range of key “future” characteristics such as demographic change, economic development, and technological change. For this reason, their plausibility or feasibility should not be considered solely on the basis of an extrapolation of current economic, technological, and social trends.

The RCPs were in part a new version of the same idea as SRES and were published in 2011. My understanding is that the Representative Concentration Pathways were designed around final values of radiative forcing in 2100 spanning the range considered in the modeling literature, and you can see this in the name of each RCP.

From A special issue on the RCPs, van Vuuren et al (2011)

By design, the RCPs, as a set, cover the range of radiative forcing levels examined in the open literature and contain relevant information for climate model runs.

[Emphasis added]

From The representative concentration pathways: an overview, van Vuuren et al (2011)

This paper summarizes the development process and main characteristics of the Representative Concentration Pathways (RCPs), a set of four new pathways developed for the climate modeling community as a basis for long-term and near-term modeling experiments.

The four RCPs together span the range of year 2100 radiative forcing values found in the open literature, i.e. from 2.6 to 8.5 W/m². The RCPs are the product of an innovative collaboration between integrated assessment modelers, climate modelers, terrestrial ecosystem modelers and emission inventory experts. The resulting product forms a comprehensive data set with high spatial and sectoral resolutions for the period extending to 2100..

..The RCPs are named according to radiative forcing target level for 2100. The radiative forcing estimates are based on the forcing of greenhouse gases and other forcing agents. The four selected RCPs were considered to be representative of the literature, and included one mitigation scenario leading to a very low forcing level (RCP2.6), two medium stabilization scenarios (RCP4.5/RCP6) and one very high baseline emission scenario (RCP8.5).

Here are some graphs from the RCP introduction paper:

Population and GDP scenarios:


Figure 1 – Click to expand

I was surprised by the population graph for RCP 8.5 and 6 (similar scenarios are generated in SRES). From reading various sources (but not diving into any detailed literature) I understood that the consensus was for population to peak mid-century at around 9bn people and then reduce back to something like 7-8bn people by the end of the century. This is because all countries that have experienced rising incomes have significantly reduced average fertility rates.

Here is Angus Deaton, in his fascinating and accessible book on what he calls The Great Escape (that is, our escape from poverty and poor health):

In Africa in 1950, each woman could expect to give birth to 6.6 children; by 2000, that number had fallen to 5.1, and the UN estimates that it is 4.4 today. In Asia as well as in Latin America and the Caribbean, the decline has been even larger, from 6 children to just over 2..

The annual rate of growth of the world’s population, which reached 2.2% in 1960, was only half of that in 2011.

The GDP graph on the right (above) is lacking a definition. From the other papers covering the scenarios I understand it to be total world GDP in US$ trillions (at 2000 values, i.e. adjusted for inflation), although the numbers don’t seem to align exactly.

Energy consumption for the different scenarios:

Figure 2 – Click to expand

Annual emissions:

Figure 3 – Click to expand

Resulting concentrations in the atmosphere for CO2, CH4 (methane) and N2O (nitrous oxide):


Figure 4 – Click to expand

Radiative forcing (for explanation of this term, see for example Wonderland, Radiative Forcing and the Rate of Inflation):


Figure 5  – Click to expand

We can see from this figure (fig 5, their fig 10) that the RCP numbers refer to the expected radiative forcing in 2100 – so RCP8.5, often known as the “business as usual” scenario, has a radiative forcing in 2100, compared to pre-industrial values, of 8.5 W/m². And RCP6 has a radiative forcing in 2100 of 6 W/m².

We can also see from the figure on the right that increases in CO2 are the cause of almost all of the increase from current values. For example, only RCP8.5 has a higher methane (CH4) forcing than today.
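As a rough way to connect CO2 concentrations to these forcing numbers, a widely used simplified expression is ΔF ≈ 5.35 ln(C/C₀) W/m², relative to a pre-industrial concentration C₀ of about 280 ppm. A quick sketch (CO2 only – the RCP totals also include other gases and aerosols):

```python
import numpy as np

C0 = 280.0   # assumed pre-industrial CO2 concentration, ppm

def co2_forcing(c_ppm):
    """Simplified CO2 radiative forcing relative to C0, in W/m2."""
    return 5.35 * np.log(c_ppm / C0)

print(f"560 ppm (doubled CO2): {co2_forcing(560):.1f} W/m2")   # ~3.7 W/m2
print(f"1370 ppm:              {co2_forcing(1370):.1f} W/m2")  # ~8.5 W/m2 from CO2 alone
```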

Business as usual – RCP 8.5 or RCP 6?

I’ve seen RCP8.5 described as “business as usual” but it seems quite an unlikely scenario. Perhaps we need to dive into this scenario more in another article. In the meantime, part of the description from Riahi et al (2011):

The scenario’s storyline describes a heterogeneous world with continuously increasing global population, resulting in a global population of 12 billion by 2100. Per capita income growth is slow and both internationally as well as regionally there is only little convergence between high and low income countries. Global GDP reaches around 250 trillion US2005$ in 2100.

The slow economic development also implies little progress in terms of efficiency. Combined with the high population growth, this leads to high energy demands. Still, international trade in energy and technology is limited and overall rates of technological progress is modest. The inherent emphasis on greater self-sufficiency of individual countries and regions assumed in the scenario implies a reliance on domestically available resources. Resource availability is not necessarily a constraint but easily accessible conventional oil and gas become relatively scarce in comparison to more difficult to harvest unconventional fuels like tar sands or oil shale.

Given the overall slow rate of technological improvements in low-carbon technologies, the future energy system moves toward coal-intensive technology choices with high GHG emissions. Environmental concerns in the A2 world are locally strong, especially in high and medium income regions. Food security is also a major concern, especially in low-income regions and agricultural productivity increases to feed a steadily increasing population.

Compared to the broader integrated assessment literature, the RCP8.5 represents thus a scenario with high global population and intermediate development in terms of total GDP (Fig. 4).

Per capita income, however, stays at comparatively low levels of about 20,000 US $2005 in the long term (2100), which is considerably below the median of the scenario literature. Another important characteristic of the RCP8.5 scenario is its relatively slow improvement in primary energy intensity of 0.5% per year over the course of the century. This trend reflects the storyline assumption of slow technological change. Energy intensity improvement rates are thus well below historical average (about 1% per year between 1940 and 2000). Compared to the scenario literature RCP8.5 depicts thus a relatively conservative business as usual case with low income, high population and high energy demand due to only modest improvements in energy intensity.

When I heard the term “business as usual” I’m sure I wasn’t alone in understanding it like this: the world carries on without adopting serious CO2 limiting policies. That is, no international agreements on CO2 reductions, no carbon pricing, etc. And the world continues on its current trajectory of growth and development. When you look at the last 40 years, it has been quite amazing. Why would growth slow, population not follow the pathway it has followed in all countries that have seen rising prosperity, and why would technological innovation and adoption slow? It would be interesting to see a “business as usual” scenario for emissions, CO2 concentrations and radiative forcing that had a better fit to the name.

RCP 6 seems to be a closer fit than RCP 8.5 to the name “business as usual”.

RCP6 is a climate-policy intervention scenario. That is, without explicit policies designed to reduce emissions, radiative forcing would exceed 6.0 W/m² in the year 2100.

However, the degree of GHG emissions mitigation required over the period 2010 to 2060 is small, particularly compared to RCP4.5 and RCP2.6, but also compared to emissions mitigation requirement subsequent to 2060 in RCP6 (Van Vuuren et al., 2011). The IPCC Fourth Assessment Report classified stabilization scenarios into six categories as shown in Table 1. RCP6 scenario falls into the border between the fifth category and the sixth category.

Its global mean long-term, steady-state equilibrium temperature could be expected to rise 4.9° centigrade, assuming a climate sensitivity of 3.0 and its CO2 equivalent concentration could be 855 ppm (Metz et al. 2007).
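As a rough consistency check of that 4.9ºC figure (my own back-of-envelope sketch, not a calculation from the paper), combine the simplified CO2 forcing expression used earlier with the assumed climate sensitivity of 3.0ºC per doubling:

```python
import numpy as np

sensitivity = 3.0                          # assumed equilibrium warming per CO2 doubling, C
f_2x = 5.35 * np.log(2.0)                  # forcing per doubling, ~3.7 W/m2
forcing = 5.35 * np.log(855.0 / 278.0)     # 855 ppm CO2-equivalent vs pre-industrial
print(f"forcing ~ {forcing:.1f} W/m2 -> equilibrium warming ~ {sensitivity * forcing / f_2x:.1f} C")
```

This reproduces roughly the quoted value, which suggests the 4.9ºC is an equilibrium estimate of that simple kind rather than the warming reached by 2100.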

Some of the background to RCP 8.5 assumptions is in an earlier paper also by the same lead author – Riahi et al 2007, another freely accessible paper (reference below) which is worth a read, for example:

The task ahead of anticipating the possible developments over a time frame as ‘ridiculously’ long as a century is wrought with difficulties. Particularly, readers of this Journal will have sympathy for the difficulties in trying to capture social and technological changes over such a long time frame. One wonders how Arrhenius’ scenario of the world in 1996 would have looked, perhaps filled with just more of the same of his time—geopolitically, socially, and technologically. Would he have considered that 100 years later:

  • backward and colonially exploited China would be in the process of surpassing the UK’s economic output, eventually even that of all of Europe or the USA?
  • the existence of a highly productive economy within a social welfare state in his home country Sweden would elevate the rural and urban poor to unimaginable levels of personal affluence, consumption, and free time?
  • the complete obsolescence of the dominant technology cluster of the day – coal-fired steam engines?

How would he have factored in the possibility of the emergence of new technologies, especially in view of Lord Kelvin’s sobering ‘conclusion’ of 1895 that “heavier-than-air flying machines are impossible”?

Note on Comments

The Etiquette and About this Blog both explain the commenting policy in this blog. I noted briefly in the Introduction that of course questions about 100 years from now mean some small relaxation of the policy. But, in a large number of previous articles, we have discussed the “greenhouse” effect (just about to death) and so people who question it are welcome to find a relevant article and comment there – for example, The “Greenhouse” Effect Explained in Simple Terms which has many links to related articles. Questions on climate sensitivity, natural variation, and likelihood of projected future temperatures due to emissions are, of course, all still fair game in this series.

But I’ll just delete comments that question the existence of the greenhouse effect. Draconian, no doubt.

References

Emissions Scenarios, IPCC (2000) – free report

A special issue on the RCPs, Detlef P van Vuuren et al, Climatic Change (2011) – free paper

The representative concentration pathways: an overview, Detlef P van Vuuren et al, Climatic Change (2011) – free paper

RCP4.5: a pathway for stabilization of radiative forcing by 2100, Allison M. Thomson et al, Climatic Change (2011) – free paper

An emission pathway for stabilization at 6 Wm−2 radiative forcing,  Toshihiko Masui et al, Climatic Change (2011) – free paper

RCP 8.5—A scenario of comparatively high greenhouse gas emissions, Keywan Riahi et al, Climatic Change (2011) – free paper

Scenarios of long-term socio-economic and environmental development under climate stabilization, Keywan Riahi et al, Technological Forecasting & Social Change (2007) – free paper

Thermal equilibrium of the atmosphere with a given distribution of relative humidity, S Manabe, RT Wetherald, Journal of the Atmospheric Sciences (1967) – free paper

The Great Escape, Health, Wealth and the Origins of Inequality, Angus Deaton, Princeton University Press (2013) – book

Notes

Note 1: Even if we knew future anthropogenic emissions accurately it wouldn’t give us the whole picture. The climate has sources and sinks for CO2 and methane and there is some uncertainty about them, especially how well they will operate in the future. That is, anthropogenic emissions are modified by the feedback of sources and sinks for these emissions.

A long time ago, in About this Blog I wrote:

Opinions
Opinions are often interesting and sometimes entertaining. But what do we learn from opinions? It’s more useful to understand the science behind the subject. What is this particular theory built on? How long has the theory been “established”? What lines of evidence support this theory? What evidence would falsify this theory? What do opposing theories say?

Now I would like to look at impacts of climate change. And so opinions and value judgements are inevitable.

In physics we can say something like “95% of radiation at 667 cm⁻¹ is absorbed within 1m at the surface because of the absorption properties of CO2” and be judged true or false. It’s a number. It’s an equation. And therefore the result is falsifiable – the essence of science. Perhaps in some cases all the data is not in, or the formula is not yet clear, but this can be noted and accepted. There is evidence in favor or against, or a mix of evidence.

As we build equations into complex climate models, judgements become unavoidable. For example, “convection is modeled as a sub-grid parameterization therefore..”. Where the conclusion following “therefore” is the judgement. We could call it an opinion. We could call it an expert opinion. We could call it science if the result is falsifiable. But it starts to get a bit more “blurry” – at some point we move from a region of settled science to a region of less-settled science.

And once we consider the impacts in 2100 it seems that certainty and falsifiability must be abandoned. “Blurry” is the best case.

 

Less than a year ago, listening to America and the New Global Economy by Timothy Taylor (via audible.com), I remember he said something like “the economic cost of climate change was all lumped into a fat tail – if the temperature change was on the higher side”. Sorry for my inaccurate memory (and the downside of audible.com vs a real book). Well, it sparked my interest in another part of the climate journey.

I’ve been reading IPCC Working Group II (wgII) – some of the “TAR” (= third assessment report) from 2001 for background and AR5, the latest IPCC report from 2014. Some of the impacts also show up in Working Group I which is about the physical climate science, and the IPCC Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation from 2012, known as SREX (Special Report on Extremes). These are all available at the IPCC website.

The first chapter of the TAR, Working Group II says:

The world community faces many risks from climate change. Clearly it is important to understand the nature of those risks, where natural and human systems are likely to be most vulnerable, and what may be achieved by adaptive responses. To understand better the potential impacts and associated dangers of global climate change, Working Group II of the Intergovernmental Panel on Climate Change (IPCC) offers this Third Assessment Report (TAR) on the state of knowledge concerning the sensitivity, adaptability, and vulnerability of physical, ecological, and social systems to climate change.

A couple of common complaints in the blogosphere that I’ve noticed are:

  • “all the impacts are supposed to be negative but there are a lot of positives from warming”
  • “CO2 will increase plant growth so we’ll be better off”

Within the field of papers and IPCC reports it’s clear that CO2 increasing plant growth is not ignored. Likewise, there are expected to be winners and losers (often, but definitely not exclusively, geographically distributed), even though the IPCC summarizes the expected overall effect as negative.

Of course, there is a highly entertaining field of “recycled press releases about the imminent catastrophe of climate change” which I’m sure ignores any positives or tradeoffs. Even in what could charitably be called “respected media outlets” there seem to be few correspondents with basic scientific literacy. Not even the ability to add up the numbers on an electricity bill or distinguish between the press release of a company planning to get wonderful results in 2025 vs today’s reality.

Anyway, entertaining as it is to shoot fish in a barrel, we will try to stay away from discussing newsotainment and stay with the scientific literature and IPCC assessments. Inevitably, we’ll stray a little.

I haven’t tried to do a comprehensive summary of the issues believed to impact humanity, but here are some:

  • sea level rise
  • heatwaves
  • droughts
  • floods
  • more powerful cyclones and storms
  • food production
  • ocean acidification
  • extinction of animal and plant species
  • more pests (added, thanks Tom, corrected thanks DeWitt)
  • disease (added, thanks Tom)

Possibly I’ve missed some.

Covering the subject is not easy but it’s an interesting field.

In Planck, Stefan-Boltzmann, Kirchhoff and LTE one of our commenters asked a question about emissivity. The first part of that article is worth reading as a primer in the basics for this article. I don’t want to repeat all the basics, except to say that if a body is a “black body” it emits radiation according to a simple formula. This is the maximum that any body can emit. In practice, a body will emit less.

The ratio between actual and the black body is the emissivity. It has a value between 0 and 1.

The question that this article tries to help readers understand is the origin and use of the emissivity term in the Stefan-Boltzmann equation:

E = ε’σT⁴

where E = total flux, ε’ = “effective emissivity” (a value between 0 and 1), σ is the Stefan-Boltzmann constant and T = temperature in Kelvin (i.e., absolute temperature).

The term ε’ in the Stefan-Boltzmann equation is not really a constant. But it is often treated as a constant in articles related to climate. Is this valid? Not valid? Why is it not a constant?

There is a constant material property called emissivity, but it is a function of wavelength. For example, if we found that the emissivity of a body at 10.15 μm was 0.55 then this would be the same regardless of whether the body was in Antarctica (around 233K = -40ºC), the tropics (around 303K = 30ºC) or at the temperature of the sun’s surface (5800K). How do we know this? From experimental work over more than a century.

Hopefully some graphs will illuminate the difference between emissivity the material property (that doesn’t change), and the “effective emissivity” (that does change) we find in the Stefan-Boltzmann equation. In each graph you can see:

  • (top) the blackbody curve
  • (middle) the emissivity of this fictional material as a function of wavelength
  • (bottom) the actual emitted radiation due to the emissivity – and a calculation of the “effective emissivity”.

The calculation of “effective emissivity” = total actual emitted radiation / total blackbody emitted radiation (note 1).
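Here is a sketch of that calculation in Python for a hypothetical step-function emissivity (not the material used in the figures below, but it shows the same qualitative behaviour), integrating the Planck curve numerically over 0.01–50 μm:

```python
import numpy as np

h, c, kB, sigma = 6.626e-34, 2.998e8, 1.381e-23, 5.67e-8

def planck(wl, T):
    """Blackbody spectral exitance in W/m2 per metre of wavelength."""
    return 2 * np.pi * h * c**2 / wl**5 / (np.exp(h * c / (wl * kB * T)) - 1)

def emissivity(wl):
    # hypothetical material: emissivity 0.2 below 4 um, 0.6 above 4 um
    return np.where(wl < 4e-6, 0.2, 0.6)

wl = np.linspace(0.01e-6, 50e-6, 200_000)          # 0.01 to 50 um, as in note 1
for T in (288, 300, 400, 500, 5800):
    with np.errstate(over="ignore"):               # exp() overflows at tiny wl; flux -> 0 there
        spectrum = planck(wl, T)
    blackbody = np.trapz(spectrum, wl)
    actual = np.trapz(emissivity(wl) * spectrum, wl)
    print(f"T={T:5d} K: blackbody={blackbody:12.0f} W/m2 "
          f"(sigma*T^4={sigma * T**4:12.0f}), effective emissivity={actual / blackbody:.2f}")
```

For this material the effective emissivity barely moves between 288 K and 500 K but falls to about 0.2 at solar temperature – the same kind of behaviour as the fictional material below, with different numbers. The 288 K blackbody total also reproduces the ~376 W/m² of note 1 (compared with σT⁴ = 390 W/m²), because the integral stops at 50 μm.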

At 288K – effective emissivity = 0.49:


At 300K – effective emissivity = 0.49:


At 400K – effective emissivity = 0.44:


At 500K – effective emissivity = 0.35:


At 5800K, that is solar surface temperature — effective emissivity = 0.00 (note the scale on the bottom graph is completely different from the scale of the top graph):


Hopefully this helps people trying to understand what emissivity really relates to in the Stefan-Boltzmann equation. It is not a constant except in rare cases. Treating it as a constant over a modest range of temperatures is a reasonable approximation (depending on the accuracy you want), but change the temperature “too much” and the “effective emissivity” can change massively.

As always with approximations and useful formulas, you need to understand the basis behind them to know when you can and can’t use them.

Any questions, just ask in the comments.

Note 1 – The flux was calculated for the wavelength range of 0.01 μm to 50 μm. If you use the Stefan-Boltzmann equation for 288K you will get E = 5.67×10⁻⁸ × 288⁴ = 390 W/m². The reason my graph has 376 W/m² is because I don’t include the wavelength range from 50 μm to infinity. It doesn’t change the practical results you see.

About 100 years ago I wrote Renewables XVII – Demand Management 1 and promised to examine the subject more in a subsequent article. As with many of my blog promises (“non-core promises”) I have failed to do anything in what could be even charitably described as a “timely manner”. I got diverted by my startup.

However, in a roundabout way I came across some articles that help illuminate the energy subject better than I could. While travelling I listened via audible.com to two great books by Timothy Taylor – America and the New Global Economy and A History of the U.S. Economy in the 20th Century. It turns out that Timothy Taylor is the editor of the Journal of Economic Perspectives (and also writes a blog – the Conversable Economist – which is great quality). This journal has recently made its articles open access back to the dawn of time and I downloaded a few years of the journal.

Digressing on my digression, in one of those two books, Taylor made an interesting comment about economists’ views on climate change which sparked my interest in studying the IPCC working groups 2 & 3 – impacts and mitigation. Possibly some articles to come in that arena, but no campaign promises. It’s a big subject.

The Journal of Economic Perspectives, Volume 26, Number 1, Winter 2012 contains a number of articles on energy, including Creating a Smarter U.S. Electricity Grid, Paul L Joskow. I recommend reading the whole paper – well-written and accessible. He comments on some of the papers that I had already discovered. A few selected comments:

Smart grid investment on the high voltage network has only a limited ability to increase the effective capacity of transmission networks. A large increase in transmission capacity, especially if it involves accessing generating capacity at new locations remote from load centers, requires building new physical transmission capacity. However, building major new transmission lines is extremely difficult. The U.S. transmission system was not built to facilitate large movements between interconnected control areas or over long distances; rather, it was built to balance supply and demand reliably within individual utility (or holding company) service areas. While the capacity of interconnections have expanded over time, the bulk of the price differences in Table 1 are due to the fact that there is insufficient transmission capacity to move large amounts of power from, for example, Chicago to New York City. The regulatory process that determines how high voltage transmission capacity (and smart grid investments in the transmission network) is sited and paid for in regulated transmission prices is of byzantine complexity..

The U.S. Department of Energy has supported about 70 smart grid projects involving local distribution systems on a roughly 50/50 cost sharing basis, with details available at 〈http://www.smartgrid.gov/recovery_act/tracking_deployment/distribution〉. However, a full transformation of local distribution systems will take many years and a lot of capital investment. Are the benefits likely to exceed the costs? In the only comprehensive and publicly available effort at cost–benefit analysis in this area, the Electric Power Research Institute (2011a) estimates that deployment (to about 55 percent of distribution feeders) would cost between $120–$170 billion, and claims that the benefits in terms of greater reliability of the electricity supply would be about $600 billion (both in net present value). Unfortunately, I found the benefit analyses to be speculative and impossible to reproduce given the information made available in EPRI’s report..

And on demand management programs’ impacts on peak demand:

The idea of moving from time-invariant electricity prices to “peak-load” pricing where prices are more closely tied to variations in marginal cost has been around for at least 50 years..

A large number of U.S. utilities began offering time-of-use and interruptible pricing options for large commercial and industrial customers during the 1980s, either as a pilot program or as an option. More recently, a number of states have introduced pilot programs for residential (household) consumers that install smart meters of various kinds, charge prices that vary with wholesale prices, and observe demand..

Faruqui and Sergici (2010) summarize the results of 15 earlier studies of various forms of dynamic pricing, including time-of-use pricing, peak pricing, and real-time pricing.. Faruqui (2011) summarizes the reduction in peak load from 109 dynamic pricing studies, including those that use time-of-use pricing, peak pricing, and full real-time pricing, and finds that higher peak period prices always lead to a reduction in peak demand. However, the reported price responses across these studies vary by an order of magnitude, and the factors that lead to the variability of responses have been subject to very limited analysis..

Accordingly, it seems to me that a sensible deployment strategy is to combine a long-run plan for rolling out smart-grid investments with well-designed pilots and experiments. Using randomized trials of smart grid technology and pricing, with a robust set of treatments and the “rest of the distribution grid” as the control, would allow much more confidence in estimates of demand response, meter and grid costs, reliability and power quality benefits, and other key outcomes. For example, Faruqui’s (2011b) report on the peak-period price responses for 109 pilot programs displays responses between 5 to 50 percent of peak demand. An order-of-magnitude difference in measured price responses is just not good enough to do convincing cost–benefit analyses, especially with the other issues noted above. In turn, the information that emerges from these studies could be used to make mid-course corrections in the deployment strategy. Given the large investments contemplated in smart meters and complementary investments, along with the diverse uncertainties that we now face, rushing to deploy a particular set of technologies as quickly as possible is in my view a mistake.

What I observed from reading a lot of papers back when I had promised a followup article (on demand management) was lots of fluff and a small amount of substance. As Joskow says, a wide range in potential outcomes, and not much in the way of large-scale data to draw real conclusions.

In that same linked document above you can also read other papers including: Prospects for Nuclear Power, Lucas W Davis; The Private and Public Economics of Renewable Electricity Generation, Severin Borenstein. Both of these papers are excellent.

Reading the Joskow paper in JEP I thought his name was familiar and it turns out I already had three of his papers:

This paper makes a very simple point regarding the proper methods for comparing the economic value of intermittent generating technologies (e.g. wind and solar) with the economic value of traditional dispatchable generating technologies (e.g. CCGT, coal, nuclear). I show that the prevailing approach that relies on comparisons of the “levelized cost” per MWh supplied by different generating technologies, or any other measure of total life-cycle production costs per MWh supplied, is seriously flawed..

[Emphasis added]

For people interested in understanding the subject of energy vs CO2 emissions, these are valuable and relatively easy to read papers.
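Joskow’s levelized-cost point is easy to see with a toy example (all numbers invented): two generators with identical levelized cost per MWh can have very different market value, because wholesale prices vary by hour and an intermittent plant does not necessarily produce when power is most valuable.

```python
# Six representative hours of wholesale prices, $/MWh (invented numbers).
hourly_price = [30, 25, 20, 60, 120, 80]
dispatchable = [0, 0, 0, 1, 1, 1]     # MWh: runs in the high-price hours
wind = [1, 1, 1, 0, 0, 0]             # MWh: happens to blow in the off-peak hours

lcoe = 55.0                           # $/MWh, assumed identical for both plants

def market_value_per_mwh(output):
    revenue = sum(p * q for p, q in zip(hourly_price, output))
    return revenue / sum(output)

for name, output in (("dispatchable", dispatchable), ("wind", wind)):
    value = market_value_per_mwh(output)
    print(f"{name:12s}: value {value:5.1f} $/MWh vs LCOE {lcoe:.0f} $/MWh "
          f"-> margin {value - lcoe:+.1f} $/MWh")
```

Comparing the two on LCOE alone would call them equal; comparing value against cost shows one plant covering its costs and the other far from it – which is the flaw Joskow highlights.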

Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases

Renewables VI – Report says.. 100% Renewables by 2030 or 2050

Renewables VII – Feasibility and Reality – Geothermal example

Renewables VIII – Transmission Costs And Outsourcing Renewable Generation

Renewables IX – Onshore Wind Costs

Renewables X – Nationalism vs Inter-Nationalism

Renewables XI – Cost of Gas Plants vs Wind Farms

Renewables XII – Windpower as Baseload and SuperGrids

Renewables XIII – One of Wind’s Hidden Costs

Renewables XIV – Minimized Cost of 99.9% Renewable Study

Renewables XV – Offshore Wind Costs

Renewables XVI – JP Morgan advises

Renewables XVII – Demand Management 1

Renewables XVIII – Demand Management & Levelized Cost

Renewables XIX – Behind the Executive Summary and Reality vs Dreams

The subject of EMICs – Earth Models of Intermediate Complexity – came up in recent comments on Ghosts of Climates Past – Eleven – End of the Last Ice age. I promised to write something about EMICs, in part because of my memory of a more recent paper on EMICs. This article will be short, as I found I have already covered some of the EMIC ground.

In the previous 19 articles of this series we’ve seen a concise summary (just kidding) of the problems of modeling ice ages. That is, it is hard to model ice ages for at least three reasons:

  • knowledge of the past is hard to come by, relying on proxies which have dating uncertainties and multiple variables being expressed in one proxy (so are we measuring temperature, or a combination of temperature and other variables?)
  • computing resources make it impossible to run a GCM at current high resolution for the 100,000 years necessary, let alone to run ensembles with varying external forcings and varying parameters (internal physics)
  • lack of knowledge of key physics, specifically: ice sheet dynamics with very non-linear behavior; and the relationship between CO2, methane and the ice age cycles

The usual approach using GCMs is to have some combination of lower resolution grids, “faster” time and prescribed ice sheets and greenhouse gases.

These articles cover the subject:

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

One of the papers I thought about covering in this article (Calov et al 2005) is already briefly covered in Part Eight. I would like to highlight one comment I made in the conclusion of Part Ten:

What the paper [Jochum et al, 2012] also reveals – in conjunction with what we have seen from earlier articles – is that as we move through generations and complexities of models we can get success, then a better model produces failure, then a better model again produces success. Also we noted that whereas the 2003 model (also cold-biased) of Vettoretti & Peltier found perennial snow cover through increased moisture transport into the critical region (which they describe as an “atmospheric–cryospheric feedback mechanism”), this more recent study with a better model found no increase in moisture transport.

So, onto a little more about EMICs.

There are two papers from 2000/2001 describing the CLIMBER-2 model and the results from sensitivity experiments. These are by the same set of authors – Petoukhov et al 2000 & Ganopolski et al 2001 (see references).

Here is the grid:

From Petoukhov et al (2000)

The CLIMBER-2 model has a low spatial resolution which only resolves individual continents (subcontinents) and ocean basins (fig 1). Latitudinal resolution is the same for all modules (10º). In the longitudinal direction the Earth is represented by seven equal sectors (roughly 51º longitude) in the atmosphere and land modules.

The ocean model is a zonally averaged multibasin model, which in longitudinal direction resolves only three ocean basins (Atlantic, Indian, Pacific). Each ocean grid cell communicates with either one, two or three atmosphere grid cells, depending on the width of the ocean basin. Very schematic orography and bathymetry are prescribed in the model, to represent the Tibetan plateau, the high Antarctic elevation and the presence of the Greenland-Scotland sill in the Atlantic ocean.

The atmospheric model takes a simplified approach, leading to its description as a “2.5-dimensional” model. The time step can be relaxed to about 1 day per step. The ocean grid is a little finer in latitude.

On selecting parameters and model “tuning”:

Careful tuning is essential for a new model, as some parameter values are not known a priori and incorrect choices of parameter values compromise the quality and reliability of simulations. At the same time tuning can be abused (getting the right results for the wrong reasons) if there are too many free parameters. To avoid this we adhered to a set of common-sense rules for good tuning practice:

1. Parameters which are known empirically or from theory must not be used for tuning.

2. Wherever possible parametrizations should be tuned separately against observed data, not in the context of the whole model. (Most of the parameter values in Table 1 were obtained in this way and only a few of them were determined by tuning the model to the observed climate).

3. Parameters must relate to physical processes, not to specific geographic regions (hidden flux adjustments).

4. The number of tuning parameters must be much smaller than the degrees of freedom predicted by the model. (In our case the predicted degrees of freedom exceed the number of tuning parameters by several orders of magnitude).

To apply the coupled climate model for simulations of climates substantially different from the present, it is crucial to avoid any type of flux adjustment. One of the reasons for the need of flux adjustments in many general circulation models is their high computational cost, which makes optimal tuning difficult. The high speed of CLIMBER-2 allows us to perform many sensitivity experiments required to identify the physical reasons for model problems and the best parameter choices. A physically correct choice of model parameters is fundamentally different from a flux adjustment; only in the former case the surface fluxes are part of the proper feedbacks when the climate changes.

Note that many GCMs back in 2000 did need to use flux adjustment (in Natural Variability and Chaos – Three – Attribution & Fingerprints I commented: “..The climate models ‘drifted’, unless, in deity-like form, you topped up (or took out) heat and momentum from various grid boxes..”).

So this all sounds reasonable. Obviously it is a model with lower resolution than a GCM, and even high-resolution (by current standards) GCMs need some kind of approach to parameter selection (see Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes).
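
As a generic illustration of rule 2 above – tuning a parametrization on its own against observations rather than inside the full coupled model – here is a minimal sketch. The “observations” are synthetic and the single-parameter bulk formula is a textbook one; this is not CLIMBER-2’s actual tuning procedure:

```python
import numpy as np

# Fit one free parameter of a parametrization directly to "observed" data.
# Here: the transfer coefficient C_h in the standard bulk formula for sensible
# heat flux, H = rho * cp * C_h * U * (Ts - Ta). The observations are synthetic.

rng = np.random.default_rng(0)
rho, cp = 1.2, 1004.0                         # air density (kg/m3), heat capacity (J/kg/K)

wind = rng.uniform(2.0, 12.0, size=200)       # m/s
dT = rng.uniform(-3.0, 5.0, size=200)         # surface minus air temperature, K
true_Ch = 1.2e-3
observed_flux = rho * cp * true_Ch * wind * dT + rng.normal(0.0, 2.0, size=200)  # W/m2 + noise

# Least-squares estimate of the single parameter, independent of the full model
x = rho * cp * wind * dT
fitted_Ch = np.sum(x * observed_flux) / np.sum(x * x)
print(f"fitted C_h = {fitted_Ch:.2e} (value used to generate the data: {true_Ch:.1e})")
```

The parameter then goes into the coupled model with no further adjustment, which is the opposite of tuning the whole model to reproduce the observed climate.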

What I remembered about EMICs and suggested in my comment was based on this 2010 paper by Ganopolski, Calov & Claussen:

We will start the discussion of modelling results with a so-called Baseline Experiment (BE). This experiment represents a “suboptimal” subjective tuning of the model parameters to achieve the best agreement between modelling results and palaeoclimate data. Obviously, even with a model of intermediate complexity it is not possible to test all possible combinations of important model parameters which can be considered as free (tunable) parameters.

In fact, the BE was selected from a hundred model simulations of the last glacial cycle with different combinations of key model parameters.

Note that we consider “tunable” parameters only for the ice-sheet model and the SEMI interface, while the utilized climate component of CLIMBER-2 is the same as in previous studies, such as those used by C05 [this is Calov et al. (2005)]. In the next section, we will discuss the results of a set of sensitivity experiments, which show that our modelling results are rather sensitive to the choice of the model parameters..

..The ice sheet model and the ice sheet-climate interface contain a number of parameters which are not derived from first principles. They can be considered as “tunable” parameters. As stated above, the BE was subjectively selected from a large suite of experiments as the best fit to empirical data. Below we will discuss results of a number of additional experiments illustrating the sensitivity of simulated glacial cycle to several model parameters. These results show that the model is rather sensitive to a number of poorly constrained parameters and parameterisations, demonstrating the challenges to realistic simulations of glacial cycles with a comprehensive Earth system model.

And in their conclusion:

Our experiments demonstrate that the CLIMBER-2 model with an appropriate choice of model parameters simulates the major aspects of the last glacial cycle under orbital and greenhouse gases forcing rather realistically. In the simulations, the glacial cycle begins with a relatively abrupt lateral expansion of the North American ice sheets and parallel growth of the smaller northern European ice sheets. During the initial phase of the glacial cycle (MIS 5), the ice sheets experience large variations on precessional time scales. Later on, due to a decrease in the magnitude of the precessional cycle and a stabilising effect of low CO2 concentration, the ice sheets remain large and grow consistently before reaching their maximum at around 20 kyr BP..

..From about 19 kyr BP, the ice sheets start to retreat with a maximum rate of sea level rise reaching some 15 m per 1000 years around 15 kyr BP. The northern European ice sheets disappeared first, and the North American ice sheets completely disappeared at around 7 kyr BP. Fast sliding processes and the reduction of surface albedo due to deposition of dust play an important role in rapid deglaciation of the NH. Thus our results strongly support the idea about the important role of aeolian dust in the termination of glacial cycles proposed earlier by Peltier and Marshall (1995)..

..Results from a set of sensitivity experiments demonstrate high sensitivity of simulated glacial cycle to the choice of some modelling parameters, and thus indicate the challenge to perform realistic simulations of glacial cycles with the computationally expensive models.

My summary – the simplifications of the EMIC, combined with the “trying lots of parameters” approach, mean I have trouble attaching much significance to the results.

While the basic setup, as described in the 2000 & 2001 papers, seems reasonable, EMICs miss a lot of physics. This is important for something like starting and ending an ice age, where the feedbacks in higher-resolution models can significantly reduce the effect seen in lower-resolution models. When we run hundreds of simulations with different parameters (relating to the ice sheet) and pick the best result, I wonder what we’ve actually found.
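
To make that concern concrete, here is a toy version of the “pick the best fit from many runs” procedure. The stand-in model, the parameter names and the target are all invented; the point is only that with a few poorly constrained parameters, some combination will always match the target, whether or not the underlying physics is right:

```python
from itertools import product

def toy_lgm_sea_level(basal_sliding, dust_albedo, snow_threshold):
    # Stand-in for one EMIC glacial-cycle run: returns a fake simulated
    # sea-level lowstand (metres) at the Last Glacial Maximum.
    return -80.0 - 40.0 * basal_sliding + 25.0 * dust_albedo - 15.0 * snow_threshold

proxy_target = -120.0   # approximate observed LGM sea-level lowstand, metres

# 5 values per (normalised) parameter -> 125 "runs", loosely analogous to the
# hundred simulations from which a Baseline Experiment is picked
values = [0.0, 0.25, 0.5, 0.75, 1.0]
best = min(product(values, repeat=3),
           key=lambda p: abs(toy_lgm_sea_level(*p) - proxy_target))
print("best-fitting parameters:", best, "->", toy_lgm_sea_level(*best), "m")
```

A best fit found this way tells us the model can be made consistent with the data, which is a much weaker statement than the data confirming the model – hence my interest in what else those parameter choices predict.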

That doesn’t mean they are of no value. Models help us to understand how the physics of climate actually works, because we can’t do these calculations in our heads. And GCMs require too much computing power to study ice ages properly.

So I look at EMICs as giving some useful insights that need to be validated with more complex models, or with further study against other observations (what predictions do these parameter selections give us that can be verified?).

I don’t see them as demonstrating that the results “show” we’ve now modeled ice ages. The same comment applies to another 2007 paper, which used a GCM coupled to an ice sheet model, covered in Part Nineteen – Ice Sheet Models I. An update of that paper in 2013 came with an excited Nature press release, but to me it simply demonstrates that with a few unknown parameters you can get a good result with some specific values of those parameters. This is not at all surprising. Let’s call it a good start.

Perhaps Abe Ouchi et al 2013, with its delayed isostatic rebound, will be verified as the answer to the question of ice age terminations.

Perhaps Ganopolski, Calov & Claussen 2010, with the effect of dust deposition on ice sheets, will be verified as the answer to that question.

Perhaps neither will be.

Articles in this Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models

References

CLIMBER-2: a climate system model of intermediate complexity. Part I: model description and performance for present climate, V Petoukhov, A Ganopolski, V Brovkin, M Claussen, A Eliseev, C Kubatzki & S Rahmstorf, Climate Dynamics (2000)

CLIMBER-2: a climate system model of intermediate complexity. Part II: model sensitivity, A Ganopolski, V Petoukhov, S Rahmstorf, V Brovkin, M Claussen, A Eliseev & C Kubatzki, Climate Dynamics (2001)

Transient simulation of the last glacial inception. Part I: glacial inception as a bifurcation in the climate system, Reinhard Calov, Andrey Ganopolski, Martin Claussen, Vladimir Petoukhov & Ralf Greve, Climate Dynamics (2005)

Simulation of the last glacial cycle with a coupled climate ice-sheet model of intermediate complexity, A. Ganopolski, R. Calov, and M. Claussen, Climate of the Past (2010)