Archive for the ‘Climate Models’ Category

In the comments on Part Five there was some discussion of Mauritsen & Stevens (2015), which looked at the “iris effect”:

A controversial hypothesis suggests that the dry and clear regions of the tropical atmosphere expand in a warming climate and thereby allow more infrared radiation to escape to space

One of the big challenges in climate modeling (there are many) is model resolution and “sub-grid parameterization”. A climate model is created by breaking up the atmosphere (and ocean) into “small” cells of something like 200 km x 200 km, assigning one value per cell to parameters like the N-S wind, E-W wind and up-down wind, and solving the set of equations (momentum, heat transfer and so on) across the whole earth. However, a single cell like the one below contains many small regions of rapidly ascending air (convection) topped by clouds of different thicknesses and heights, along with large regions of slowly descending air:

Held and Soden (2000)

The model can’t resolve the actual processes inside a grid cell – that’s inherent in this kind of finite element analysis. So, of course, the “parameterization schemes” used to estimate how much cloud, rain and humidity result from, say, a warming earth are problematic and very hard to verify.
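To make “parameterization” concrete, here is a minimal sketch in the spirit of Sundqvist-type diagnostic cloud schemes – not any actual model’s code, and the critical relative humidity used here is an assumed, illustrative value:

```python
import math

def cloud_fraction(grid_mean_rh, rh_crit=0.8):
    """Toy Sundqvist-style diagnostic: the model cannot resolve individual
    clouds, so cloud fraction in a cell is estimated from the grid-mean
    relative humidity. Zero cloud below rh_crit, full cover at saturation.
    rh_crit is a tunable parameter -- 0.8 here is an assumed illustration."""
    if grid_mean_rh <= rh_crit:
        return 0.0
    return 1.0 - math.sqrt((1.0 - grid_mean_rh) / (1.0 - rh_crit))

print(cloud_fraction(0.9))  # ~0.29: partial cloud cover from a sub-saturated grid mean
```

The point is that a single number (rh_crit) stands in for everything happening below the grid scale – and choices like this are exactly what is hard to verify.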

Running higher resolution models helps to illuminate the subject. We can’t run these higher resolution models for the whole earth – instead all kinds of smaller scale model experiments are done which allow climate scientists to see which factors affect the results.

Here is the “plain language summary” from Organization of tropical convection in low vertical wind shears: Role of updraft entrainment, Tompkins & Semie 2017:

Thunderstorms dry out the atmosphere since they produce rainfall. However, their efficiency at drying the atmosphere depends on how they are arranged; take a set of thunderstorms and sprinkle them randomly over the tropics and the troposphere will remain quite moist, but take that same number of thunderstorms and place them all close together in a “cluster” and the atmosphere will be much drier.

Previous work has indicated that thunderstorms might start to cluster more as temperatures increase, thus drying the atmosphere and letting more infrared radiation escape to space as a result – acting as a strong negative feedback on climate, the so-called iris effect.

We investigate the clustering mechanisms using 2km grid resolution simulations, which show that strong turbulent mixing of air between thunderstorms and their surroundings is crucial for organization to occur. However, with grid cells of 2 km this mixing is not modelled explicitly but instead represented by simple model approximations, which are hugely uncertain. We show that three commonly used schemes differ by over an order of magnitude. Thus we recommend that further investigation into the climate iris feedback be conducted in a coordinated community model intercomparison effort to allow model uncertainty to be robustly accounted for.

And a little about computational resources and resolution. CRMs are “cloud resolving models”, i.e. higher resolution models run over smaller areas:

In summary, cloud-resolving models with grid sizes of the order of 1 km have revealed many of the potential feedback processes that may lead to, or enhance, convective organization. It should be recalled however, that these studies are often idealized and involve computational compromises, as recently discussed in Mapes [2016]. The computational requirements of RCE experiments that require more than 40 days of integration still largely prohibit horizontal resolutions finer than 1 km. Simulations such as Tompkins [2001c], Bryan et al. [2003], and Khairoutdinov et al. [2009] that use resolutions less than 350 m were restricted to 1 or 2 days. If water vapor entrainment is a factor for either the establishment and/or the amplification of convective organization, it raises the issue that the organization strength in CRM models using grid sizes of the order of 1 km or larger is likely to be sensitive to the model resolution and simulation framework in terms of the choice of subgrid-scale diffusion and mixing.

In their conclusion on what resolution is needed:

.. and states that convergence is achieved when the most energetic eddies are well resolved, which is not the case at 2 km, and Craig and Dornbrack [2008] also suggest that resolving clouds requires grid sizes that resolve the typical buoyancy scale of a few hundred meters. The present state of the art of LES is represented by Heinze et al. [2016], integrating a model for the whole of Germany with a 100 m grid spacing, for a period of 4 days.

They continue:

The simulations in this paper also highlight the fact that intricacies of the assumptions contained in the parameterization of small-scale physics can strongly impact the possibility of crossing the threshold from unorganized to organized equilibrium states. The expense of such simulations has usually meant that only one model configuration is used concerning assumptions of small-scale processes such as mixing and microphysics, often initialized from a single initial condition. The potential of multiple equilibria and also an hysteresis in the transition between organized and unorganized states [Muller and Held, 2012], points to the requirement for larger integration ensembles employing a range of initial and boundary conditions, and physical parameterization assumptions. The ongoing requirements of large-domain, RCE numerical experiments imply that this challenge can be best met with a community-based, convective organization model intercomparison project (CORGMIP).

Here is Detailed Investigation of the Self-Aggregation of Convection in Cloud-Resolving Simulations, Muller & Held (2012). The second author is Isaac Held, often referenced on this blog, who has been writing very interesting papers for about 40 years:

It is well known that convection can organize on a wide range of scales. Important examples of organized convection include squall lines, mesoscale convective systems (Emanuel 1994; Holton 2004), and the Madden–Julian oscillation (Grabowski and Moncrieff 2004). The ubiquity of convective organization above tropical oceans has been pointed out in several observational studies (Houze and Betts 1981; WCRP 1999; Nesbitt et al. 2000)..

..Recent studies using a three-dimensional cloud resolving model show that when the domain is sufficiently large, tropical convection can spontaneously aggregate into one single region, a phenomenon referred to as self-aggregation (Bretherton et al. 2005; Emanuel and Khairoutdinov 2010). The final climate is a spatially organized atmosphere composed of two distinct areas: a moist area with intense convection, and a dry area with strong radiative cooling (Figs. 1b and 2b,d). Whether or not a horizontally homogeneous convecting atmosphere in radiative convective equilibrium self-aggregates seems to depend on the domain size (Bretherton et al. 2005). More generally, the conditions under which this instability of the disorganized radiative convective equilibrium state of tropical convection occurs, as well as the feedback responsible, remain unclear.

We see the difference in self-aggregation of convection between the two domain sizes below:

 

From Muller & Held 2012

Figure 1

The effect on rainfall and OLR (outgoing longwave radiation) is striking, and also note that the mean is affected:

From Muller & Held 2012

Figure 2

Then they look at varying model resolution (dx), domain size (L) and also the initial conditions. The higher resolution models don’t produce the self-aggregation, but the results are also sensitive to domain size and initial conditions. The black crosses denote model runs where the convection stayed disorganized, the red circles where the convection self-aggregated:

From Muller & Held 2012

Figure 3

In their conclusion:

The relevance of self-aggregation to observed convective organization (mesoscale convective systems, mesoscale convective complexes, etc.) requires further investigation. Based on its sensitivity to resolution (Fig. 6a), it may be tempting to see self-aggregation as a numerical artifact that occurs at coarse resolutions, whereby low-cloud radiative feedback organizes the convection.

Nevertheless, it is not clear that self-aggregation would not occur at fine resolution if the domain size were large enough. Furthermore, the hysteresis (Fig. 6b) increases the importance of the aggregated state, since it expands the parameter span over which the aggregated state exists as a stable climate equilibrium. The existence of the aggregated state appears to be less sensitive to resolution than the self-aggregation process. It is also possible that our results are sensitive to the value of the sea surface temperature; indeed, Emanuel and Khairoutdinov (2010) find that warmer sea surface temperatures tend to favor the spontaneous self-aggregation of convection.

Current convective parameterizations used in global climate models typically do not account for convective organization.

More two-dimensional and three-dimensional simulations at high resolution are desirable to better understand self-aggregation, and convective organization in general, and its dependence on the subgrid-scale closure, boundary layer, ocean surface, and radiative scheme used. The ultimate goal is to help guide and improve current convective parameterizations.

From the results in their paper we might think that self-aggregation of convection is a model artifact that disappears in higher resolution models (they are careful not to actually conclude this). Tompkins & Semie 2017 suggested that Muller & Held’s results may simply reflect their choice of sub-grid parameterization scheme (see note 1).

From Hohenegger & Stevens 2016, how convection self-aggregates over time in their model:

From Hohenegger & Stevens 2016

Figure 4 – Click to enlarge

From a review paper on the same topic by Wing et al 2017:

The novelty of self-aggregation is reflected by the many remaining unanswered questions about its character, causes and effects. It is clear that interactions between longwave radiation and water vapor and/or clouds are critical: non-rotating aggregation does not occur when they are omitted. Beyond this, the field is in play, with the relative roles of surface fluxes, rain evaporation, cloud versus water vapor interactions with radiation, wind shear, convective sensitivity to free atmosphere water vapor, and the effects of an interactive surface yet to be firmly characterized and understood.

The sensitivity of simulated aggregation not only to model physics but to the size and shape of the numerical domain and resolution remains a source of concern about whether we have even robustly characterized and simulated the phenomenon. While aggregation has been observed in models (e.g., global models) in which moist convection is parameterized, it is not yet clear whether such models simulate aggregation with any real fidelity. The ability to simulate self-aggregation using models with parameterized convection and clouds will no doubt become an important test of the quality of such schemes.

Understanding self-aggregation may hold the key to solving a number of obstinate problems in meteorology and climate. There is, for example, growing optimism that understanding the interplay among radiation, surface fluxes, clouds, and water vapor may lead to robust accounts of the Madden Julian oscillation and tropical cyclogenesis, two long-standing problems in atmospheric science.

Indeed, the difficulty of modeling these phenomena may be owing in part to the challenges of simulating them using representations of clouds and convection that were not designed or tested with self-aggregation in mind.

Perhaps most exciting is the prospect that understanding self-aggregation may lead to an improved understanding of climate. The strong hysteresis observed in many simulations of aggregation—once a cluster is formed it tends to be robust to changing environmental conditions—points to the possibility of intransitive or almost intransitive behavior of tropical climate.

The strong drying that accompanies aggregation, by cooling the system, may act as a kind of thermostat, if indeed the existence or degree of aggregation depends on temperature. Whether or how well this regulation is simulated in current climate models depends on how well such models can simulate aggregation, given the imperfections of their convection and cloud parameterizations.

Clearly, there is much exciting work to be done on aggregation of moist convection.

[Emphasis added]

Conclusion

Climate science asks difficult questions that are currently unanswerable. This goes against two myths that circulate in the media and on many blogs: on the one hand, the myth that the important points are all worked out; on the other, the myth that climate science is a political movement creating alarm, with each paper reaching more serious and certain conclusions than the one before. Reading lots of papers, I find a real science. What is reported in the media is unrelated to the state of the field.

At the heart of modeling climate is the need to model turbulent fluid flows (air and water) and this can’t be done. Well, it can be done, but only using schemes that leave open the possibility or probability that further work will reveal them to be inadequate in a serious way. Running higher resolution models helps to answer some questions, but more often reveals new questions. If you have a mathematical background this is probably easy to grasp. If you don’t, it might not make a whole lot of sense, but hopefully you can see that even very recent papers are not yet able to resolve some challenging questions.

At some stage sufficiently high resolution models will be validated and possibly allow development of more realistic parameterization schemes for GCMs. For example, here is Large-eddy simulations over Germany using ICON: a comprehensive evaluation, Reike Heinze et al 2016, evaluating their model with 150m grid resolution – 3.3bn grid points on a sub-1 second time step over 4 days over Germany:

These results consistently show that the high-resolution model significantly improves the representation of small- to mesoscale variability. This generates confidence in the ability to simulate moist processes with fidelity. When using the model output to assess turbulent and moist processes and to evaluate and develop climate model parametrizations, it seems relevant to make use of the highest resolution, since the coarser-resolved model variants fail to reproduce aspects of the variability.

Related Articles

Ensemble Forecasting – why running a lot of models gets better results than one “best” model

Latent heat and Parameterization – example of one parameterization and its problems

Turbulence, Closure and Parameterization – explaining how the insoluble problem of turbulence gets handled in models

Part Four – Tuning & the Magic Behind the Scenes – how some important model choices get made

Part Five – More on Tuning & the Magic Behind the Scenes – parameterization choices, aerosol properties and the impact on temperature hindcasting, plus a high resolution model study

Part Six – Tuning and Seasonal Contrasts – model targets and model skill, plus reviewing seasonal temperature trends in observations and models

References

Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen and Bjorn Stevens, Nature Geoscience (2015) – free paper

Organization of tropical convection in low vertical wind shears: Role of updraft entrainment, Adrian M Tompkins & Addisu G Semie, Journal of Advances in Modeling Earth Systems (2017) – free paper

Detailed Investigation of the Self-Aggregation of Convection in Cloud-Resolving Simulations, Caroline Muller & Isaac Held, Journal of the Atmospheric Sciences (2012) – free paper

Coupled radiative convective equilibrium simulations with explicit and parameterized convection, Cathy Hohenegger & Bjorn Stevens, Journal of Advances in Modeling Earth Systems (2016) – free paper

Convective Self-Aggregation in Numerical Simulations: A Review, Allison A Wing, Kerry Emanuel, Christopher E Holloway & Caroline Muller, Surv Geophys (2017) – free paper

Large-eddy simulations over Germany using ICON: a comprehensive evaluation, Reike Heinze et al, Quarterly Journal of the Royal Meteorological Society (2016)

Other papers worth reading:

Self-aggregation of convection in long channel geometry, Allison A Wing & Timothy W Cronin, Quarterly Journal of the Royal Meteorological Society (2016) – paywall paper

Notes

Note 1: The equations for turbulent fluid flow can’t be solved directly – the computing resources required are prohibitive. Energy gets dissipated at the scales where viscosity comes into play; in air this is a few mm. So even much higher resolution models like the cloud resolving models (CRMs), with scales of 1 km or even smaller, still need some kind of parameterization to work. For more on this see Turbulence, Closure and Parameterization.
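As a rough sanity check on the “few mm” figure, the dissipation (Kolmogorov) scale can be estimated from the kinematic viscosity of air and an assumed turbulent dissipation rate – the value of epsilon below is illustrative:

```python
# Kolmogorov length scale: eta = (nu^3 / epsilon)^(1/4)
nu = 1.5e-5      # m^2/s, kinematic viscosity of air near the surface
epsilon = 1e-4   # W/kg, assumed typical turbulent dissipation rate
eta = (nu**3 / epsilon) ** 0.25
print(f"dissipation scale ~ {eta * 1e3:.1f} mm")  # ~2.4 mm
```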


I was re-reading Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen and Bjorn Stevens from 2015 (because I referenced it in a recent comment) and then looked up other recent papers citing it. One interesting review paper is by Stevens et al from 2016. I recognized his name from many other papers and it looks like Bjorn Stevens has been publishing papers since the early 1990s, with almost 200 papers in peer-reviewed journals, mostly on this and related topics. Likewise, Sherwood and Bony (two of the coauthors) are very familiar names from this field.

Many regular readers (and I’m sure new readers of this blog) will understand much more than me about current controversies in climate sensitivity. The question in brief (of course there are many subtleties) – how much will the earth warm if we double CO2? It’s a very important question. As the authors explain at the start:

Nearly 40 years have passed since the U.S. National Academies issued the “Charney Report.” This landmark assessment popularized the concept of the “equilibrium climate sensitivity” (ECS), the increase of Earth’s globally and annually averaged near surface temperature that would follow a sustained doubling of atmospheric carbon dioxide relative to its preindustrial value. Through the application of physical reasoning applied to the analysis of output from a handful of relatively simple models of the climate system, Jule G. Charney and his co-authors estimated a range of 1.5–4.5 K for the ECS [Charney et al., 1979].

Charney is an eminent name you will know, along with Lorenz, if you read about the people who broke ground on numerical weather modeling. The authors explain a little about the definition of ECS:

ECS is an idealized but central measure of climate change, which gives specificity to the more general idea of Earth’s radiative response to warming. This specificity makes ECS something that is easy to grasp, if not to realize. For instance, the high heat capacity and vast carbon stores of the deep ocean mean that a new climate equilibrium would only be fully attained a few millennia after an applied forcing [Held et al., 2010; Winton et al., 2010; Li et al., 2012]; and uncertainties in the carbon cycle make it difficult to know what level of emissions is compatible with a doubling of the atmospheric CO2 concentration in the first place.

Concepts such as the “transient climate response” or the “transient climate response to cumulative carbon emissions” have been introduced to account for these effects and may be a better index of the warming that will occur within a century or two [Allen and Frame, 2007; Knutti and Hegerl, 2008; Collins et al., 2013; MacDougall, 2016].

But the ECS is strongly related and conceptually simpler, so it endures as the central measure of Earth’s susceptibility to forcing [Flato et al., 2013].

And about the implications of narrowing the range of ECS:

The socioeconomic value of better understanding the ECS is well documented. If the ECS were well below 1.5 K, climate change would be a less serious problem. The stakes are much higher for the upper bound. If the ECS were above 4.5 K, immediate and severe reductions of greenhouse gas emissions would be imperative to avoid dangerous climate changes within a few human generations.

From a mitigation point of view, the difference between an ECS of 1.5 K and 4.5 K corresponds to about a factor of two in the allowable CO2 emissions for a given temperature target [Stocker et al., 2013] and it explains why the value of learning more about the ECS has been appraised so highly [Cooke et al., 2013; Neubersch et al., 2014].

The ECS also gains importance because it conditions many other impacts of greenhouse gases, such as regional temperature and rainfall [Bony et al., 2013; Tebaldi and Arblaster, 2014], and even extremes [Seneviratne et al., 2016], knowledge of which is required for developing effective adaptation strategies. Being an important and simple measure of climate change, the ECS is something that climate science should and must be able to better understand and quantify more precisely.

One of the questions they raise is at the heart of my own question: is climate sensitivity a constant that we can measure – a value with some durable meaning – or does it depend on the actual state of the climate at the time? For example, there are attempts to measure it via the climate response during an El Nino: we see the climate warm and measure how the top of atmosphere radiation balance changes. Or we attempt to measure the difference in ocean temperature between the end of the last ice age and today and deduce climate sensitivity from that. Perhaps I have a mental picture of non-linear systems that is preventing me from seeing the obvious. However, the picture I have in my head is that the dependence of the top of atmosphere radiation balance on temperature is not a constant.

Here is their commentary. They use the term “pattern effect” for my mental model described above:

Hence, a generalization of the concept of climate sensitivity to different eras may need to account for differences that arise from the different base state of the climate system, increasingly so for large perturbations.

Even for small perturbations, there is mounting evidence that the outward radiation may be sensitive to the geographic pattern of surface temperature changes. Senior and Mitchell [2000] argued that if warming is greater over land, or at high latitudes, different feedbacks may occur than for the case where the same amount of warming is instead concentrated over tropical oceans.

These effects appear to be present in a range of models [Armour et al., 2013; Andrews et al., 2015]. Physically they can be understood because clouds—and their impact on radiation—are sensitive to changes in the atmospheric circulation, which responds to geographic differences in warming [Kang et al., 2013], or simply because an evolving pattern of surface warming weights local responses differently at different times [Armour et al., 2013].

Hence different patterns of warming, occurring on different timescales, may be associated with stronger or weaker radiative responses. This introduces an additional state dependence, one that is not encapsulated by the global mean temperature. We call this a “pattern effect.” Pattern effects are thought to be important for interpreting changes over the instrumental period [Gregory and Andrews, 2016], and may contribute to the state dependence of generalized measures of Earth’s climate sensitivity as inferred from the geological record.

Some of my thoughts are that the insoluble questions on this specific topic are also tied into the question of whether the climate is chaotic, rather than just the weather – see, for example, Natural Variability and Chaos – Four – The Thirty Year Myth. In that article we look at the convention of defining climate as the average of 30 years of weather, and why that “eliminates” chaos – or doesn’t. Non-linear systems have lots of intractable problems – more on that topic in the whole series Natural Variability and Chaos. It’s good to see it being mentioned in this paper.

Read the whole paper – it reviews the conditions necessary for very low climate sensitivity and for very high climate sensitivity, with the idea being that if one necessary condition can be ruled out then the very low and/or very high climate sensitivity can be ruled out. The paper also includes some excellent references for further insights.

From Stevens et al 2016

Click to enlarge

Happy Thanksgiving to our US readers.

References

Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen & Bjorn Stevens, Nature Geoscience (2015) – paywall paper

Prospects for narrowing bounds on Earth’s equilibrium climate sensitivity, Bjorn Stevens, Steven C Sherwood, Sandrine Bony & Mark J Webb, Earth’s Future (2016) – free paper


In Part Five – More on Tuning & the Magic Behind the Scenes and also in the earlier Part Four we looked at the challenge of selecting parameters in climate models. A recent 2017 paper on this topic by Frédéric Hourdin and colleagues is very illuminating. One of the co-authors is Thorsten Mauritsen, the principal author of the 2012 paper we reviewed in Part Four. Another co-author is Jean-Christophe Golaz, the principal author of the 2013 paper we reviewed in Part Five.

The topics are similar but there is some interesting additional detail and commentary. The paper is open and, as always, I recommend reading the whole paper.

One of the key points is that climate models need to be specific about their “target” – were they trying to get the model to match recent climatology? The top of atmosphere radiation balance? The last 100 years of temperature trends? If we know that a model was developed with an eye on a particular target, then matching that target doesn’t demonstrate model skill.

Because of the uncertainties in observations and in the model formulation, the possible parameter choices are numerous and will differ from one modeling group to another. These choices should be more often considered in model intercomparison studies. The diversity of tuning choices reflects the state of our current climate understanding, observation, and modeling. It is vital that this diversity be maintained. It is, however, important that groups better communicate their tuning strategy. In particular, when comparing models on a given metric, either for model assessment or for understanding of climate mechanisms, it is essential to know whether some models used this metric as tuning target.

They comment on the paper by Jeffrey Kiehl from 2007 (referenced in The Debate is Over – 99% of Scientists believe Gravity and the Heliocentric Solar System so therefore..) which showed how models with higher sensitivity to CO2 have higher counter-balancing negative forcing from aerosols.

And later in the paper:

The question of whether the twentieth-century warming should be considered a target of model development or an emergent property is polarizing the climate modeling community, with 35% of modelers stating that twentieth-century warming was rated very important to decisive, whereas 30% would not consider it at all during development.

Some view the temperature record as an independent evaluation dataset not to be used, while others view it as a valuable observational constraint on the model development. Likewise, opinions diverge as to which measures, either forcing or ECS, are legitimate means for improving the model match to observed warming.

The question of developing toward the twentieth- century warming therefore is an area of vigorous debate within the community..

..The fact that some models are explicitly, or implicitly, tuned to better match the twentieth-century warming, while others may not be, clearly complicates the interpretation of the results of combined model ensembles such as CMIP. The diversity of approaches is unavoidable as individual modeling centers pursue their model development to seek their specific scientific goals.

It is, however, essential that decisions affecting forcing or feedback made during model development be transparently documented.

And so, onto another recent paper, by Sumant Nigam and colleagues. They examine the temperature trends by season over the last 100 years and compare them against models. They look only at the northern hemisphere over land, due to the better temperature dataset available there (compared with the southern hemisphere).

Here are the observations of the trends for each of the four seasons; I find it fascinating to see the difference between the seasonal trends:

From Nigam et al 2017

Figure 1 – Click to enlarge

Then they compare the observations to some of the models used in IPCC AR5 (from the model intercomparison project, CMIP5) – top line is observations, each line below is a different model. When we compare the geographical distribution of winter-summer trend (right column) we can see that the models don’t do very well:

From Nigam et al 2017

Figure 2 – Click to enlarge

From their conclusion:

The urgent need for shifting the evaluative and diagnostic focus away from the customary annual mean toward the seasonal cycle of secular warming is manifest in the inability of the leading climate models (whose simulations inform the IPCC’s Fifth Assessment Report) to generate realistic and robust (large signal-to noise ratio) twentieth-century winter and summer SAT trends over the northern continents. The large intra-ensemble SD of century-long SAT trends in some IPCC AR5 models (e.g., GFDL-CM3) moreover raises interesting questions: If this subset of climate models is realistic, especially in generation of ultra-low-frequency variability, is the century-long (1902–2014) linear trend in observed SAT—a one-member ensemble of the climate record—a reliable indicator of the secular warming signal?

I’ve commented a number of times in various articles – people who don’t read climate science papers often have the idea that climate scientists are monolithically opposed to questioning model results or questioning “the orthodoxy”. This is contrary to what you find if you read lots of papers. It might be that press releases that show up in The New York Times, CNN or the BBC (or pick another ideological bellwether) have some kind of monolithic sameness, but this just demonstrates that no one interested in finding out anything important (apart from the weather and celebrity news) should ever watch/read media outlets.

They continue:

The relative contribution of both mechanisms to the observed seasonality in century-long SAT trends needs further assessment because of uncertainties in the diagnosis of evapotranspiration and sea level pressure from the century-long observational records. Climate system models—ideal tools for investigation of mechanisms through controlled experimentation—are unfortunately not yet ready given their inability to simulate the seasonality of trends in historical simulations.

Subversive indeed.

Their investigation digs into evapotranspiration – the additional water, made available by plants, that can be evaporated and therefore remove heat from the surface during the summer months.

Conclusion

“All models are wrong but some are useful” – a statement attributed to a modeler from a different profession (statistical process control) and sometimes also quoted by climate modelers.

This is always a good way to think about models. Perhaps the inability of climate models to reproduce seasonal trends is inconsequential – or perhaps it is important. Models fail on many levels. The question is why, and the answers lead to better models.

Climate science is a real science, contrary to the claims of many people who don’t read many climate science papers, because many published papers ask important and difficult questions and critique the current state of the science. That is, falsifiability is being addressed. These questions might not become media headlines, or even make it into the Summary for Policymakers in IPCC reports, but papers asking these questions are not outliers.

I found both of these papers very interesting. Hourdin et al because they ask valuable questions about how models are tuned, and Nigam et al because they point out that climate models do a poor job of reproducing an important climate trend (seasonal temperature) which provides an extra level of testing for climate models.

References

Striking Seasonality in the Secular Warming of the Northern Continents: Structure and Mechanisms, Sumant Nigam et al, Journal of Climate (2017)

The Art and Science of Climate Model Tuning, Frédéric Hourdin et al, American Meteorological Society (2017) – free paper


Over in another article, a commenter claims:

..Catastrophic predictions depend on accelerated forcings due to water vapour feedback. This water vapour feedback is simply written into climate models as parameters. It is not derived from any kind simulation of first principles in the General Circulation Model runs (GCMs)..

[Emphasis added]

I’ve seen this article of faith a lot. If you frequent fantasy climate blogs, where people learn first principles and modeling basics from comments by other equally well-educated commenters, this is the kind of contribution you will be able to make after years of study.

None of us knowed nothing, so we all sat around and teached each other.

Actually, how the atmospheric section of climate models works is pretty simple in principle. The atmosphere is divided up into a set of blocks (a grid), with each block having dimensions something like 200 km x 200 km x 500 m high. The values vary a lot and depend on the resolution of the model; this is just to give you an idea.

Then each block has an E-W wind; a N-S wind; a vertical velocity; temperature; pressure; the concentrations of CO2, water vapor, methane; cloud fractions, and so on.

Then the model “steps forward in time” and uses equations to calculate the new values of each item.

The earth is spinning, and conservation of momentum, heat and mass is applied to each block. The principles of radiation transfer through each block in each direction are applied via parameterizations (note 1).

Specifically on water vapor – the change in mass of water vapor in each block is calculated from the amount of water evaporated, the amount of water vapor condensed, the amount of rainfall taking water out of the block, and the movement of air via the E-W, N-S and up/down winds. The final amount of water vapor in each time step affects the radiation emitted upwards and downwards.
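As a minimal sketch (not any real model’s code – the names and units here are hypothetical simplifications), the water vapor budget for a single grid block over one time step looks something like this:

```python
def step_water_vapor(q, evaporation, condensation, rainfall_out, advection_net, dt):
    """Advance the water vapor mass in one grid block by one time step.

    q             : current water vapor mass in the block (kg)
    evaporation   : rate of water evaporating into the block (kg/s)
    condensation  : rate of vapor condensing into cloud water (kg/s)
    rainfall_out  : rate at which rainfall removes water from the block (kg/s)
    advection_net : net vapor carried in/out by the E-W, N-S and vertical winds (kg/s)
    dt            : time step (s)
    """
    dq_dt = evaporation - condensation - rainfall_out + advection_net
    return q + dq_dt * dt
```

The updated vapor then feeds into the radiation calculation at the next time step. Nowhere is a “water vapor feedback” parameter written in – any feedback emerges from the budget.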

It’s more involved and you can read whole books on the subject.

I doubt that anyone who has troubled themselves to read even one paper on climate modeling basics could reach the conclusion so firmly believed in fantasy climate blogs and repeated above. If you never need to provide evidence for your claims..

For this blog we do like to see proof of claims, so please take a read of Description of the NCAR Community Atmosphere Model (CAM 4.0) and just show where this water vapor feedback is written in. Or pick another climate model used by a climate modeling group.

This is the kind of exciting stuff you find in the 200+ pages of an atmospheric model description:

From CAM4 Technical Note

You can also find details of the shortwave and longwave radiation parameterization schemes and how they apply to water vapor.

Here is a quote from The Global Circulation of the Atmosphere (ref below):

Essentially all GCMs yield water vapor feedback consistent with that which would result from holding relative humidity approximately fixed as climate changes. This is an emergent property of the simulated climate system; fixed relative humidity is not in any way built into the model physics, and the models offer ample means by which relative humidity could change.

From Water Vapor Feedback and Global Warming, a paper well worth reading for anyone who wants to understand this key question in climate:

Water vapor is the dominant greenhouse gas, the most important gaseous source of infrared opacity in the atmosphere. As the concentrations of other greenhouse gases, particularly carbon dioxide, increase because of human activity, it is centrally important to predict how the water vapor distribution will be affected. To the extent that water vapor concentrations increase in a warmer world, the climatic effects of the other greenhouse gases will be amplified. Models of the Earth’s climate indicate that this is an important positive feedback that increases the sensitivity of surface temperatures to carbon dioxide by nearly a factor of two when considered in isolation from other feedbacks, and possibly by as much as a factor of three or more when interactions with other feedbacks are considered. Critics of this consensus have attempted to provide reasons why modeling results are overestimating the strength of this feedback..

Remember, just a few years of study at fantasy climate blogs can save an hour or more of reading papers on atmospheric physics.

References

Description of the NCAR Community Atmosphere Model (CAM 4.0) – free paper

On the Relative Humidity of the Atmosphere, Chapter 6 of The Global Circulation of the Atmosphere, edited by Tapio Schneider & Adam Sobel, Princeton University Press (2007)

Water Vapor Feedback and Global Warming, Held & Soden, Annu. Rev. Energy Environ (2000) – free paper

Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006)

Notes

Note 1: The very accurate calculation of radiation transfer is done via line by line calculations but they are computationally very expensive and so a simpler approximation is used in GCMs. Of course there are many studies comparing parameterizations vs line by line calculations. One example is Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006).


In recent articles we have looked at rainfall and there is still more to discuss. This article changes tack to look at tropical cyclones, prompted by the recent US landfall of Harvey and Irma along with questions from readers about attribution and the future.

It might be surprising to find the following statement from leading climate scientists (Kevin Walsh and many co-authors in 2015):

At present, there is no climate theory that can predict the formation rate of tropical cyclones from the mean climate state.

The subject gets a little involved so let’s dig into a few papers. First from Gabriel Vecchi and some co-authors in 2008 in the journal Science. The paper is very brief and essentially raises one question – has the recent rise in total Atlantic cyclone intensity been a result of increases in absolute sea surface temperature (SST) or relative sea surface temperature:

From Vecchi et al 2008

Figure 1

The top graph (above) shows a correlation of 0.79 between SST and PDI (power dissipation index). The bottom graph shows a correlation of 0.79 between relative SST (local sea surface temperature minus the average tropical sea surface temperature) and PDI.
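For readers unfamiliar with the PDI: it is essentially the sum of the cube of the maximum sustained wind speed over the lifetimes of all storms in a season (Emanuel 2005). A minimal sketch, with an assumed 6-hourly record layout:

```python
def power_dissipation_index(storm_tracks, dt=6 * 3600):
    """storm_tracks: one list per storm of maximum sustained wind speeds (m/s)
    at successive fixes, assumed 6-hourly. Returns the PDI (m^3/s^2), an index
    normally compared year against year rather than used in absolute terms."""
    return sum(v**3 * dt for track in storm_tracks for v in track)

# Cubing the wind speed means intense storms dominate the index:
print(power_dissipation_index([[20, 25, 22]]))          # one weak storm
print(power_dissipation_index([[35, 50, 60, 55, 40]]))  # one intense storm, ~18x larger
```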

With more CO2 in the atmosphere from burning fossil fuels we expect a warmer SST in the tropical Atlantic in 2100 than today. But we don’t expect the tropical Atlantic to warm faster than the tropics in general.

If cyclone intensity is dependent on local SST we expect more cyclones, or more powerful cyclones. If cyclone intensity is dependent on relative SST we expect no increase in cyclones. This is because climate models predict warmer SSTs in the future, but not Atlantic SSTs warming faster than the tropics as a whole. The paper also shows a few high resolution models – green symbols – sitting close to the zero change line.

Now predicting tropical cyclones with GCMs has a fundamental issue – the grid scale of a modern high resolution GCM is around 100 km, but cyclone prediction requires higher resolution because of the relatively small size of the storms.

Thomas Knutson and co-authors (including the great Isaac Held) produced a 2007 paper with an interesting method (of course, the idea is not at all new). They input actual meteorological data (i.e. real history from the NCEP reanalysis) into a high resolution model covering just the Atlantic region. Their aim was to see how well this model could reproduce tropical storms. There are some technicalities to the model – the output is constantly “nudged” back towards the actual climatology, and at the boundaries of the domain we can’t expect good simulation results. The model resolution is 18 km.

The main question addressed here is the following: Assuming one has essentially perfect knowledge of large-scale atmospheric conditions in the Atlantic over time, how well can one then simulate past variations in Atlantic hurricane activity using a dynamical model?

They comment that the cause of the (then) recent upswing in hurricane activity “remains unresolved”. (Of course, fast forward to 2016, prior to the two recent large landfall hurricanes, and overall activity was at a low not seen since 1970. In early 2018, this may be revised again..)

Two interesting graphs emerge. First an excellent match between model and observations for overall frequency year on year:

From Knutson et al 2007

Figure 2

Second, an inability to predict the most intense hurricanes. The black dots are observations, the red dots are simulations from the model. The vertical axis, a little difficult to read, is SLP, or sea level pressure:

From Knutson et al 2007

Figure 3

These results are a common theme of many papers – inputting the historical climatological data into a model we can get some decent results on year to year variation in tropical cyclones. But models under-predict the most intense cyclones (hurricanes).

Here is Morris Bender and co-authors (including Thomas Knutson, Gabriel Vecchi – a frequent author or co-author in this genre, and of course Isaac Held) from 2010:

Some statistical analyses suggest a link between warmer Atlantic SSTs and increased hurricane activity, although other studies contend that the spatial structure of the SST change may be a more important control on tropical cyclone frequency and intensity. A few studies suggest that greenhouse warming has already produced a substantial rise in Atlantic tropical cyclone activity, but others question that conclusion.

This is a very typical introduction in papers on this topic. I note in passing this is a huge blow to the idea that climate scientists only ever introduce more certainty and alarm on the harm from future CO2 emissions. They don’t. However, it is also true that some climate scientists believe that recent events have been accentuated due to the last century of fossil fuel burning and these perspectives might be reported in the media. I try to ignore the media and that is my recommendation to readers on just about all subjects except essential ones like the weather and celebrity news.

This paper used a weather prediction model starting a few days before each storm to predict the outcome. If you understand the idea behind Knutson 2007 then this is just one step further – a few days prior to the emergence of an intense storm, input the actual climate data into a high resolution model and see how well the high res model predicts the observations. They also used projected future climates from CMIP3 models (note 1).

In the set of graphs below there are three points I want to highlight – and you probably need to click on the graph to enlarge it.

First, in graph B, “Zetac” is the model used by Knutson et al 2007, whereas GFDL is the weather prediction model getting better results in this paper – you can see that observations and the GFDL are pretty close in the maximum wind speed distribution. Second, the climate change predictions in E show that predictions of the future show an overall reduction in frequency of tropical storms, but an increase in the frequency of storms with the highest wind speeds – this is a common theme in papers from this genre. Third, in graph F, the results (from the weather prediction model) fed by different GCMs for future climate show quite different distributions. For example, the UKMO model produces a distribution of future wind speeds that is lower than current values.

From Bender et al 2010

Figure 4 – Click to enlarge

In this graph (S3 from the Supplementary data) we see the difference between future projected climatologies and current climatologies for three relevant parameters, for each of the four different models shown in graph F of the figure above:

From Bender et al 2010

Figure 5 – Click to enlarge

This illustrates that different projected future climatologies, which all show increased SST in the Atlantic region, generate quite different hurricane intensities. The paper suggests that the reduction in wind shear in the UKMO model produces a lower frequency of higher intensity hurricanes.

Conclusion

This article illustrates that feeding higher resolution models with current data can generate realistic cyclone data in some aspects, but less so in other aspects. As we increase the model resolution we can get even better results – but this is dependent on inputting the correct climate data. As we look towards 2100 the questions are – How realistic is the future climate data? How does that affect projections of hurricane frequencies and intensities?

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

Impacts XI – Rainfall 1

Impacts – XII – Rainfall 2

Impacts – XIII – Rainfall 3

References

Hurricanes and climate: the US CLIVAR working group on hurricanes, American Meteorological Society, Kevin Walsh et al (2015) – free paper

Whither Hurricane Activity? Gabriel A Vecchi, Kyle L Swanson & Brian J. Soden, Science (2008) – free paper

Simulation of the Recent Multidecadal Increase of Atlantic Hurricane Activity Using an 18-km-Grid Regional Model, Thomas Knutson et al, American Meteorological Society, (2007) – free paper

Modeled Impact of Anthropogenic Warming on the Frequency of Intense Atlantic Hurricanes, Morris A Bender et al, Science (2010) – free paper

Notes

Note 1: The scenario is A1B, which is similar to RCP6 – that is, an approximate doubling of CO2 by the end of the century. The simulations came from the CMIP3 suite of model results.


In XII – Rainfall 2 we saw the results of many models on rainfall as GHGs increase. They project wetter tropics, drier subtropics and wetter higher latitude regions. We also saw an expectation that rainfall will increase globally, with something like 2-3% per ºC of warming.

Here is a (too small) graph from Allen & Ingram (2002) showing the model response of rainfall under temperature changes from GHG increases. The dashed line marked “C-C” is the famous (in climate physics) Clausius–Clapeyron relation which, at current temperatures, shows a 7% change in water vapor per ºC of warming. The red triangles are the precipitation changes from model simulations showing about half of that.

From Allen & Ingram (2002)

Figure 1

Here is another graph from the same paper showing global mean temperature change (top) and rainfall over land (bottom):

From Allen & Ingram (2002)

Figure 2

The temperature has increased over the last 50 years, and models and observations show that the precipitation has.. oh, it’s not changed. What is going on?

First, the authors explain some important background:

The distribution of moisture in the troposphere (the part of the atmosphere that is strongly coupled to the surface) is complex, but there is one clear and strong control: moisture condenses out of supersaturated air.

This constraint broadly accounts for the humidity of tropospheric air parcels above the boundary layer, because almost all such parcels will have reached saturation at some point in their recent history. Physically, therefore, it has long seemed plausible that the distribution of relative humidity would remain roughly constant under climate change, in which case the Clausius-Clapeyron relation implies that specific humidity would increase roughly exponentially with temperature.

This reasoning is strongest at higher latitudes where air is usually closer to saturation, and where relative humidity is indeed roughly constant through the substantial temperature changes of the seasonal cycle. For lower latitudes it has been argued that the real-world response might be different. But relative humidity seems to change little at low latitudes under a global warming scenario, even in models of very high vertical resolution, suggesting this may be a robust ’emergent constraint’ on which models have already converged.

They continue:

If tropospheric moisture loading is controlled by the constraints of (approximately) unchanged relative humidity and the Clausius-Clapeyron relation, should we expect a corresponding exponential increase in global precipitation and the overall intensity of the hydrological cycle as global temperatures rise?

This is certainly not what is observed in models.

To clarify, the point in the last sentence is that models do show an increase in precipitation, but not at the same rate as the expected increase in specific humidity (see note 1 for new readers).

They describe their figure 2 (our figure 1 above) and explain:

The explanation for these model results is that changes in the overall intensity of the hydrological cycle are controlled not by the availability of moisture, but by the availability of energy: specifically, the ability of the troposphere to radiate away latent heat released by precipitation.

At the simplest level, the energy budgets of the surface and troposphere can be summed up as a net radiative heating of the surface (from solar radiation, partly offset by radiative cooling) and a net radiative cooling of the troposphere to the surface and to space (R) being balanced by an upward latent heat flux (LP, where L is the latent heat of evaporation and P is global-mean precipitation): evaporation cools the surface and precipitation heats the troposphere.

[Emphasis added].

Basics Digression

Picture the atmosphere over a long period of time (like a decade), and for the whole globe. If it hasn’t heated up or cooled down we know that the energy in must equal the energy out (or if it has done so only marginally, then energy in is almost equal to energy out). This is the first law of thermodynamics – energy is conserved.

What energy comes into the atmosphere?

  1. Solar radiation is partly absorbed by the atmosphere (most is transmitted through and heats the surface of the earth)
  2. Radiation emitted from the earth’s surface (we’ll call this terrestrial radiation) is mostly absorbed by the atmosphere (some is transmitted straight through to space)
  3. Warm air is convected up from the surface
  4. Heat stored in evaporated water vapor (latent heat) is convected up from the surface and the water vapor condenses out, releasing heat into the atmosphere when this happens

How does the atmosphere lose energy?

  1. It radiates downwards to the surface
  2. It radiates out to space

..end of digression

Changing Energy Budget

In a warmer world, if we have more evaporation we have more latent heat transfer from the surface into the troposphere. But the atmosphere has to be able to radiate this heat away. If it can’t, then the atmosphere becomes warmer, and this reduces convection. So with a warmer surface we may have a plentiful potential supply of latent heat (via water vapor) but the atmosphere needs a mechanism to radiate away this heat.

Allen & Ingram put forward a simple conceptual equation:

ΔRc + ΔRT = LΔP

where the change in radiative cooling, ΔR, is split into two components: ΔRc, which is independent of the change in atmospheric temperature; and ΔRT, which depends only on the temperature

L = latent heat of vaporization of water (a constant); ΔP = change in rainfall (= change in evaporation, as evaporation is balanced by rainfall)

LΔP is about 1W/m² per 1% increase in global precipitation.

Now, if we double CO2, then before any temperature changes we decrease the outgoing longwave radiation through the tropopause (the top of the troposphere) by about 3-4W/m² and we increase atmospheric radiation to the surface by about 1W/m².

So doubling CO2, ΔRc = -2 to -3W/m²; prior to a temperature change ΔRT = 0; and so ΔP reduces.

The authors comment that increasing CO2 before any temperature change takes place reduces the intensity of the hydrological cycle and this effect was seen in early modeling experiments using prescribed sea surface temperatures.

Now, of course, the idea of doubling CO2 without any temperature change is just a thought experiment. But it’s an important thought experiment because it lets us isolate different factors.

The authors then consider their factor ΔRT:

The enhanced radiative cooling due to tropospheric warming, ΔRT, is approximately proportional to ΔT: tropospheric temperatures scale with the surface temperature change and warmer air radiates more energy, so ΔRT = kΔT, with k=3W/(m²K)..

All this is saying is that as the surface warms, the atmosphere warms at about the same rate, and a warmer atmosphere emits more radiation. This is why the model results in our figure 2 above show no trend in rainfall over 50 years, and also match the observations – the constraint on rainfall is the changing radiative balance of the troposphere.
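Putting the numbers from the last few paragraphs together gives a rough feel for the model behavior. This is a back-of-envelope sketch: the 3 K of warming for doubled CO2 is an assumed illustrative value, and the other numbers come from the text above:

```python
k = 3.0            # W/(m^2 K): extra tropospheric radiative cooling per K of warming
dRc = -2.5         # W/m^2: temperature-independent change for doubled CO2 (text: -2 to -3)
per_percent = 1.0  # W/m^2 of latent heating per 1% change in global precipitation

dT = 3.0           # K: assumed eventual warming for doubled CO2 (illustrative)

# Energy balance from above: dRc + k*dT = L*dP
dP = (dRc + k * dT) / per_percent
print(f"{dP:.1f}% total, ~{dP / dT:.1f}% per K")  # ~6.5% total, ~2.2% per K
```

This lands in the 2-3% per ºC range quoted earlier – far below the ~7% per ºC Clausius-Clapeyron increase in water vapor.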

And so they point out:

Thus, although there is clearly usable information in fig. 3 [our figure 2], it would be physically unjustified to estimate ΔP/ΔT directly from 20th century observations and assume that the same quantity will apply in the future, when the balance between climate drivers will be very different.

There is a lot of other interesting commentary in their paper, although the paper itself is now quite dated (and unfortunately behind a paywall). In essence they discuss the difficulties of modeling precipitation changes, especially for a given region, and are looking for “emergent constraints” from more fundamental physics that might help constrain forecasts.

A forecasting system that rules out some currently conceivable futures as unlikely could be far more useful for long-range planning than a small number of ultra-high-resolution forecasts that simply rule in some (very detailed) futures as possibilities.

This is a very important point when considering impacts.

Conclusion

Increasing the surface temperature by 1ºC is expected to increase the humidity over the ocean by about 7%. This is simply the basic physics of saturation. However, climate models predict an increase in mean rainfall of maybe 2-3% per ºC. The fundamental reason is that the movement of latent heat from the surface to the atmosphere has to be radiated away by the atmosphere, and so the constraint is the ability of the atmosphere to do this. And so the limiting factor in increasing rainfall is not the humidity increase, it is the radiative cooling of the atmosphere.

We also see that despite 50 years of warming, mean rainfall hasn’t changed. Models also predict this. This is believed to be a transient state, for reasons explained in the article.

References

Constraints on future changes in climate and the hydrologic cycle, MR Allen & WJ Ingram, Nature (2002)  – freely available [thanks, Robert]

Notes

1 Relative humidity is measured as a percentage. If the relative humidity = 100% it means the air is saturated with water vapor – it can’t hold any more water vapor. If the relative humidity = 0% it means the air is completely dry. As temperature increases the ability of air to hold water vapor increases non-linearly.

For example, at 0ºC, 1kg of air can carry around 4g of water vapor, at 10ºC that has doubled to 8g, and at 20ºC it has doubled again to 15g (I’m using approximate values).

So now imagine saturated air over the ocean at 20ºC rising up and therefore cooling (it is cooler higher up in the atmosphere). By the time the air parcel has cooled down to 0ºC (this might be anything from 2km to 5km altitude) it is still saturated but is only carrying 4g of water vapor, having condensed out 11g into water droplets.
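For readers who want to reproduce these approximate numbers, here is a minimal sketch using the Tetens formula for saturation vapor pressure – an empirical approximation assumed here, accurate to a few percent between about -35ºC and +35ºC:

```python
# Saturation mixing ratio vs temperature via the Tetens approximation.
def saturation_mixing_ratio(t_celsius, pressure_hpa=1013.25):
    """Grams of water vapor per kg of dry air at saturation."""
    es = 6.1078 * 10 ** (7.5 * t_celsius / (t_celsius + 237.3))  # vapor pressure, hPa
    return 1000.0 * 0.622 * es / (pressure_hpa - es)

for t in (0, 10, 20):
    print(f"{t:2d} C: {saturation_mixing_ratio(t):4.1f} g/kg")

# Prints ~3.8, 7.6 and 14.7 g/kg - the "4g, 8g, 15g" quoted above.
```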

Read Full Post »

I probably should have started a separate series on rainfall and then woven the results back into the Impacts series. It might take a few articles working through the underlying physics and how models and observations of current and past climate compare before being able to consider impacts.

There are a number of different ways to look at rainfall models and reality:

  • What underlying physics provides definite constraints regardless of individual models, groups of models or parameterizations?
  • How well do models represent the geographical distribution of rain over a climatological period like 30 years? (e.g. figure 8 in Impacts XI – Rainfall 1)
  • How well do models represent the time series changes of rainfall?
  • How well do models represent the land vs ocean? (when we think about impacts, rainfall over land is what we care about)
  • How well do models represent the distribution of rainfall and the changing distribution of rainfall, from lightest to heaviest?

In this article I thought I would highlight a set of conclusions from one paper among many. It’s a good starting point. The paper is A canonical response of precipitation characteristics to global warming from CMIP5 models by Lau and his colleagues; it is freely available and, as always, I recommend people read the whole paper, along with the supporting information that is also available via the link.

As an introduction, the underlying physics perhaps provides some constraints. This is strongly believed in the modeling community. The constraint is a simple one – if we warm the ocean by 1K (= 1ºC) then the amount of water vapor above the ocean surface increases by about 7%. So we expect a warmer world to have more water vapor – at least in the boundary layer (typically 1km) and over the ocean. If we have more water vapor then we expect more rainfall. But GCMs and also simple models suggest a lower value, like 2-3% per K, not 7%/K. We will come back to why in another article.
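The ~7% per K figure is the Clausius-Clapeyron scaling of saturation vapor pressure. As a rough check, using standard values (latent heat of vaporization Lv ≈ 2.5×10⁶ J/kg, water vapor gas constant Rv ≈ 461.5 J/kgK, surface temperature T ≈ 288K):

d(ln es)/dT = Lv/(RvT²) ≈ 2.5×10⁶/(461.5 × 288²) ≈ 0.065 per K

– that is, saturation vapor pressure increases by about 6-7% per ºC of warming near current surface temperatures.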

It also seems from models that with global warming, rainfall increases more in regions and times of already high rainfall and reduces in regions and times of low rainfall – the “wet get wetter and the dry get drier” (a catchy slogan, as marketers know, helps an idea gain traction). So we also expect changes in the distribution of rainfall. One reason for this is a change in the tropical circulation. All to be covered later, so onto the paper..

We analyze the outputs of 14 CMIP5 models based on a 140 year experiment with a prescribed 1% per year increase in CO2 emission. This rate of CO2 increase is comparable to that prescribed for the RCP8.5, a relatively conservative business-as-usual scenario, except the latter includes also changes in other GHG and aerosols, besides CO2.

A 27-year period at the beginning of the integration is used as the control to compute rainfall and temperature statistics, and to compare with climatology (1979–2005) of rainfall data from the Global Precipitation Climatology Project (GPCP). Two similar 27-year periods in the experiment that correspond approximately to a doubling of CO2 emissions (DCO2) and a tripling of CO2 emissions (TCO2) compared to the control are chosen respectively to compute the same statistics..

Just a note that I disagree with the claim that RCP8.5 is a “relatively conservative business as usual scenario” (see Impacts – II – GHG Emissions Projections: SRES and RCP), but that’s just an opinion, as are all views about where the world will be in population, GDP and cumulative emissions 100-150 years from now. It doesn’t detract from the rainfall analysis in the paper.
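As an aside, the choice of periods follows from compound growth: at 1% per year, doubling takes ln(2)/ln(1.01) ≈ 70 years and tripling takes ln(3)/ln(1.01) ≈ 110 years – which is why the DCO2 and TCO2 windows sit where they do in the 140 year run.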

For people wondering “what is CMIP5?” – this is the model inter-comparison project for the most recent IPCC report (AR5) where many models have to address the same questions so they can be compared.

Here we see (as with the other graphs, you can click to enlarge) what the models show in temperature (top left), mean global rainfall (top right), zonal rainfall anomaly by latitude (bottom left) and the control vs the tripled-CO2 comparison (bottom right). The many different colors in the first three graphs show each model, while the black line is the mean of the models (“ensemble mean”). The bottom right graph helps put the changes shown in the bottom left into perspective – the difference between the red and the blue curves is the difference between tripled CO2 and today:

From Lau et al 2013

Figure 1 – Click to enlarge

In the figure above, the bottom left graph shows anomalies. We see one of the characteristics of models as a result of more GHGs – wetter tropics and drier sub-tropics, along with wetter conditions at higher latitudes.

From the supplementary material, below we see a better regional breakdown of fig 1d (bottom right in the figure above). I’ll highlight the bottom left graph (c) for the African region. Over the continent, the differences between present day and tripling CO2 seem minor as far as model predictions go for mean rainfall:

From Lau et al 2013

Figure 2 – Click to enlarge

The supplementary material also has a comparison between models and observations. The first graph below is what we are looking at (we will consider the second graph afterwards). TRMM (Tropical Rainfall Measuring Mission) is satellite data and GPCP is the rainfall climatology we met in the last article – so both are observational datasets. We see that the models over-estimate tropical rainfall, especially south of the equator:

From Lau et al 2013

Figure 3 – Click to enlarge

Rainfall Distribution from Light through to Heavy Rain

Lau and his colleagues then look at rainfall distribution in terms of light rainfall through to heavier rainfall. So, take global rainfall and divide it into frequency of occurrence, with light rainfall to the left and heavy rainfall to the right. Take a look back at the bottom graph in the figure above (figure 3, their figure S1). Note that the horizontal axis is logarithmic, with a ratio of over 1000 from left to right.

It isn’t an immediately intuitive graph. Basically there are two clusters of curves. The left “cluster” is how often each rainfall amount occurred, with the black line being the GPCP observations. The right “cluster” is how much rainfall fell (as a percentage of total rainfall) at each rainfall amount, and again black is observations.

So light rainfall – 1 mm/day and below – occurs about 50% of the time but, being light, accounts for less than 10% of total rainfall.
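To make the construction of such a figure concrete, here is a sketch of how a frequency/amount distribution is computed – with synthetic lognormal “rain rates” standing in for model output, so the numbers are purely illustrative (though they happen to reproduce the light-rain behavior described above):

```python
import numpy as np

# Frequency vs amount distribution of rain rates on logarithmic bins,
# using synthetic lognormal "rain rates" in place of model output.
rng = np.random.default_rng(0)
rain = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)  # mm/day, hypothetical

bins = np.logspace(-2, 3, 51)              # 0.01 to 1000 mm/day, log-spaced
counts, _ = np.histogram(rain, bins=bins)  # how often each rate occurs
amounts, _ = np.histogram(rain, bins=bins, weights=rain)  # rain each rate delivers

freq_pct = 100 * counts / counts.sum()      # the left "cluster" of curves
amount_pct = 100 * amounts / amounts.sum()  # the right "cluster"

light = rain <= 1.0
print(f"<= 1 mm/day: {100 * light.mean():.0f}% of occurrences, "
      f"{100 * rain[light].sum() / rain.sum():.0f}% of total rainfall")
```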

To facilitate discussion regarding rainfall characteristics in this work, we define, based on the ensemble model PDF, three major rain types: light rain (LR), moderate rain (MR), and heavy rain (HR) respectively as those with monthly mean rain rate below the 20th percentile (<0.3 mm/day), between the 40th–70th percentile (0.9–2.4 mm/day), and above the 98.5th percentile (>9 mm/day). An extremely heavy rain (EHR) type defined at the 99.9th percentile (>24 mm/day) will also be referred to, as appropriate.

Here is a geographical breakdown of the total and then the rainfall in these three categories, model mean on the left and observations on the right:

From Lau et al 2013

Figure 4 – Click to enlarge

We can see that the models tend to overestimate the heavy rain and underestimate the light rain. These graphics are excellent because they help us to see the geographical distribution.

Now in the graphs below we see at the top the changes in frequency of precipitation (60S-60N) as a function of rain rate; and at the bottom the % change in rainfall per K of temperature change, again as a function of rain rate. Note that the bottom graph also has a logarithmic scale for the % change, so the value doubles with each grid square you move up.

The different models are also helpfully indicated so the spread can be seen:

From Lau et al 2013

Figure 5 – Click to enlarge

Notice that the models are all predicting quite a high % change in rainfall per K for the heaviest rain – something around 50%. In contrast the light rainfall is expected to be up a few % per K and the medium rainfall is expected to be down a few % per K.

Globally, rainfall increases by 4.5%, with a sensitivity (dP/P/dT) of 1.4% per K
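Taken together, those two numbers imply a global-mean warming of about 4.5/1.4 ≈ 3.2 K at the time of tripling CO2 – a useful sanity check when reading the table below.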

Here is a table from their supplementary material with a zonal breakdown of changes in mean rainfall (not divided into heavy, light etc). For the non-maths people: the first row, dP/P, is just the % change in precipitation (“d” in front of a variable means “change in that variable”), the second row is the change in temperature, and the third row is the % change in rainfall per K (or ºC) of warming from GHGs:

From Lau et al 2013

Figure 6 – Click to enlarge

Here are the projected geographical distributions of the changes in mean (top left), heavy (top right), medium (bottom left) and light rain (bottom right) – using their earlier definitions – under tripling CO2:

From Lau et al 2013

Figure 7 – Click to enlarge

And as a result of these projections, the authors also show the number of dry months and the projected changes in number of dry months:

From Lau et al 2013

Figure 8 – Click to enlarge

The authors conclude:

The IPCC CMIP5 models project a robust, canonical global response of rainfall characteristics to CO2 warming, featuring an increase in heavy rain, a reduction in moderate rain, and an increase in light rain occurrence and amount globally.

For a scenario of 1% CO2 increase per year, the model ensemble mean projects at the time of approximately tripling of the CO2 emissions, the probability of occurring of extremely heavy rain (monthly mean >24mm/day) will increase globally by 100%–250%, moderate rain will decrease by 5%–10% and light rain will increase by 10%–15%.

The increase in heavy rain is most pronounced in the equatorial central Pacific and the Asian monsoon regions. Moderate rain is reduced over extensive oceanic regions in the subtropics and extratropics, but increased over the extratropical land regions of North America, and Eurasia, and extratropical Southern Oceans. Light rain is mostly found to be inversely related to moderate rain locally, and with heavy rain in the central Pacific.

The model ensemble also projects a significant global increase up to 16% more frequent in the occurrences of dry months (drought conditions), mostly over the subtropics as well as marginal convective zone in equatorial land regions, reflecting an expansion of the desert and arid zones..

 

..Hence, the canonical global rainfall response to CO2 warming captured in the CMIP5 model projection suggests a global scale readjustment involving changes in circulation and rainfall characteristics, including possible teleconnection of extremely heavy rain and droughts separated by far distances. This adjustment is strongly constrained geographically by climatological rainfall pattern, and most likely by the GHG warming induced sea surface temperature anomalies with unstable moister and warmer regions in the deep tropics getting more heavy rain, at the expense of nearby marginal convective zones in the tropics and stable dry zones in the subtropics.

Our results are generally consistent with so-called “the rich-getting-richer, poor-getting-poorer” paradigm for precipitation response under global warming..

Conclusion

This article has basically presented the results of one paper, which demonstrates consistency in model response of rainfall to doubling and tripling of CO2 in the atmosphere. In subsequent articles we will look at the underlying physics constraints, at time-series over recent decades and try to make some kind of assessment.

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

Impacts XI – Rainfall 1

References

A canonical response of precipitation characteristics to global warming from CMIP5 models, William K.-M. Lau, H.-T. Wu, & K.-M. Kim, GRL (2013) – free paper

Further Reading

Here are a bunch of papers I found useful, for readers who want to dig into the subject. Most of them are available for free via Google Scholar, but one of the most helpful to me (first in the list), Allen & Ingram 2002, could only be accessed by paying $4 to rent it for a couple of days.

Allen MR, Ingram WJ (2002) Constraints on future changes in climate and the hydrologic cycle. Nature 419:224–232

Allan RP (2006) Variability in clear-sky longwave radiative cooling of the atmosphere. J Geophys Res 111:D22, 105

Allan, R. P., B. J. Soden, V. O. John, W. Ingram, and P. Good (2010), Current changes in tropical precipitation, Environ. Res. Lett., doi:10.1088/1748-9326/5/2/025205

Physically Consistent Responses of the Global Atmospheric Hydrological Cycle in Models and Observations, Richard P. Allan et al, Surv Geophys (2014)

Held IM, Soden BJ (2006) Robust responses of the hydrological cycle to global warming. J Clim 19:5686–5699

Changes in temperature and precipitation extremes in the CMIP5 ensemble, VV Kharin et al, Climatic Change (2013)

Energetic Constraints on Precipitation Under Climate Change, Paul A. O’Gorman et al, Surv Geophys (2012) 33:585–608

Trenberth, K. E. (2011), Changes in precipitation with climate change, Clim. Res., 47, 123–138, doi:10.3354/cr00953

Zahn M, Allan RP (2011) Changes in water vapor transports of the ascending branch of the tropical circulation. J Geophys Res 116:D18111

Read Full Post »

If we want to assess forecasts of floods, droughts and crop yields then we will need to know rainfall. We will also need to know temperature of course.

The forte of climate models is temperature. Rainfall is more problematic.

Before we get to model predictions about the future we need to review observations and the ability of models to reproduce them. Observations are also problematic – rainfall varies locally and over short durations, and historically we lacked effective observation systems in many regions of the world, so data has to be pieced together and estimated from reanalysis.

Smith and his colleagues created a new rainfall dataset. Here is a comment from their 2012 paper:

Although many land regions have long precipitation records from gauges, there are spatial gaps in the sampling for undeveloped regions, areas with low populations, and over oceans. Since 1979 satellite data have been used to fill in those sampling gaps. Over longer periods gaps can only be filled using reconstructions or reanalyses..

Here are two views of the global precipitation data from a dataset which starts with the satellite era, that is, 1979 onwards – GPCP (Global Precipitation Climatology Project):

From Adler et al 2003

Figure 1

From Adler et al 2003

Figure 2

For historical data before satellites we only have rain gauge data. The GPCC dataset, explained in Becker et al 2013, shows the number of stations over time by region:

From Becker et al 2013

Figure 3- Click to expand

And the geographical distribution of rain gauge stations at different times:

From Becker et al 2013

Figure 4 – Click to expand

The IPCC compared the global trends over land from four different datasets over the last century and the last half-century:

From IPCC AR5 Ch. 2

Figure 5 – Click to expand

And the regional trends:

From IPCC AR5 Ch. 2

Figure 6 – Click to expand

Here are the graphs of the annual change in rainfall; note the different scales for each region (as we would expect, given the difference in average rainfall between regions):

From IPCC AR5 ch 2

Figure 7

We see that the decadal or half-decadal variation is much greater than any apparent long term trend. The trend data (as reviewed by the IPCC in figs 5 & 6) shows significant differences in the datasets but when we compare the time series it appears that the datasets match up better than indicated by the trend comparisons.

The data with the best historical coverage is 30ºN – 60ºN, where the trend values for 1951-2000 (from different reconstructions) range from an increase of 1 to 1.5 mm/yr per decade (fig 6 / table 2.10 of the IPCC report). This is against an absolute value of about 1000 mm/yr in this region (reading off the climatology in figure 2).

This is just me trying to put the trend data in perspective.
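Spelling it out: a trend of 1 to 1.5 mm/yr per decade sustained over the five decades of 1951-2000 amounts to 5 – 7.5 mm/yr, i.e. roughly 0.5 – 0.75% of the ~1000 mm/yr climatological value – small compared with the year-to-year variability visible in figure 7.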

Models

Here is the IPCC AR5 chapter 9 comparison of models to satellite-era rainfall observations. Top left is observations (basically the same dataset as figure 1 in this article, over a slightly longer period and with different colors) and bottom right is the percentage error of the model average with respect to observations:

From IPCC AR5 ch 9

Figure 8 – Click to expand

We can see that the average of all models has substantial errors on mean rainfall.

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

References

IPCC AR5 Chapter 2

Improved Reconstruction of Global Precipitation since 1900, Smith, Arken, Ren & Shen, Journal of Atmospheric and Oceanic Technology (2012)

The Version-2 Global Precipitation Climatology Project (GPCP) Monthly Precipitation Analysis (1979–Present), Adler et al, Journal of Hydrometeorology (2003)

A description of the global land-surface precipitation data products of the Global Precipitation Climatology Centre with sample applications including centennial (trend) analysis from 1901–present, A Becker, Earth Syst. Sci. Data (2013)

Read Full Post »

In Impacts – II – GHG Emissions Projections: SRES and RCP we looked at projections of emissions under various scenarios with the resulting CO2 (and other GHG) concentrations and resulting radiative forcing.

Why do we need these scenarios? Because even if climate models were perfect and could accurately calculate the temperature 100 years from now, we wouldn’t know how much “anthropogenic CO2” (and other GHGs) would have been emitted by that time. The scenarios allow climate modelers to produce temperature (and other climate variable) projections on the basis of each of these scenarios.

The IPCC AR5 (fifth assessment report) from 2013 says (chapter 12, p. 1031):

Global mean temperatures will continue to rise over the 21st century if greenhouse gas (GHG) emissions continue unabated.

Under the assumptions of the concentration-driven RCPs, global mean surface temperatures for 2081–2100, relative to 1986–2005 will likely be in the 5 to 95% range of the CMIP5 models:

  • 0.3°C to 1.7°C (RCP2.6)
  • 1.1°C to 2.6°C (RCP4.5)
  • 1.4°C to 3.1°C (RCP6.0)
  • 2.6°C to 4.8°C (RCP8.5)

Global temperatures averaged over the period 2081– 2100 are projected to likely exceed 1.5°C above 1850-1900 for RCP4.5, RCP6.0 and RCP8.5 (high confidence), are likely to exceed 2°C above 1850-1900 for RCP6.0 and RCP8.5 (high confidence) and are more likely than not to exceed 2°C for RCP4.5 (medium confidence). Temperature change above 2°C under RCP2.6 is unlikely (medium confidence). Warming above 4°C by 2081–2100 is unlikely in all RCPs (high confidence) except for RCP8.5, where it is about as likely as not (medium confidence).

I commented in Part II that RCP8.5 seemed to be a scenario that didn’t match up with the last 40-50 years of development. Of course, the various scenario developers give their caveats, for example, Riahi et al 2007:

Given the large number of variables and their interdependencies, we are of the opinion that it is impossible to assign objective likelihoods or probabilities to emissions scenarios. We have also not attempted to assign any subjective likelihoods to the scenarios either. The purpose of the scenarios presented in this Special Issue is, rather, to span the range of uncertainty without an assessment of likely, preferable, or desirable future developments..

Readers should exercise their own judgment on the plausibility of above scenario ‘storylines’..

To me RCP6.0 seems a more likely future than RCP8.5 in a world that doesn’t make any significant attempt to tackle CO2 emissions. That is, no major change in climate policy compared with today’s world, but similar economic and population development (note 1).

Here is the graph of projected temperature anomalies for the different scenarios:

From AR5, chapter 12

Figure 1

That graph is hard to make out for 2100, so here is the table of corresponding data. I highlighted RCP6.0 in 2100 – you can click to enlarge the table:

From AR5, chapter 12

Figure 2 – Click to expand

Probabilities and Lists

The table above has a “1 std deviation” and a 5%-95% distribution. The graph (which has the same source data) has shading to indicate 5%-95% of models for each RCP scenario.

These have no relation to real probability distributions. That is, the 5-95% range for RCP6.0 doesn’t equate to: “there is a 90% probability that the average temperature in 2081-2100 will be 1.4-3.1ºC higher than the 1986-2005 average”.

A number of climate models are used to produce simulations and the results from these “ensembles” are sometimes pressed into “probability service”. For some concept background on ensembles read Ensemble Forecasting.

Here is IPCC AR5 chapter 12:

Ensembles like CMIP5 do not represent a systematically sampled family of models but rely on self-selection by the modelling groups.

This opportunistic nature of MMEs [multi-model ensembles] has been discussed, for example, in Tebaldi and Knutti (2007) and Knutti et al. (2010a). These ensembles are therefore not designed to explore uncertainty in a coordinated manner, and the range of their results cannot be straightforwardly interpreted as an exhaustive range of plausible outcomes, even if some studies have shown how they appear to behave as well calibrated probabilistic forecasts for some large-scale quantities. Other studies have argued instead that the tail of distributions is by construction undersampled.

In general, the difficulty in producing quantitative estimates of uncertainty based on multiple model output originates in their peculiarities as a statistical sample, neither random nor systematic, with possible dependencies among the members and of spurious nature, that is, often counting among their members models with different degrees of complexities (different number of processes explicitly represented or parameterized) even within the category of general circulation models..

..In summary, there does not exist at present a single agreed on and robust formal methodology to deliver uncertainty quantification estimates of future changes in all climate variables. As a consequence, in this chapter, statements using the calibrated uncertainty language are a result of the expert judgement of the authors, combining assessed literature results with an evaluation of models demonstrated ability (or lack thereof) in simulating the relevant processes (see Chapter 9) and model consensus (or lack thereof) over future projections. In some cases when a significant relation is detected between model performance and reliability of its future projections, some models (or a particular parametric configuration) may be excluded but in general it remains an open research question to find significant connections of this kind that justify some form of weighting across the ensemble of models and produce aggregated future projections that are significantly different from straightforward one model–one vote ensemble results. Therefore, most of the analyses performed for this chapter make use of all available models in the ensembles, with equal weight given to each of them unless otherwise stated.

And from one of the papers cited in that section of chapter 12, Jackson et al 2008:

In global climate models (GCMs), unresolved physical processes are included through simplified representations referred to as parameterizations.

Parameterizations typically contain one or more adjustable phenomenological parameters. Parameter values can be estimated directly from theory or observations or by “tuning” the models by comparing model simulations to the climate record. Because of the large number of parameters in comprehensive GCMs, a thorough tuning effort that includes interactions between multiple parameters can be very computationally expensive. Models may have compensating errors, where errors in one parameterization compensate for errors in other parameterizations to produce a realistic climate simulation (Wang 2007; Golaz et al. 2007; Min et al. 2007; Murphy et al. 2007).

The risk is that, when moving to a new climate regime (e.g., increased greenhouse gases), the errors may no longer compensate. This leads to uncertainty in climate change predictions. The known range of uncertainty of many parameters allows a wide variance of the resulting simulated climate (Murphy et al. 2004; Stainforth et al. 2005; M. Collins et al. 2006). The persistent scatter in the sensitivities of models from different modeling groups, despite the effort represented by the approximately four generations of modeling improvements, suggests that uncertainty in climate prediction may depend on underconstrained details and that we should not expect convergence anytime soon.

Stainforth et al 2005 (referenced in the quote above) tried much larger ensembles of coarser resolution climate models, and was discussed in the comments of Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes. Rowlands et al 2012 is similar in approach and was discussed in Natural Variability and Chaos – Five – Why Should Observations match Models?

The way I read the IPCC reports and various papers is that clearly the projections are not a probability distribution. Then the data inevitably gets used as a de facto probability distribution.

Conclusion

“All models are wrong but some are useful” as George Box said, actually in a quite unrelated field (i.e., not climate). But it’s a good saying.

Many people who describe themselves as “lukewarmers” believe that climate sensitivity as characterized by the IPCC is too high and the real climate has a lower sensitivity. I have no idea.

Models may be wrong, but I don’t have an alternative model to provide. And therefore, given that they represent climate better than any current alternative, their results are useful.

We can’t currently create a real probability distribution from a set of temperature prediction results (assuming a given emissions scenario).

How useful is it to know that under a scenario like RCP6.0 the average global temperature increase in 2100 has been simulated as variously 1ºC, 2ºC, 3ºC, 4ºC? (note, I haven’t checked the CMIP5 simulations to get each value). And the tropics will vary less, land more? As we dig into more details we will attempt to look at how reliable regional and seasonal temperature anomalies might be compared with the overall number. Likewise rainfall and other important climate values.

I do find it useful to keep the idea of a set of possible numbers with no probability assigned. Then at some stage we can say something like, “if this RCP scenario turns out to be correct and the global average surface temperature actually increases by 3ºC by 2100, we know the following are reasonable assumptions … but we currently can’t make any predictions about these other values..”

References

Long-term Climate Change: Projections, Commitments and Irreversibility, M Collins et al (2013) – In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change

Scenarios of long-term socio-economic and environmental development under climate stabilization, Keywan Riahi et al, Technological Forecasting & Social Change (2007) – free paper

Error Reduction and Convergence in Climate Prediction, Charles S Jackson et al, Journal of Climate (2008) – free paper

Notes

Note 1: As explored a little in the last article, RCP6.0 does include some changes to climate policy but it seems they are not major. I believe a very useful scenario for exploring impact assessments would be the population and development path of RCP6.0 (let’s call it RCP6.0A) without any climate policies.

For reasons of “scenario parsimony” this interesting pathway avoids attention.

Read Full Post »

In one of the iconic climate model tests, CO2 is doubled from a pre-industrial level of 280ppm to 560ppm “overnight” and we find the new steady state surface temperature. The change in CO2 is an input to the climate model, also known as a “forcing” because it is imposed from outside. That is, humans create more CO2 from generating electricity, driving automobiles and other activities – this affects the climate and the climate responds.

These experiments with simple climate models were first done with 1d radiative-convective models in the 1960s. For example, Manabe & Wetherald 1967 found a 2.3ºC surface temperature increase with constant relative humidity and 1.3ºC with constant absolute humidity (and for many reasons constant relative humidity seems likely to be closer to reality than constant absolute humidity).
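(For scale: the standard simplified expression for CO2 forcing – an approximation, not something from Manabe & Wetherald – gives ΔF = 5.35 × ln(560/280) ≈ 3.7 W/m² for a doubling.)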

In other experiments, especially more recently, more complex GCMs simulate 100 years with the CO2 concentration gradually increased, in line with projections of future emissions – and we see what happens to temperature over time.

There are also other GHGs (“greenhouse” gases / radiatively-active gases) in the atmosphere that are changing due to human activity – especially methane (CH4) and nitrous oxide (N2O). And of course, the most important GHG is water vapor, but changes in water vapor concentration are a climate feedback – that is, changes in water vapor result from temperature (and circulation) changes.

And there are aerosols, some internally generated within the climate and others emitted by human activity. These also affect the climate in a number of ways.

We don’t know what future anthropogenic emissions will be. What will humans do? Build lots more coal-fired power stations to meet the energy demand of the future? Run the entire world’s power grid from wind and solar by 2040? Finally invent practical nuclear fusion? How many people will there be?

So for this we need some scenarios of future human activity (note 1).

Scenarios – SRES and RCP

SRES was published in 2000:

In response to a 1994 evaluation of the earlier IPCC IS92 emissions scenarios, the 1996 Plenary of the IPCC requested this Special Report on Emissions Scenarios (SRES) (see Appendix I for the Terms of Reference). This report was accepted by the Working Group III (WGIII) plenary session in March 2000. The long-term nature and uncertainty of climate change and its driving forces require scenarios that extend to the end of the 21st century. This Report describes the new scenarios and how they were developed.

The SRES scenarios cover a wide range of the main driving forces of future emissions, from demographic to technological and economic developments. As required by the Terms of Reference, none of the scenarios in the set includes any future policies that explicitly address climate change, although all scenarios necessarily encompass various policies of other types.

The set of SRES emissions scenarios is based on an extensive assessment of the literature, six alternative modeling approaches, and an “open process” that solicited wide participation and feedback from many groups and individuals. The SRES scenarios include the range of emissions of all relevant species of greenhouse gases (GHGs) and sulfur and their driving forces..

..A set of scenarios was developed to represent the range of driving forces and emissions in the scenario literature so as to reflect current understanding and knowledge about underlying uncertainties. They exclude only outlying “surprise” or “disaster” scenarios in the literature. Any scenario necessarily includes subjective elements and is open to various interpretations. Preferences for the scenarios presented here vary among users. No judgment is offered in this Report as to the preference for any of the scenarios and they are not assigned probabilities of occurrence, neither must they be interpreted as policy recommendations..

..By 2100 the world will have changed in ways that are difficult to imagine – as difficult as it would have been at the end of the 19th century to imagine the changes of the 100 years since. Each storyline assumes a distinctly different direction for future developments, such that the four storylines differ in increasingly irreversible ways. Together they describe divergent futures that encompass a significant portion of the underlying uncertainties in the main driving forces. They cover a wide range of key “future” characteristics such as demographic change, economic development, and technological change. For this reason, their plausibility or feasibility should not be considered solely on the basis of an extrapolation of current economic, technological, and social trends.

The RCPs were in part a new version of the same idea as SRES, published in 2011. My understanding is that the Representative Concentration Pathways worked backwards from final values of radiative forcing in 2100 that were considered in the modeling literature – you can see this in the name of each RCP.

from A special issue on the RCPs, van Vuuren et al (2011)

By design, the RCPs, as a set, cover the range of radiative forcing levels examined in the open literature and contain relevant information for climate model runs.

[Emphasis added]

From The representative concentration pathways: an overview, van Vuuren et al (2011)

This paper summarizes the development process and main characteristics of the Representative Concentration Pathways (RCPs), a set of four new pathways developed for the climate modeling community as a basis for long-term and near-term modeling experiments.

The four RCPs together span the range of year 2100 radiative forcing values found in the open literature, i.e. from 2.6 to 8.5 W/m². The RCPs are the product of an innovative collaboration between integrated assessment modelers, climate modelers, terrestrial ecosystem modelers and emission inventory experts. The resulting product forms a comprehensive data set with high spatial and sectoral resolutions for the period extending to 2100..

..The RCPs are named according to radiative forcing target level for 2100. The radiative forcing estimates are based on the forcing of greenhouse gases and other forcing agents. The four selected RCPs were considered to be representative of the literature, and included one mitigation scenario leading to a very low forcing level (RCP2.6), two medium stabilization scenarios (RCP4.5/RCP6) and one very high baseline emission scenarios (RCP8.5).

Here are some graphs from the RCP introduction paper:

Population and GDP scenarios:

From van Vuuren et al 2011

Figure 1 – Click to expand

I was surprised by the population graph for RCP 8.5 and 6 (similar scenarios are generated in SRES). From reading various sources (but not diving into any detailed literature) I understood that the consensus was for population to peak mid-century at around 9bn people and then reduce back to something like 7-8bn people by the end of the century. This is because all countries that have experienced rising incomes have significantly reduced average fertility rates.

Here is Angus Deaton, in his fascinating and accessible book on what he calls The Great Escape – our escape from poverty and poor health:

In Africa in 1950, each woman could expect to give birth to 6.6 children; by 2000, that number had fallen to 5.1, and the UN estimates that it is 4.4 today. In Asia as well as in Latin America and the Caribbean, the decline has been even larger, from 6 children to just over 2..

The annual rate of growth of the world’s population, which reached 2.2% in 1960, was only half of that in 2011.

The GDP graph on the right (above) is lacking a definition. From the other papers covering the scenarios I understand it to be total world GDP in US$ trillions (at 2000 values, i.e. adjusted for inflation), although the numbers don’t seem to align exactly.

Energy consumption for the different scenarios:

Figure 2 – Click to expand

Annual emissions:

Figure 3 – Click to expand

Resulting concentrations in the atmosphere for CO2, CH4 (methane) and N2O (nitrous oxide):

From van Vuuren et al 2011

Figure 4 – Click to expand

Radiative forcing (for explanation of this term, see for example Wonderland, Radiative Forcing and the Rate of Inflation):

From van Vuuren et al 2011

Figure 5  – Click to expand

We can see from this figure (fig 5, their fig 10) that the RCP numbers refer to the expected radiative forcing in 2100 – so RCP8.5, often known as the “business as usual” scenario, has a radiative forcing in 2100, compared to pre-industrial values, of 8.5 W/m². And RCP6 has a radiative forcing in 2100 of 6 W/m².

We can also see from the figure on the right that increases in CO2 are the cause of almost all of the increase in forcing from current values. For example, only RCP8.5 has a higher methane (CH4) forcing than today.

Business as usual – RCP 8.5 or RCP 6?

I’ve seen RCP8.5 described as “business as usual” but it seems quite an unlikely scenario. Perhaps we need to dive into this scenario more in another article. In the meantime, part of the description from Riahi et al (2011):

The scenario’s storyline describes a heterogeneous world with continuously increasing global population, resulting in a global population of 12 billion by 2100. Per capita income growth is slow and both internationally as well as regionally there is only little convergence between high and low income countries. Global GDP reaches around 250 trillion US2005$ in 2100.

The slow economic development also implies little progress in terms of efficiency. Combined with the high population growth, this leads to high energy demands. Still, international trade in energy and technology is limited and overall rates of technological progress is modest. The inherent emphasis on greater self-sufficiency of individual countries and regions assumed in the scenario implies a reliance on domestically available resources. Resource availability is not necessarily a constraint but easily accessible conventional oil and gas become relatively scarce in comparison to more difficult to harvest unconventional fuels like tar sands or oil shale.

Given the overall slow rate of technological improvements in low-carbon technologies, the future energy system moves toward coal-intensive technology choices with high GHG emissions. Environmental concerns in the A2 world are locally strong, especially in high and medium income regions. Food security is also a major concern, especially in low-income regions and agricultural productivity increases to feed a steadily increasing population.

Compared to the broader integrated assessment literature, the RCP8.5 represents thus a scenario with high global population and intermediate development in terms of total GDP (Fig. 4).

Per capita income, however, stays at comparatively low levels of about 20,000 US $2005 in the long term (2100), which is considerably below the median of the scenario literature. Another important characteristic of the RCP8.5 scenario is its relatively slow improvement in primary energy intensity of 0.5% per year over the course of the century. This trend reflects the storyline assumption of slow technological change. Energy intensity improvement rates are thus well below historical average (about 1% per year between 1940 and 2000). Compared to the scenario literature RCP8.5 depicts thus a relatively conservative business as usual case with low income, high population and high energy demand due to only modest improvements in energy intensity.

When I heard the term “business as usual” I’m sure I wasn’t alone in understanding it like this: the world carries on without adopting serious CO2 limiting policies. That is, no international agreements on CO2 reductions, no carbon pricing, etc. And the world continues on its current trajectory of growth and development. When you look at the last 40 years, it has been quite amazing. Why would growth slow, population not follow the pathway it has followed in all countries that have seen rising prosperity, and why would technological innovation and adoption slow? It would be interesting to see a “business as usual” scenario for emissions, CO2 concentrations and radiative forcing that had a better fit to the name.

RCP 6 seems to be a closer fit than RCP 8.5 to the name “business as usual”.

RCP6 is a climate-policy intervention scenario. That is, without explicit policies designed to reduce emissions, radiative forcing would exceed 6.0 W/m² in the year 2100.

However, the degree of GHG emissions mitigation required over the period 2010 to 2060 is small, particularly compared to RCP4.5 and RCP2.6, but also compared to emissions mitigation requirement subsequent to 2060 in RCP6 (Van Vuuren et al., 2011). The IPCC Fourth Assessment Report classified stabilization scenarios into six categories as shown in Table 1. RCP6 scenario falls into the border between the fifth category and the sixth category.

Its global mean long-term, steady-state equilibrium temperature could be expected to rise 4.9° centigrade, assuming a climate sensitivity of 3.0 and its CO2 equivalent concentration could be 855 ppm (Metz et al. 2007).
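That 4.9ºC can be roughly reproduced with the standard simplified CO2 forcing formula – an assumed relationship and an assumed pre-industrial baseline, not a calculation from Masui et al:

```python
import math

# Rough check of the quoted 4.9 C, assuming F = 5.35 * ln(C/C0) for the
# radiative forcing and a sensitivity of 3 C per doubling (~3.7 W/m^2).
C0 = 278.0  # assumed pre-industrial CO2-equivalent concentration, ppm
C = 855.0   # 2100 CO2-equivalent concentration, ppm (from the quote)

forcing = 5.35 * math.log(C / C0)  # ~6.0 W/m^2 - consistent with the name "RCP6"
warming = forcing * 3.0 / 3.7      # ~4.9 C equilibrium warming
print(f"forcing = {forcing:.1f} W/m^2, warming = {warming:.1f} C")
```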

Some of the background to RCP 8.5 assumptions is in an earlier paper also by the same lead author – Riahi et al 2007, another freely accessible paper (reference below) which is worth a read, for example:

The task ahead of anticipating the possible developments over a time frame as ‘ridiculously’ long as a century is wrought with difficulties. Particularly, readers of this Journal will have sympathy for the difficulties in trying to capture social and technological changes over such a long time frame. One wonders how Arrhenius’ scenario of the world in 1996 would have looked, perhaps filled with just more of the same of his time—geopolitically, socially, and technologically. Would he have considered that 100 years later:

  • backward and colonially exploited China would be in the process of surpassing the UK’s economic output, eventually even that of all of Europe or the USA?
  • the existence of a highly productive economy within a social welfare state in his home country Sweden would elevate the rural and urban poor to unimaginable levels of personal affluence, consumption, and free time?
  • the complete obsolescence of the dominant technology cluster of the day – coal-fired steam engines?

How he would have factored in the possibility of the emergence of new technologies, especially in view of Lord Kelvin’s sobering ‘conclusion’ of 1895 that “heavier-than-air flying machines are impossible”?

Note on Comments

The Etiquette and About this Blog both explain the commenting policy in this blog. I noted briefly in the Introduction that of course questions about 100 years from now mean some small relaxation of the policy. But, in a large number of previous articles, we have discussed the “greenhouse” effect (just about to death) and so people who question it are welcome to find a relevant article and comment there – for example, The “Greenhouse” Effect Explained in Simple Terms which has many links to related articles. Questions on climate sensitivity, natural variation, and likelihood of projected future temperatures due to emissions are, of course, all still fair game in this series.

But I’ll just delete comments that question the existence of the greenhouse effect. Draconian, no doubt.

References

Emissions Scenarios, IPCC (2000) – free report

A special issue on the RCPs, Detlef P van Vuuren et al, Climatic Change (2011) – free paper

The representative concentration pathways: an overview, Detlef P van Vuuren et al, Climatic Change (2011) – free paper

RCP4.5: a pathway for stabilization of radiative forcing by 2100, Allison M. Thomson et al, Climatic Change (2011) – free paper

An emission pathway for stabilization at 6 W/m² radiative forcing, Toshihiko Masui et al, Climatic Change (2011) – free paper

RCP 8.5—A scenario of comparatively high greenhouse gas emissions, Keywan Riahi et al, Climatic Change (2011) – free paper

Scenarios of long-term socio-economic and environmental development under climate stabilization, Keywan Riahi et al, Technological Forecasting & Social Change (2007) – free paper

Thermal equilibrium of the atmosphere with a given distribution of relative humidity, S Manabe, RT Wetherald, Journal of the Atmospheric Sciences (1967) – free paper

The Great Escape, Health, Wealth and the Origins of Inequality, Angus Deaton, Princeton University Press (2013) – book

Notes

Note 1: Even if we knew future anthropogenic emissions accurately it wouldn’t give us the whole picture. The climate has sources and sinks for CO2 and methane and there is some uncertainty about them, especially how well they will operate in the future. That is, anthropogenic emissions are modified by the feedback of sources and sinks for these emissions.

Read Full Post »
