Archive for February, 2017

[I started writing this some time ago and got side-tracked – initially because aerosol interactions with clouds and rainfall are fascinating, with lots of current research, and then because there are many papers on higher-resolution simulations of convection that also looked interesting. So I decided to post it less than complete, because it will be some time before I can put together a fuller article.]

In Part Four of this series we looked at the paper by Mauritsen et al (2012). Isaac Held has a very interesting post on his blog – people interested in understanding climate science will benefit from reading it (he has been in the field writing papers for 40 years). He highlighted this paper: Cloud tuning in a coupled climate model: Impact on 20th century warming, Jean-Christophe Golaz, Larry W. Horowitz, and Hiram Levy II, GRL (2013).

Their paper has many similarities to Mauritsen et al (2012). Here are some of their comments:

Climate models incorporate a number of adjustable parameters in their cloud formulations. They arise from uncertainties in cloud processes. These parameters are tuned to achieve a desired radiation balance and to best reproduce the observed climate. A given radiation balance can be achieved by multiple combinations of parameters. We investigate the impact of cloud tuning in the CMIP5 GFDL CM3 coupled climate model by constructing two alternate configurations.

They achieve the desired radiation balance using different, but plausible, combinations of parameters. The present-day climate is nearly indistinguishable among all configurations.

However, the magnitude of the aerosol indirect effects differs by as much as 1.2 W/m², resulting in significantly different temperature evolution over the 20th century..

 

..Uncertainties that arise from interactions between aerosols and clouds have received considerable attention due to their potential to offset a portion of the warming from greenhouse gases. These interactions are usually categorized into first indirect effect (“cloud albedo effect”; Twomey [1974]) and second indirect effect (“cloud lifetime effect”; Albrecht [1989]).

Modeling studies have shown large spreads in the magnitudes of these effects [e.g., Quaas et al., 2009]. CM3 [Donner et al., 2011] is the first Geophysical Fluid Dynamics Laboratory (GFDL) coupled climate model to represent indirect effects.

As in other models, the representation in CM3 is fraught with uncertainties. In particular, adjustable cloud parameters used for the purpose of tuning the model radiation can also have a significant impact on aerosol effects [Golaz et al., 2011]. We extend this previous study by specifically investigating the impact that cloud tuning choices in CM3 have on the simulated 20th century warming.

What did they do?

They adjusted the “autoconversion threshold radius”, which controls when water droplets turn into rain.

Autoconversion converts cloud water to rain. The conversion occurs once the mean cloud droplet radius exceeds rthresh. Larger rthresh delays the formation of rain and increases cloudiness.

The default in CM3 was 8.2 μm. They selected alternate values from other GFDL models: 6.0 μm (CM3w) and 10.6 μm (CM3c). Of course, they then have to adjust other parameters to achieve radiation balance – the “erosion time” (a lateral mixing effect reducing water in clouds), which they note is poorly constrained (that is, we don’t have external knowledge of the correct value for this parameter), and the “velocity variance”, which affects how quickly water vapor condenses onto aerosols.
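To make the role of the threshold radius concrete, here is a minimal sketch of a Kessler-style autoconversion step – an illustrative toy, not the actual CM3 scheme; the rate constant and the example droplet numbers are assumptions:

```python
# Toy autoconversion: cloud water converts to rain only once the mean droplet
# radius exceeds a threshold radius r_thresh. Illustrative only - NOT the CM3 scheme.

def mean_droplet_radius_m(q_cloud, n_drop, rho_air=1.0, rho_water=1000.0):
    """Mean droplet radius (m) from cloud water mixing ratio (kg/kg) and
    droplet number concentration (1/m^3), assuming equal-sized drops."""
    mass_per_drop = rho_air * q_cloud / n_drop              # kg per drop
    volume_per_drop = mass_per_drop / rho_water             # m^3 per drop
    return (3.0 * volume_per_drop / (4.0 * 3.141592653589793)) ** (1.0 / 3.0)

def autoconversion_rate(q_cloud, n_drop, r_thresh, c0=1e-3):
    """Cloud-to-rain conversion rate (kg/kg/s): zero below the threshold,
    proportional to cloud water above it (c0 is an assumed rate constant)."""
    return c0 * q_cloud if mean_droplet_radius_m(q_cloud, n_drop) > r_thresh else 0.0

# Same cloud water and droplet number, three threshold radii (m).
# A larger r_thresh delays rain formation and leaves more cloud behind.
q_c, n_d = 5e-4, 1.5e8
for r_thresh in (6.0e-6, 8.2e-6, 10.6e-6):
    print(f"r_thresh = {r_thresh*1e6:.1f} um: rate = {autoconversion_rate(q_c, n_d, r_thresh):.1e}")
```

With these (made-up) cloud properties the mean droplet radius comes out around 9 μm, so rain forms for the 6.0 and 8.2 μm thresholds but not for 10.6 μm – the qualitative behaviour described in the quote above.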

Here is the time evolution in the three models (and also observations):

 

From Golaz et al 2013

Figure 1 – Click to enlarge

In terms of present-day climatology the three variants are very close, but their 20th century warming differs markedly, and only CM3w is close to the observations.

Here is their conclusion, well worth studying. I reproduce it in full:

CM3w predicts the most realistic 20th century warming. However, this is achieved with a small and less desirable threshold radius of 6.0 μm for the onset of precipitation.

Conversely, CM3c uses a more desirable value of 10.6 μm but produces a very unrealistic 20th century temperature evolution. This might indicate the presence of compensating model errors. Recent advances in the use of satellite observations to evaluate warm rain processes [Suzuki et al., 2011; Wang et al., 2012] might help understand the nature of these compensating errors.

CM3 was not explicitly tuned to match the 20th century temperature record.

However, our findings indicate that uncertainties in cloud processes permit a large range of solutions for the predicted warming. We do not believe this to be a peculiarity of the CM3 model.

Indeed, numerous previous studies have documented a strong sensitivity of the radiative forcing from aerosol indirect effects to details of warm rain cloud processes [e.g., Rotstayn, 2000; Menon et al., 2002; Posselt and Lohmann, 2009; Wang et al., 2012].

Furthermore, in order to predict a realistic evolution of the 20th century, models must balance radiative forcing and climate sensitivity, resulting in a well-documented inverse correlation between forcing and sensitivity [Schwartz et al., 2007; Kiehl, 2007; Andrews et al., 2012].

This inverse correlation is consistent with an intercomparison-driven model selection process in which “climate models’ ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication” [Mauritsen et al., 2012].

Very interesting paper, and freely available. Kiehl’s paper, referenced in the conclusion, is also well worth reading. In it he shows that the models with the highest sensitivity to GHGs have the largest negative forcing from 20th century aerosols, while the models with the lowest sensitivity to GHGs have the smallest negative aerosol forcing. Therefore both ends of the range can reproduce 20th century temperature anomalies, while suggesting very different 21st century temperature evolution.

A paper using higher resolution models, Seifert et al 2015, ran model experiments with “large eddy simulations” (LES), which are much higher resolution than GCMs. The best resolution GCMs today typically have a grid size around 100 km x 100 km. Their LES model had a grid size of 25 m x 25 m, with 2048 x 2048 x 200 grid points spanning a simulated volume of 51.2 km x 51.2 km x 5 km, and ran for a simulated 60-hour period.
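A quick back-of-envelope check on those numbers (my own arithmetic, using only the grid figures quoted above):

```python
# Back-of-envelope check on the Seifert et al 2015 LES grid quoted above.
nx, ny, nz = 2048, 2048, 200
dx = 25.0                                   # horizontal grid spacing in metres

horizontal_extent_km = nx * dx / 1000.0     # 2048 * 25 m = 51.2 km
total_cells = nx * ny * nz                  # ~8.4e8 grid cells

# One 100 km x 100 km GCM grid column covers the same area as:
les_columns_per_gcm_cell = (100_000 / dx) ** 2   # 16 million LES columns

print(horizontal_extent_km, f"{total_cells:.2e}", f"{les_columns_per_gcm_cell:.0f}")
```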

They had this to say about the aerosol indirect effect:

It has also been conjectured that changes in CCN might influence cloud macrostructure. Most prominently, Albrecht [1989] argued that processes which reduce the average size of cloud droplets would retard and reduce the rain formation in clouds, resulting in longer-lived clouds. Longer living clouds would increase cloud cover and reflect even more sunlight, further cooling the atmosphere and surface. This type of aerosol-cloud interaction is often called a lifetime effect. Like the Twomey effect, the idea that smaller particles will form rain less readily is based on sound physical principles.

Given this strong foundation, it is somewhat surprising that empirical evidence for aerosol impacts on cloud macrophysics is so thin.

Twenty-five years after Albrecht’s paper, the observational evidence for a lifetime effect in the marine cloud regimes for which it was postulated is limited and contradictory. Boucher et al. [2013] who assess the current level of our understanding, identify only one study, by Yuan et al. [2011], which provides observational evidence consistent with a lifetime effect. In that study a natural experiment, outgassing of SO2 by the Kilauea volcano is used to study the effect of a changing aerosol environment on cloud macrophysical processes.

But even in this case, the interpretation of the results are not without ambiguity, as precipitation affects both the outgassing aerosol precursors and their lifetime. Observational studies of ship tracks provide another inadvertent experiment within which one could hope to identify lifetime effects [Conover, 1969; Durkee et al., 2000; Hobbs et al., 2000], but in many cases the opposite response of clouds to aerosol perturbations is observed: some observations [Christensen and Stephens, 2011; Chen et al., 2012] are consistent with more efficient mixing of smaller cloud drops leading to more rapid cloud desiccation and shorter lifetimes.

Given the lack of observational evidence for a robust lifetime effect, it seems fair to question the viability of the underlying hypothesis.

In their paper they show a graphic of what their model produced; it’s not the main dish, but interesting for its realism:

From Seifert et al 2015

Figure 2 – Click to expand

It is an involved paper, but here is one of the conclusions, relevant for the topic at hand:

Our simulations suggest that parameterizations of cloud-aerosol effects in large-scale models are almost certain to overstate the impact of aerosols on cloud albedo and rain rate. The process understanding on which the parameterizations are based is applicable to isolated clouds in constant environments, but necessarily neglects interactions between clouds, precipitation, and circulations that, as we have shown, tend to buffer much of the impact of aerosol change.

For people new to parameterizations, a couple of articles that might be useful:

Conclusion

Climate models necessarily make some massive oversimplifications, as we can see from the large eddy simulation with its 25 m x 25 m grid cells while GCMs have 100 km x 100 km at best. Even LES models have simplifications – to get to direct numerical simulation (DNS) of the equations for turbulent flow we would need a grid scale closer to a few millimeters rather than meters.
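To put the remaining gap to DNS in rough numbers (my own illustrative arithmetic, assuming "a few millimeters" means something like 5 mm):

```python
# Rough scale gap between the 25 m LES grid and a hypothetical ~5 mm DNS grid.
les_dx = 25.0        # metres
dns_dx = 0.005       # metres - an assumed "few millimetres" figure

ratio_per_dimension = les_dx / dns_dx        # 5000x finer in each direction
ratio_in_3d = ratio_per_dimension ** 3       # ~1.25e11 times more grid cells
print(ratio_per_dimension, f"{ratio_in_3d:.2e}")
```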

The over-simplifications in GCMs require “parameters” which are not actually intrinsic material properties, but are more an average of some part of a climate process over a large scale. (Note that even if we had the resolution for the actual fundamental physics we wouldn’t necessarily know the required material parameters, especially in the case of aerosols, which are heterogeneous in time and space.)

As the climate changes will these “parameters” remain constant, or change as the climate changes?

References

Cloud tuning in a coupled climate model: Impact on 20th century warming, Jean-Christophe Golaz, Larry W. Horowitz, and Hiram Levy II, GRL (2013) – free paper

Twentieth century climate model response and climate sensitivity, Jeffrey T. Kiehl, GRL (2007) – free paper

Large-eddy simulation of the transient and near-equilibrium behavior of precipitating shallow convection, Axel Seifert et al, Journal of Advances in Modeling Earth Systems (2015) – free paper

Read Full Post »

In Parts VI and VII we looked at past and projected sea level rise. It is clear that the sea level has risen over the last hundred years, and it’s clear that with more warming sea level will rise some more. The uncertainties (given a specific global temperature increase) are more around how much more ice will melt than how much the ocean will expand (warmer water expands). Future sea level rise will clearly affect some people in the future, but very differently in different countries and regions. This article considers the US.

A month or two ago, via a link from a blog, I found a paper which revised upwards a current estimate (or average of such estimates) of damage due to sea level rise in the US in 2100. Unfortunately I can’t find the paper, but essentially the idea was that people would continue moving to the ocean in ever increasing numbers, and combined with a possible 1m+ sea level rise (see Parts VI & VII) the cost in the US would be around $1 trillion. (I can’t remember the details, but my memory tells me this paper concluded costs were 3x previous estimates due to this ever increasing population move to coastal areas – in any case, the exact numbers aren’t important.)

Two examples that I could find (on global movement of people rather than just in the US), Nicholls 2011:

..This threatened population is growing significantly (McGranahan et al., 2007), and it will almost certainly increase in the coming decades, especially if the strong tendency for coastward migration continues..

And Anthoff et al 2010

Fifthly, building on the fourth point, FUND assumes that the pattern of coastal development persists and attracts future development. However, major disasters such as the landfall of hurricanes could trigger coastal abandonment, and hence have a profound influence on society’s future choices concerning coastal protection as the pattern of coastal occupancy might change radically.

A cycle of decline in some coastal areas is not inconceivable, especially in future worlds where capital is highly mobile and collective action is weaker. As the issue of sea-level rise is so widely known, disinvestment from coastal areas may even be triggered without disasters..

I was struck by the “trillion dollar problem” paper and the general issues highlighted in other papers. The future cost of sea level rise in the US is not just bad, it’s extremely expensive because people will keep moving to the ocean.

Why are people moving to the coast?

So here is an obvious take on the subject that doesn’t need an IAM (integrated assessment model). Perhaps lots of people missed the IPCC TAR (third assessment report) in 2001. Perhaps anthropogenic global warming fears had not reached a lot of the population. Maybe it didn’t get a lot of media coverage. But surely no one could have missed Al Gore’s movie. I mean, I missed it from choice, but how could anyone in rich countries not know about the discussion?

So anyone who, since 2006 (an arbitrary line in the sand), bought a house that is susceptible to sea level rise is responsible for whatever loss they incur around 2100. That is, if the worst fears about sea level rise play out, combined with more extreme storms (subject of a future article) which create larger ocean storm surges, their house won’t be worth much in 2100.

Now, barring large increases in life expectancy, anyone who bought a house in 2005 will almost certainly be dead in 2100. There will be a few unlucky centenarians.

Think of it as an estate tax. People who have expensive ocean-front houses will pass on their now worthless house to their children or grandchildren. Some people love the idea of estate taxes – in that case you have a positive. Some people hate the idea of estate taxes – in that case strike it up as a negative. And, drawing a long bow here, I suspect a positive correlation between concern about climate change and belief in the positive nature of estate taxes, so possibly it’s a win-win for many people.

Now onto infrastructure.

From time to time I’ve had to look at depreciation and official asset life for different kinds of infrastructure and I can’t remember seeing one for 100 years. 50 years maybe for civil structures. I’m definitely not an expert. That said, even if the “official depreciation” gives something a life of 50 years, much is still being used 150 years later – buildings, railways, and so on.

So some infrastructure very close to the ocean might have to be abandoned. But it will have had 100 years of useful life and that is pretty good in public accounting terms.

Why is anyone building housing, roads, power stations, public buildings, railways and airports in the US in locations that will possibly be affected by sea level rise in 2100? Maybe no one is.

So the cost of sea level rise for 2100 in the US seems to be close to a zero-cost problem.

These days, if a particular area is recognized as a flood plain people are discouraged from building on it and no public infrastructure gets built there. It’s just common sense.

Some parts of New Orleans were already below sea level when Hurricane Katrina hit. Following that disaster, lots of people moved out of New Orleans to a safer suburb. Lots of people stayed. Their problems will surely get worse with a warmer climate and a higher sea level (and also if storms gets stronger – subject of a future article). But they already had a problem. Infrastructure was at or below sea level and sufficient care was not taken of their coastal defences.

A major problem that happens overnight, or over a year, is difficult to deal with. A problem 100 years from now that affects a tiny percentage of the land area of a country, even with a large percentage (relatively speaking) of population living there today, is a minor problem.

Perhaps the costs of recreating current threatened infrastructure a small distance inland are very high, and the existing infrastructure would in fact have lasted more than 100 years. In that case, people who believe in Keynesian economics might find the economic stimulus to be a positive. People who don’t think Keynesian economics does anything (no multiplier effect) except increase taxes, or divert productive resources into less productive ones, will find it to be a negative. Once again, drawing a long bow, I see a correlation between people more concerned about climate change also being more likely to find Keynesian economics a positive. Perhaps again, there is a win-win.

In summary, given the huge length of time to prepare for it, US sea level rise seems like a minor planning inconvenience combined with an estate tax.

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

References

Planning for the impacts of sea level rise, RJ Nicholls, Oceanography (2011)

The economic impact of substantial sea-level rise, David Anthoff et al, Mitig Adapt Strateg Glob Change (2010)

Read Full Post »

In Part VI we looked at past and projected sea level rise. There is significant uncertainty in future sea level rise, even assuming we know the future global temperature change. The uncertainty results from “how much ice will melt?”

We can be reasonably sure of sea level rise from thermal expansion (so long as we know the temperature). By contrast, we don’t have much confidence in the contribution from melting ice (on land). This is because ice sheet dynamics (glaciers, Greenland & Antarctic ice sheet) are non-linear and not well understood.

Here’s something surprising. Suppose you live in Virginia near the ocean. And suppose all of the Greenland ice sheet melted in a few years (not possible, but just suppose). How much would sea level change in Virginia? Hint: the entire Greenland ice sheet converted into global mean sea level is about 7m.

Zero change in Virginia.

Here are charts of relative sea level change across the globe for Greenland & West Antarctica, based on a 1mm/yr contribution from each location – click to expand:

From Tamisiea 2011

Figure 1 – Click to Expand

We see that the sea level actually drops close to Greenland, stays constant around mid-northern latitudes in the Atlantic and rises in other locations. The reason is simple – the Greenland ice sheet is a local gravitational attractor and is “pulling the ocean up” towards Greenland. Once it is removed, the local sea level drops.

Uncertainty

If we knew for sure that the global mean temperature in 2100 would be +2ºC or +3ºC compared to today we would have a good idea in each case of the sea level rise from thermal expansion. But not much certainty on any rise from melting ice sheets.

Let’s consider someone thinking about the US for planning purposes. If the Greenland ice sheet contributes lots of melting ice, the sea level on the US Atlantic coast won’t be affected at all and the increase on the Pacific coast will be significantly less than the overall sea level rise. In this case, the big uncertainty in the magnitude of sea level rise is not much of a factor for most of the US.

If the West Antarctic ice sheet contributes lots of melting ice, the sea level on the east and west coasts of the US will be affected by more than the global mean sea level rise.

For example, imagine the sea level was expected to rise 0.3m from thermal expansion by 2100. But there is a fear that ice melting will cause 0 – 0.5m global rise. A US policymaker really needs to know which ice sheet will melt. The “we expect at most an additional 0.5m from melting ice” tells her that she might have – in total – a maximum sea level rise of 0.3m on the east coast and a little more than 0.3m on the west coast if Greenland melts; but she instead might have – in total – a maximum of almost 1m on each coast if West Antarctica melts.
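Here is that planning arithmetic as a toy calculation – the regional fingerprint factors below are rough illustrative values in the spirit of Figure 1, not numbers taken from Tamisiea & Mitrovica:

```python
# Toy planning arithmetic for US coasts: thermal expansion plus an ice-melt
# contribution scaled by an assumed regional "fingerprint" factor.
thermal_expansion_m = 0.30    # assumed global thermosteric rise by 2100
ice_melt_global_m = 0.50      # upper end of the feared ice contribution

fingerprints = {
    # (US east coast factor, US west coast factor) relative to the global mean;
    # illustrative values only.
    "Greenland melts":       (0.0, 0.7),
    "West Antarctica melts": (1.2, 1.2),
}

for source, (east, west) in fingerprints.items():
    total_east = thermal_expansion_m + east * ice_melt_global_m
    total_west = thermal_expansion_m + west * ice_melt_global_m
    print(f"{source}: east coast ~{total_east:.2f} m, west coast ~{total_west:.2f} m")
```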

The source of the melting ice just magnifies the uncertainty for policy and economics.

If this 10th century legend were still with us maybe it would be different (we only have his tweets):

Donaeld the Unready

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

References

The moving boundaries of sea level change: Understanding the origins of geographic variability, ME Tamisiea & JX Mitrovica, Oceanography (2011)

Read Full Post »

In Part V we looked at the IPCC – an outlier organization – which claimed floods, droughts and storms had not changed in a measurable way globally in the last 50-100 years (of course, some regions have seen increases and some have seen decreases; some decades have been bad, some have been good).

This puts them at a disadvantage compared with the overwhelming mass of NGOs, environmental groups, media outlets and various government departments who claim the opposite, but the contrarian in me found their research too interesting to ignore. Plus, they come with references to papers in respectable journals.

We haven’t looked at future projections of these events as yet. Whenever there are competing effects to create a result we can expect it to be difficult to calculate future effects. In contrast, one climate effect that we can be sure about is sea level. If the world warms, as it surely will with more GHGs, we can expect sea level to rise.

In my own mental list of “bad stuff to happen”, I had sea level rise as an obvious #1 or #2. But ideas and opinions need to be challenged and I had not really investigated the impacts.

The world is a big place and rising sea level will have different impacts in different places. Generally the media presentation on sea level is unrelentingly negative, probably following the impact of the impressive 2004 documentary directed by Roland Emmerich, and the dramatized adaptation by Al Gore in 2006 (directed by Davis Guggenheim).

Let’s start by looking at some sea level basics.

Like everything else related to climate, getting an accurate global dataset on sea level is difficult – especially when we want consistency over decades.

The world is a big place and past climatological measurements were mostly focused on collecting local weather data for the country or region in question. Satellites started measuring climate globally in the late 1970s, but satellites for sea level and mass balance didn’t begin measurements until 10-20 years ago. So, climate scientists attempt to piece together disparate data systems, to reconcile them, and to match up the results with what climate models calculate – call it “a sea level budget”.

“The budget” means balancing two sides of the equation:

  • how has sea level changed year by year and decade by decade?
  • what contributions to sea level do we calculate from the effects of warming climate?
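Written out, the budget is a check that something like the following relation holds, period by period – a schematic in my own notation, not a formula quoted from AR5:

```latex
\Delta h_{\mathrm{observed}} \;\approx\; \Delta h_{\mathrm{thermosteric}}
  + \Delta h_{\mathrm{glaciers}} + \Delta h_{\mathrm{Greenland}}
  + \Delta h_{\mathrm{Antarctica}} + \Delta h_{\mathrm{land\;water}}
```

A "closed" budget means the residual between the two sides is small compared with the individual terms.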

Components of Sea Level Rise

If we imagine sea level as the level in a large bathtub it is relatively simple conceptually. As the ocean warms the level rises for two reasons:

  • warmer water expands (increasing the volume of existing mass)
  • ice melts (adding mass)

The “material properties” of water are well known and not in doubt. With lots of measurements of ocean temperature around the globe we can be relatively sure of the expansion. Ocean temperature has increasing coverage over the last 100 years, especially since the Argo project that started a little more than 10 years ago. But if we go back 30 years we have far fewer measurements, and usually only at the surface. If we go back 100 years we have fewer still. So there are questions and uncertainties.
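As a rough worked example of the expansion term (my own illustrative numbers: a typical thermal expansion coefficient for seawater of about 2 × 10⁻⁴ per °C, applied to an upper-ocean layer):

```python
# Rough thermosteric rise: warm a layer of depth H by dT, with volumetric
# thermal expansion coefficient alpha. All values are illustrative.
alpha = 2.0e-4    # per deg C, a typical mid-range value for seawater
H = 2000.0        # metres of ocean assumed to warm
dT = 0.5          # deg C of warming averaged over that layer (assumed)

rise_m = alpha * H * dT    # = 0.2 m
print(rise_m)
```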

Arctic sea ice melting has no impact on sea level because it is already floating – water or ice that is already floating doesn’t change the sea level by melting or freezing. Ice on a continent that melts and runs into the ocean increases sea level by adding mass. Parts of the Antarctic ice sheet extend into the ocean as ice shelves, but the bulk of the ice sheet is supported by the continent of Antarctica – melt that grounded ice and it will add to sea level.

Sea level over the last 100 years has increased by about 0.20m (about 8 inches, if we use advanced US units).

To put it into one perspective, 20,000 years ago the sea level was about 120m lower than today – this was the end of the last ice age. About 130,000 years ago the sea level was a few meters higher (no one is certain of the exact figure). This was the time of the last “interglacial” (called the Eemian interglacial).

If we melted all of Greenland’s ice sheet we would see a further 7m rise from today, and Greenland and Antarctica together would lead to a 70m rise. Over millennia (but not a century), the complete Greenland ice sheet melting is a possibility, but Antarctica is not (at around -30ºC, it is a very long way below freezing).

Complications

Why not use tide gauges to measure sea level rise? Some have been around for 100 years and a few have been around for 200 years.

There aren’t many tide gauges going back a long time, and anyway in many places the ground is moving relative to the ocean. Take Scandinavia. During the last ice age Stockholm was buried under perhaps 2 km of ice. No wonder Scandinavians today appear so cheerful – life is all about contrasts. As the ice melted, the load on the ground was removed and the land is “springing back” into its pre-glacial position. So in many places around the globe the land is moving vertically relative to sea level.

In Nedre Gavle, Sweden, the land is moving up twice as fast as the average global sea level rise (so relative sea level is falling). In Galveston, Texas the land is moving down faster than sea level rise (more than doubling apparent sea level rise).
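The tide-gauge arithmetic is simply relative change = global rise minus local land uplift. Here is a sketch with illustrative rates roughly consistent with the two examples above (not exact published figures):

```python
# Relative sea level change at a tide gauge = global (absolute) rise - land uplift.
# Rates in mm/yr; the site values are illustrative, not precise published figures.
global_rise = 1.7    # roughly the 20th century global mean rate

sites = {
    "Nedre Gavle, Sweden (post-glacial uplift)": +3.5,   # land rising
    "Galveston, Texas (subsidence)":             -2.0,   # land sinking
}

for site, land_uplift in sites.items():
    relative = global_rise - land_uplift
    print(f"{site}: relative sea level change ~{relative:+.1f} mm/yr")
```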

That is the first complication.

The second complication is due to wind, and to local density changes from salinity. As an example, picture a constant global sea level, but the Pacific winds switch from blowing west-to-east to east-to-west. The water will “pile up” in the west instead of the east due to the force of the wind, so relative sea level will increase in the west and decrease in the east. Likewise, if the local density changes from melting ice (or from ocean currents with different salinity), the local sea level will change relative to the global mean.

Here is AR5, chapter 3, p. 288:

Large-scale spatial patterns of sea level change are known to high precision only since 1993, when satellite altimetry became available.

These data have shown a persistent pattern of change since the early 1990s in the Pacific, with rates of rise in the Warm Pool of the western Pacific up to three times larger than those for GMSL, while rates over much of the eastern Pacific are near zero or negative.

The increasing sea level in the Warm Pool started shortly before the launch of TOPEX/Poseidon, and is caused by an intensification of the trade winds since the late 1980s that may be related to the Pacific Decadal Oscillation (PDO).

The lower rate of sea level rise since 1993 along the western coast of the United States has also been attributed to changes in the wind stress curl over the North Pacific associated with the PDO..

Measuring Systems

We can find a little about the new satellite systems in IPCC, AR5, chapter 3, p. 286:

Satellite radar altimeters in the 1970s and 1980s made the first nearly global observations of sea level, but these early measurements were highly uncertain and of short duration. The first precise record began with the launch of TOPEX/Poseidon (T/P) in 1992. This satellite and its successors (Jason-1, Jason-2) have provided continuous measurements of sea level variability at 10-day intervals between approximately ±66° latitude. Additional altimeters in different orbits (ERS-1, ERS-2, Envisat, Geosat Follow-on) have allowed for measurements up to ±82° latitude and at different temporal sampling (3 to 35 days), although these measurements are not as accurate as those from the T/P and Jason satellites.

Unlike tide gauges, altimetry measures sea level relative to a geodetic reference frame (classically a reference ellipsoid that coincides with the mean shape of the Earth, defined within a globally realized terrestrial reference frame) and thus will not be affected by VLM, although a small correction that depends on the area covered by the satellite (~0.3 mm yr–1) must be added to account for the change in location of the ocean bottom due to GIA relative to the reference frame of the satellite (Peltier, 2001; see also Section 13.1.2).

Tide gauges and satellite altimetry measure the combined effect of ocean warming and mass changes on ocean volume. Although variations in the density related to upper-ocean salinity changes cause regional changes in sea level, when globally averaged their effect on sea level rise is an order of magnitude or more smaller than thermal effects (Lowe and Gregory, 2006).

The thermal contribution to sea level can be calculated from in situ temperature measurements (Section 3.2). It has only been possible to directly measure the mass component of sea level since the launch of the Gravity Recovery and Climate Experiment (GRACE) in 2002 (Chambers et al., 2004). Before that, estimates were based either on estimates of glacier and ice sheet mass losses or using residuals between sea level measured by altimetry or tide gauges and estimates of the thermosteric component (e.g., Willis et al., 2004; Domingues et al., 2008), which allowed for the estimation of seasonal and interannual variations as well. GIA also causes a gravitational signal in GRACE data that must be removed in order to determine present-day mass changes; this correction is of the same order of magnitude as the expected trend and is still uncertain at the 30% level (Chambers et al., 2010).

The GRACE satellite lets us see how much ice has melted into the ocean. It’s not easy to calculate this otherwise.

The fourth assessment report from the IPCC in 2007 reported that sea level rise from the Antarctic ice sheet for the previous decade was between -0.3mm/yr and +0.5mm/yr. That is, without the new satellite measurements, it was very difficult to confirm whether Antarctica had been gaining or losing ice.

Historical Sea Level Rise

From AR5, chapter 3, p. 287:

From AR5, chapter 3

Figure 1 – Click to expand

  • The top left graph shows that various researchers are fairly close in their calculations of overall sea level rise over the past 130 years
  • The bottom left graph shows that over the last 40 years the impact of melting ice has been more important than the expansion of a warmer ocean (“thermosteric component” = the effect of a warmer ocean expanding)
  • The bottom right graph shows that over the last 7 years the measurements are consistent – satellite measurement of sea level change matches the sum of mass loss (melting ice) plus an expanding ocean (the measurements from Argo turned into sea level rise).

This gives us the mean sea level. Remember that local winds, ocean currents and changes in salinity can change this trend locally.

Many people have written about the recent accelerating trends in sea level rise. Here is AR5 again, with a graph of the 18-year trend at each point in time. We can see that different researchers reach different conclusions and that the warming period in the first half of the 20th century created sea level rise comparable to today:

From AR5, chapter 3

The conclusion in AR5:

It is virtually certain that globally averaged sea level has risen over the 20th century, with a very likely mean rate between 1900 and 2010 of 1.7 [1.5 to 1.9] mm/yr and 3.2 [2.8 to 3.6] mm/yr between 1993 and 2010.

This assessment is based on high agreement among multiple studies using different methods, and from independent observing systems (tide gauges and altimetry) since 1993.

It is likely that a rate comparable to that since 1993 occurred between 1920 and 1950, possibly due to a multi-decadal climate variation, as individual tide gauges around the world and all reconstructions of GMSL show increased rates of sea level rise during this period.
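Those rates are consistent with the roughly 0.2 m rise over the last hundred years mentioned earlier (a quick check, my arithmetic):

```python
# Quick consistency check of the AR5 rates against the ~0.2 m century rise.
total_1900_2010_m = 1.7 * (2010 - 1900) / 1000.0    # 1.7 mm/yr over 110 yr ~ 0.19 m
recent_1993_2010_m = 3.2 * (2010 - 1993) / 1000.0   # 3.2 mm/yr over 17 yr ~ 0.05 m
print(total_1900_2010_m, recent_1993_2010_m)
```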

Forecast Future Sea Level Rise

AR5, chapter 13 is the place to find predictions of the future on sea level, p. 1140:

For the period 2081–2100, compared to 1986–2005, global mean sea level rise is likely (medium confidence) to be in the 5 to 95% range of projections from process-based models, which give:

  • 0.26 to 0.55 m for RCP2.6
  • 0.32 to 0.63 m for RCP4.5
  • 0.33 to 0.63 m for RCP6.0
  • 0.45 to 0.82 m for RCP8.5

For RCP8.5, the rise by 2100 is 0.52 to 0.98 m..

We have considered the evidence for higher projections and have concluded that there is currently insufficient evidence to evaluate the probability of specific levels above the assessed likely range. Based on current understanding, only the collapse of marine-based sectors of the Antarctic ice sheet, if initiated, could cause global mean sea level to rise substantially above the likely range during the 21st century.

This potential additional contribution cannot be precisely quantified but there is medium confidence that it would not exceed several tenths of a meter of sea level rise during the 21st century.

I highlighted RCP6.0 as this seems to correspond to past development pathways with little in the way of CO2 mitigation policy. No one knows the future; this is just my pick, barring major changes from the recent past.

In the next article we will consider impacts of future sea level rise in various regions.

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

References

Observations: Oceanic Climate Change and Sea Level. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, NL Bindoff et al (2007)

Observations: Ocean. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, M Rhein et al (2013)

Sea Level Change. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, JA Church et al (2013)

Read Full Post »

I generally try to avoid the media as much as possible (although the 2016 Circus did suck me in) but it’s still impossible to miss claims like the following:

Climate change is already causing worsening storms, floods and droughts

Before looking at predictions for the future I thought it was worth reviewing this claim, seeing as it is so prevalent and is presented as being the current consensus of climate science.

Droughts

SREX 2012, p. 171:

There is medium confidence that since the 1950s some regions of the world have experienced more intense and longer droughts (e.g., southern Europe, west Africa) but also opposite trends exist in other regions (e.g., central North America, northwestern Australia).

The report cites Sheffield and Wood 2008 who show graphs on a variety of drought metrics from around the world over the last 50 years – click to enlarge:

From Sheffield & Wood 2008

Figure 1 – Click to enlarge

The results above were calculated from models based on available meteorological data. According to their analysis some places have experienced more droughts, and other places fewer. Because the results are based on models we can expect that other researchers may produce different results.

AR5, published a year after SREX, says, chapter 2, p. 214-215:

Because drought is a complex variable and can at best be incompletely represented by commonly used drought indices, discrepancies in the interpretation of changes can result. For example, Sheffield and Wood (2008) found decreasing trends in the duration, intensity and severity of drought globally. Conversely, Dai (2011a,b) found a general global increase in drought, although with substantial regional variation and individual events dominating trend signatures in some regions (e.g., the 1970s prolonged Sahel drought and the 1930s drought in the USA and Canadian Prairies). Studies subsequent to these continue to provide somewhat different conclusions on trends in global droughts and/or dryness since the middle of the 20th century (Sheffield et al., 2012; Dai, 2013; Donat et al., 2013c; van der Schrier et al., 2013)..

..In summary, the current assessment concludes that there is not enough evidence at present to suggest more than low confidence in a global-scale observed trend in drought or dryness (lack of rainfall) since the middle of the 20th century, owing to lack of direct observations, geographical inconsistencies in the trends, and dependencies of inferred trends on the index choice.

Based on updated studies, AR4 conclusions regarding global increasing trends in drought since the 1970s were probably overstated.

The paper by Dai is Drought under global warming: a review, A Dai, Climate Change (2011) – for some reason I am unable to access it.

A later paper in Nature, Trenberth et al 2013 (including both Sheffield and Dai as co-authors) said:

Two recent papers looked at the question of whether large-scale drought has been increasing under climate change. A study in Nature by Sheffield et al entitled ‘Little change in global drought over the past 60 years’ was published at almost the same time that ‘Increasing drought under global warming in observations and models’ by Dai appeared in Nature Climate Change (published online in August 2012). How can two research groups arrive at such seemingly contradictory conclusions?

Another later paper on droughts, Orlowsky & Seneviratne 2013, likewise shows overwhelming evidence of more droughts – click to enlarge:

From Orlowsky & Seneviratne 2013

Figure 2 – Click to enlarge

Floods

SREX 2012, p. 177:

Overall, there is low confidence (due to limited evidence) that anthropogenic climate change has affected the magnitude and frequency of floods, though it has detectably influenced several components of the hydrological cycle, such as precipitation and snowmelt, that may impact flood trends. The assessment of causes behind the changes in floods is inherently complex and difficult.

AR5, Chapter 2, p. 214:

AR5 WGII assesses floods in regional detail accounting for the fact that trends in floods are strongly influenced by changes in river management (see also Section 2.5.2). Although the most evident flood trends appear to be in northern high latitudes, where observed warming trends have been largest, in some regions no evidence of a trend in extreme flooding has been found, for example, over Russia based on daily river discharge (Shiklomanov et al., 2007).

Other studies for Europe (Hannaford and Marsh, 2008; Renard et al., 2008; Petrow and Merz, 2009; Stahl et al., 2010) and Asia (Jiang et al., 2008; Delgado et al., 2010) show evidence for upward, downward or no trend in the magnitude and frequency of floods, so that there is currently no clear and widespread evidence for observed changes in flooding except for the earlier spring flow in snow-dominated regions (Seneviratne et al., 2012).

In summary, there continues to be a lack of evidence and thus low confidence regarding the sign or trend in the magnitude and/or frequency of floods on a global scale.

[Note: the text in the bottom line cited says: "..regarding the sign of trend in the magnitude.." which I assume is a typo, and so I changed "of" into "or"]

Storms

SREX, p. 159:

Detection of trends in tropical cyclone metrics such as frequency, intensity, and duration remains a significant challenge..

..Natural variability combined with uncertainties in the historical data makes it difficult to detect trends in tropical cyclone activity. There have been no significant trends observed in global tropical cyclone frequency records, including over the present 40-year period of satellite observations (e.g., Webster et al., 2005). Regional trends in tropical cyclone frequency have been identified in the North Atlantic, but the fidelity of these trends is debated (Holland and Webster, 2007; Landsea, 2007; Mann et al., 2007a). Different methods for estimating undercounts in the earlier part of the North Atlantic tropical cyclone record provide mixed conclusions (Chang and Guo, 2007; Mann et al., 2007b; Kunkel et al., 2008; Vecchi and Knutson, 2008).

Regional trends have not been detected in other oceans (Chan and Xu, 2009; Kubota and Chan, 2009; Callaghan and Power, 2011). It thus remains uncertain whether any observed increases in tropical cyclone frequency on time scales longer than about 40 years are robust, after accounting for past changes in observing capabilities (Knutson et al., 2010)..

..Time series of power dissipation, an aggregate compound of tropical cyclone frequency, duration, and intensity that measures total energy consumption by tropical cyclones, show upward trends in the North Atlantic and weaker upward trends in the western North Pacific over the past 25 years (Emanuel, 2007), but interpretation of longer-term trends in this quantity is again constrained by data quality concerns.

The variability and trend of power dissipation can be related to SST and other local factors such as tropopause temperature and vertical wind shear (Emanuel, 2007), but it is a current topic of debate whether local SST or the difference between local SST and mean tropical SST is the more physically relevant metric (Swanson, 2008).

The distinction is an important one when making projections of changes in power dissipation based on projections of SST changes, particularly in the tropical Atlantic where SST has been increasing more rapidly than in the tropics as a whole (Vecchi et al., 2008). Accumulated cyclone energy, which is an integrated metric analogous to power dissipation, has been declining globally since reaching a high point in 2005, and is presently at a 40-year low point (Maue, 2009). The present period of quiescence, as well as the period of heightened activity leading up to the high point in 2005, does not clearly represent substantial departures from past variability (Maue, 2009)..

..The present assessment regarding observed trends in tropical cyclone activity is essentially identical to the WMO assessment (Knutson et al., 2010): there is low confidence that any observed long-term (i.e., 40 years or more) increases in tropical cyclone activity are robust, after accounting for past changes in observing capabilities.

AR5, Chapter 2, p. 216:

AR4 concluded that it was likely that an increasing trend had occurred in intense tropical cyclone activity since 1970 in some regions but that there was no clear trend in the annual numbers of tropical cyclones. Subsequent assessments, including SREX and more recent literature indicate that it is difficult to draw firm conclusions with respect to the confidence levels associated with observed trends prior to the satellite era and in ocean basins outside of the North Atlantic.

Lots more tropical storms:

From AR5, wg I

Figure 3

Note that a more important metric than “how many?” is “how severe?” or a combination of both.

And for extra-tropical storms (i.e. outside the tropics), SREX p. 166:

In summary it is likely that there has been a poleward shift in the main Northern and Southern Hemisphere extratropical storm tracks during the last 50 years. There is medium confidence in an anthropogenic influence on this observed poleward shift. It has not formally been attributed.

There is low confidence in past changes in regional intensity.

And AR5, chapter 2, p. 217 & 220:

Some studies show an increase in intensity and number of extreme Atlantic cyclones (Paciorek et al., 2002; Lehmann et al., 2011) while others show opposite trends in eastern Pacific and North America (Gulev et al., 2001). Comparisons between studies are hampered because of the sensitivities in identification schemes and/or different definitions for extreme cyclones (Ulbrich et al., 2009; Neu et al., 2012). The fidelity of research findings also rests largely with the underlying reanalyses products that are used..

..In summary, confidence in large scale changes in the intensity of extreme extratropical cyclones since 1900 is low. There is also low confidence for a clear trend in storminess proxies over the last century due to inconsistencies between studies or lack of long-term data in some parts of the world (particularly in the SH). Likewise, confidence in trends in extreme winds is low, owing to quality and consistency issues with analysed data.

Discussion

The IPCC SREX and AR5 reports were published in 2012 and 2013 respectively. There will be new research published since these reports analyzing the same data and possibly reaching different conclusions. When you have large decadal variability in poorly observed data with a small or non-existent trend then inevitably different groups will be able to reach different conclusions on these trends. And if you focus on specific regions you can demonstrate a clear and unmistakeable trend.

If you are looking for a soundbite just pick the right region.

The last 100 years have seen global warming. As this blog has made clear from the physics, more GHGs (all other things remaining equal) result in more warming. What proportion of the warming over the last 100 years is intrinsic climate variability vs anthropogenic GHGs, I have no idea.

The last century has seen no clear globally averaged change in floods, droughts or storms – as best as we can tell with very incomplete observing systems. Of course, some regions have definitely seen more, and some regions have definitely seen less. Whether this is different from the period from 1800-1900 or from 1700-1800 no one knows. Perhaps floods, droughts and tropical storms increased globally from 1700-1900. Perhaps they decreased. Perhaps the last 100 years have seen more variability. Perhaps not. (And in recognition of Poe’s law, I note that a few statements within the article presenting graphs did say the opposite of the graphs presented).

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

References

SREX = Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation Special Report, IPCC (2012)

Observations: Atmosphere and Surface. Chapter 2 of Working Group I to AR5, DL Hartmann et al (2013)

Global Trends and Variability in Soil Moisture and Drought Characteristics, 1950–2000, from Observation-Driven Simulations of the Terrestrial Hydrologic Cycle, Justin Sheffield & Eric Wood, Journal of Climate (2008) – free paper

Global warming and changes in drought, Kevin E Trenberth et al, Nature (2013) – free paper

Elusive drought: uncertainty in observed trends and short- and long-term CMIP5 projections, B Orlowsky & SI Seneviratne, Hydrology and Earth System Sciences (2013) – free paper

Read Full Post »

In Impacts – II – GHG Emissions Projections: SRES and RCP we looked at projections of emissions under various scenarios with the resulting CO2 (and other GHG) concentrations and resulting radiative forcing.

Why do we need these scenarios? Because even if climate models were perfect and could accurately calculate the temperature 100 years from now, we wouldn’t know how much “anthropogenic CO2” (and other GHGs) would have been emitted by that time. The scenarios allow climate modelers to produce temperature (and other climate variable) projections on the basis of each of these scenarios.

The IPCC AR5 (fifth assessment report) from 2013 says (chapter 12, p. 1031):

Global mean temperatures will continue to rise over the 21st century if greenhouse gas (GHG) emissions continue unabated.

Under the assumptions of the concentration-driven RCPs, global mean surface temperatures for 2081–2100, relative to 1986–2005 will likely be in the 5 to 95% range of the CMIP5 models:

  • 0.3°C to 1.7°C (RCP2.6)
  • 1.1°C to 2.6°C (RCP4.5)
  • 1.4°C to 3.1°C (RCP6.0)
  • 2.6°C to 4.8°C (RCP8.5)

Global temperatures averaged over the period 2081– 2100 are projected to likely exceed 1.5°C above 1850-1900 for RCP4.5, RCP6.0 and RCP8.5 (high confidence), are likely to exceed 2°C above 1850-1900 for RCP6.0 and RCP8.5 (high confidence) and are more likely than not to exceed 2°C for RCP4.5 (medium confidence). Temperature change above 2°C under RCP2.6 is unlikely (medium confidence). Warming above 4°C by 2081–2100 is unlikely in all RCPs (high confidence) except for RCP8.5, where it is about as likely as not (medium confidence).

I commented in Part II that RCP8.5 seemed to be a scenario that didn’t match up with the last 40-50 years of development. Of course, the various scenario developers give their caveats, for example, Riahi et al 2007:

Given the large number of variables and their interdependencies, we are of the opinion that it is impossible to assign objective likelihoods or probabilities to emissions scenarios. We have also not attempted to assign any subjective likelihoods to the scenarios either. The purpose of the scenarios presented in this Special Issue is, rather, to span the range of uncertainty without an assessment of likely, preferable, or desirable future developments..

Readers should exercise their own judgment on the plausibility of above scenario ‘storylines’..

To me RCP6.0 seems a more likely future (compared with RCP8.5) in a world that doesn’t have any significant attempt to tackle CO2 emissions. That is, no major change in climate policy to today’s world, but similar economic and population development (note 1).

Here is the graph of projected temperature anomalies for the different scenarios:

From AR5, chapter 12

Figure 1

That graph is hard to make out for 2100, so here is the table of corresponding data. I highlighted RCP6.0 in 2100 – you can click to enlarge the table:

From AR5, chapter 12, Table 12.2

Figure 2 – Click to expand

Probabilities and Lists

The table above has a “1 std deviation” and a 5%-95% distribution. The graph (which has the same source data) has shading to indicate 5%-95% of models for each RCP scenario.

These have no relation to real probability distributions. That is, the 5-95% range for RCP6.0 doesn’t equate to: “there is a 90% probability that the average temperature over 2081-2100 will be 1.4-3.1ºC higher than the 1986-2005 average”.

A number of climate models are used to produce simulations and the results from these “ensembles” are sometimes pressed into “probability service”. For some concept background on ensembles read Ensemble Forecasting.
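To make the distinction concrete, here is a minimal sketch of how an ensemble range is typically summarized – the model results are invented numbers, and the point is that percentiles of an opportunistic ensemble describe model spread, not a calibrated probability:

```python
import numpy as np

# Hypothetical 2081-2100 warming (deg C) from a set of models under one
# scenario. These values are invented for illustration.
ensemble = np.array([1.4, 1.7, 1.9, 2.1, 2.2, 2.4, 2.6, 2.8, 3.1])

p5, p95 = np.percentile(ensemble, [5, 95])
print(f"5-95% of model results: {p5:.1f} to {p95:.1f} C")

# This describes the spread of whichever models happened to be submitted.
# It is not a statement that the real outcome has a 90% chance of falling
# in this interval.
```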

Here is IPCC AR5 chapter 12:

Ensembles like CMIP5 do not represent a systematically sampled family of models but rely on self-selection by the modelling groups.

This opportunistic nature of MMEs [multi-model ensembles] has been discussed, for example, in Tebaldi and Knutti (2007) and Knutti et al. (2010a). These ensembles are therefore not designed to explore uncertainty in a coordinated manner, and the range of their results cannot be straightforwardly interpreted as an exhaustive range of plausible outcomes, even if some studies have shown how they appear to behave as well calibrated probabilistic forecasts for some large-scale quantities. Other studies have argued instead that the tail of distributions is by construction undersampled.

In general, the difficulty in producing quantitative estimates of uncertainty based on multiple model output originates in their peculiarities as a statistical sample, neither random nor systematic, with possible dependencies among the members and of spurious nature, that is, often counting among their members models with different degrees of complexities (different number of processes explicitly represented or parameterized) even within the category of general circulation models..

..In summary, there does not exist at present a single agreed on and robust formal methodology to deliver uncertainty quantification estimates of future changes in all climate variables. As a consequence, in this chapter, statements using the calibrated uncertainty language are a result of the expert judgement of the authors, combining assessed literature results with an evaluation of models demonstrated ability (or lack thereof) in simulating the relevant processes (see Chapter 9) and model consensus (or lack thereof) over future projections. In some cases when a significant relation is detected between model performance and reliability of its future projections, some models (or a particular parametric configuration) may be excluded but in general it remains an open research question to find significant connections of this kind that justify some form of weighting across the ensemble of models and produce aggregated future projections that are significantly different from straightforward one model–one vote ensemble results. Therefore, most of the analyses performed for this chapter make use of all available models in the ensembles, with equal weight given to each of them unless otherwise stated.

And from one of the papers cited in that section of chapter 12, Jackson et al 2008:

In global climate models (GCMs), unresolved physical processes are included through simplified representations referred to as parameterizations.

Parameterizations typically contain one or more adjustable phenomenological parameters. Parameter values can be estimated directly from theory or observations or by “tuning” the models by comparing model simulations to the climate record. Because of the large number of parameters in comprehensive GCMs, a thorough tuning effort that includes interactions between multiple parameters can be very computationally expensive. Models may have compensating errors, where errors in one parameterization compensate for errors in other parameterizations to produce a realistic climate simulation (Wang 2007; Golaz et al. 2007; Min et al. 2007; Murphy et al. 2007).

The risk is that, when moving to a new climate regime (e.g., increased greenhouse gases), the errors may no longer compensate. This leads to uncertainty in climate change predictions. The known range of uncertainty of many parameters allows a wide variance of the resulting simulated climate (Murphy et al. 2004; Stainforth et al. 2005; M. Collins et al. 2006). The persistent scatter in the sensitivities of models from different modeling groups, despite the effort represented by the approximately four generations of modeling improvements, suggests that uncertainty in climate prediction may depend on underconstrained details and that we should not expect convergence anytime soon.

Stainforth et al 2005 (referenced in the quote above) tried much larger ensembles of coarser resolution climate models, and was discussed in the comments of Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes. Rowlands et al 2012 is similar in approach and was discussed in Natural Variability and Chaos – Five – Why Should Observations match Models?

The way I read the IPCC reports and various papers is that the projections are clearly not a probability distribution. Then the data inevitably gets used as a de facto probability distribution.

Conclusion

“All models are wrong but some are useful” as George Box said, actually in a quite unrelated field (i.e., not climate). But it’s a good saying.

Many people who describe themselves as “lukewarmers” believe that climate sensitivity as characterized by the IPCC is too high and the real climate has a lower sensitivity. I have no idea.

Models may be wrong, but I don’t have an alternative model to provide. And therefore, given that they represent climate better than any current alternative, their results are useful.

We can’t currently create a real probability distribution from a set of temperature prediction results (assuming a given emissions scenario).

How useful is it to know that under a scenario like RCP6.0 the average global temperature increase in 2100 has been simulated as variously 1ºC, 2ºC, 3ºC, 4ºC? (note, I haven’t checked the CMIP5 simulations to get each value). And the tropics will vary less, land more? As we dig into more details we will attempt to look at how reliable regional and seasonal temperature anomalies might be compared with the overall number. Likewise rainfall and other important climate values.

I do find it useful to keep the idea of a set of possible numbers with no probability assigned. Then at some stage we can say something like, “if this RCP scenario turns out to be correct and the global average surface temperature actually increases by 3ºC by 2100, we know the following are reasonable assumptions … but we currently can’t make any predictions about these other values..”

References

Long-term Climate Change: Projections, Commitments and Irreversibility, M Collins et al (2013) – In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change

Scenarios of long-term socio-economic and environmental development under climate stabilization, Keywan Riahi et al, Technological Forecasting & Social Change (2007) – free paper

Error Reduction and Convergence in Climate Prediction, Charles S Jackson et al, Journal of Climate (2008) – free paper

Notes

Note 1: As explored a little in the last article, RCP6.0 does include some changes to climate policy but it seems they are not major. I believe a very useful scenario for exploring impact assessments would be the population and development path of RCP6.0 (let’s call it RCP6.0A) without any climate policies.

For reasons of “scenario parsimony” this interesting pathway escapes attention.

Read Full Post »

In Part II we looked at various scenarios for emissions. One important determinant is how the world population will change through this century, so I thought it worth digging into that topic a little.

Here is Lutz, Sanderson & Scherbov, Nature (2001):

The median value of our projections reaches a peak around 2070 at 9.0 billion people and then slowly decreases. In 2100, the median value of our projections is 8.4 billion people with the 80 per cent prediction interval bounded by 5.6 and 12.1 billion.

From Lutz 2001

Figure 1 – Click to enlarge

This paper is behind a paywall but Lutz references the 1996 book he edited for assumptions, which is freely available (link below).

In it the authors comment, p. 22:

Some users clearly want population figures for the year 2100 and beyond. Should the demographer disappoint such expectations and leave it to others with less expertise to produce them? The answer given in this study is no. But as discussed below, we make a clear distinction between what we call projections up to 2030-2050 and everything beyond that time, which we term extensions for illustrative purposes.

[Emphasis added]

And then p.32:

Sanderson (1995) shows that it is impossible to produce “objective” confidence ranges for future population projections. Subjective confidence intervals are the best we can ever attain because assumptions are always involved.

Here are some more recent views.

Gerland et al 2014 – Gerland is from the Population Division of the UN:

The United Nations recently released population projections based on data until 2012 and a Bayesian probabilistic methodology. Analysis of these data reveals that, contrary to previous literature, world population is unlikely to stop growing this century. There is an 80% probability that world population, now 7.2 billion, will increase to between 9.6 and 12.3 billion in 2100. This uncertainty is much smaller than the range from the traditional UN high and low variants. Much of the increase is expected to happen in Africa, in part due to higher fertility and a recent slowdown in the pace of fertility decline..

..Among the most robust empirical findings in the literature on fertility transitions are that higher contraceptive use and higher female education are associated with faster fertility decline. These suggest that the projected rapid population growth could be moderated by greater investments in family planning programs to satisfy the unmet need for contraception, and in girls’ education. It should be noted, however, that the UN projections are based on an implicit assumption of a continuation of existing policies, but an intensification of current investments would be required for faster changes to occur

Wolfgang Lutz & Samir KC (2010). Lutz seems popular in this field:

The total size of the world population is likely to increase from its current 7 billion to 8–10 billion by 2050. This uncertainty is because of unknown future fertility and mortality trends in different parts of the world. But the young age structure of the population and the fact that in much of Africa and Western Asia, fertility is still very high makes an increase by at least one more billion almost certain. Virtually, all the increase will happen in the developing world. For the second half of the century, population stabilization and the onset of a decline are likely..

Although the paper only covers the period up to 2050 rather than 2100, it does include a graph of probabilistic projections to 2100, and has some interesting commentary on how different forecasting groups deal with uncertainty, how women’s education plays a huge role in reducing fertility, and many other stories, for example:

The Demographic and Health Survey for Ethiopia, for instance, shows that women without any formal education have on average six children, whereas those with secondary education have only two (see http://www.measuredhs.com). Significant differentials can be found in most populations of all cultural traditions. Only in a few modern societies does the strongly negative association give way to a U-shaped pattern in which the most educated women have a somewhat higher fertility than those with intermediate education. But globally, the education differentials are so pervasive that education may well be called the single most important observable source of population heterogeneity after age and sex (Lutz et al. 1999). There are good reasons to assume that during the course of a demographic transition the fact that higher education leads to lower fertility is a true causal mechanism, where education facilitates better access to and information about family planning and most importantly leads to a change in attitude in which ‘quantity’ of children is replaced by ‘quality’, i.e. couples want to have fewer children with better life chances..

Lee 2011, another very interesting paper, makes this comment:

The U.N. projections assume that fertility will slowly converge toward replacement level (2.1 births per woman) by 2100

Lutz’s book had a similar hint that many demographers assume that somehow societies en masse will converge towards a steady state. Lee also comments that the probability treatments for "low", "medium" and "high" scenarios are not very realistic because the methods used assume a correlation between different countries that doesn’t hold in practice. Lutz likewise makes similar points. Here is Lee:

Special issues arise in constructing consistent probability intervals for individual countries, for regions, and for the world, because account must be taken of the positive or negative correlations among the country forecast errors within regions and across regions. Since error correlation is typically positive but less than 1.0, country errors tend to cancel under aggregation, and the proportional error bounds for the world population are far narrower than for individual countries. The NRC study (20) found that the average absolute country error was 21% while the average global error was only 3%. When the High, Medium and Low scenario approach is used, there is no cancellation of error under aggregation, so the probability coverage at different levels of aggregation cannot be handled consistently. An ongoing research collaboration between the U.N. Population Division and a team led by Raftery is developing new and very promising statistical methods for handling uncertainty in future forecasts.
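
Lee’s point about errors cancelling under aggregation is worth a toy illustration. The Python sketch below is my own, not from Lee’s paper: the per-country error size and the correlation are made-up round numbers, with the correlation chosen low enough that the country-versus-world contrast lands in the same ballpark as the NRC figures he quotes.

import numpy as np

rng = np.random.default_rng(0)
n_countries, n_sims = 150, 20_000
country_sd = 0.21   # proportional error per country, same order as the 21% NRC figure
rho = 0.02          # assumed small positive correlation, chosen purely for illustration

# correlated proportional errors = shared component + country-specific component
shared = rng.normal(size=(n_sims, 1))
own = rng.normal(size=(n_sims, n_countries))
errors = country_sd * (np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * own)

world_error = errors.mean(axis=1)   # equal-sized countries for simplicity

print(f"spread of country errors: {errors.std():.1%}")       # ~21%
print(f"spread of world error:    {world_error.std():.1%}")  # a few percent - errors largely cancel
# With correlation of 1.0 nothing would cancel and the world error would equal the country error.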

And then on UN projections:

One might quibble with this or that assumption, but the UN projections have had an impressive record of success in the past, particularly at the global level, and I expect that to continue in the future. To a remarkable degree, the UN has sought out expert advice and experimented with cutting edge forecasting techniques, while maintaining consistency in projections. But in forecasting, errors are inevitable, and sound decision making requires that the likelihood of errors be taken into account. In this respect, there is much room for improvement in the UN projections and indeed in all projections by government statistical offices.

This comment looks like an oblique academic gentle slapping around (disguised as praise), but it’s hard to tell.

Conclusion

I don’t have a conclusion. I thought it would be interesting to find some demographic experts and show their views on future population trends. The future is always hard to predict – although in demography the next 20 years are usually easy to predict, short of global plagues and famines.

It does seem hard to have much idea about the population in 2100, but the difference between a population of 8bn and 11bn will have a large impact on CO2 emissions (without very significant CO2 mitigation policies).

References

The end of world population growth, Wolfgang Lutz, Warren Sanderson & Sergei Scherbov, Nature (2001) – paywall paper

The future population of the world – what can we assume?, edited Wolfgang Lutz, Earthscan Publications (1996) – freely available book

World Population Stabilization Unlikely This Century, Patrick Gerland et al, Science (2014) – free paper

Dimensions of global population projections: what do we know about future population trends and structures? Wolfgang Lutz & Samir KC, Phil. Trans. R. Soc. B (2010)

The Outlook for Population Growth, Ronald Lee, Science (2011) – free paper

Read Full Post »

In one of the iconic climate model tests, CO2 is doubled from a pre-industrial level of 280ppm to 560ppm “overnight” and we find the new steady state surface temperature. The change in CO2 is an input to the climate model, also known as a “forcing” because it is from outside. That is, humans create more CO2 from generating electricity, driving automobiles and other activities – this affects the climate and the climate responds.

These experiments with simple climate models were first done with 1d radiative-convective models in the 1960s. For example, Manabe & Wetherald 1967 found a 2.3ºC surface temperature increase with constant relative humidity and 1.3ºC with constant absolute humidity (and for many reasons constant relative humidity seems more likely to be closer to reality than constant absolute humidity).
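
The gap between those two numbers is essentially the water vapor feedback at work. As a rough illustration of the arithmetic involved (my own back-of-envelope numbers using the standard feedback-factor relation ΔT = ΔT₀/(1−f), not Manabe & Wetherald’s actual calculation):

# Back-of-envelope feedback arithmetic - illustrative values, not Manabe & Wetherald's method
forcing_2xco2 = 3.7       # W/m², canonical forcing for doubled CO2
planck_response = 3.2     # W/m² per K, no-feedback (Planck) radiative response

dT0 = forcing_2xco2 / planck_response        # ~1.2 ºC warming with no feedbacks
for f in (0.0, 0.4, 0.5):                    # f = assumed net feedback factor
    print(f"feedback factor {f:.1f}: warming ~ {dT0 / (1.0 - f):.1f} ºC")
# A feedback factor of roughly 0.5, about what constant relative humidity implies,
# takes the ~1.2 ºC no-feedback response up to the ~2.3 ºC range MW67 reported.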

In other experiments, especially more recently, more complex GCMs simulate 100 years with the CO2 concentration being gradually increased, in line with projections about future emissions – and we see what happens to temperature with time.

There are also other GHGs (“greenhouse” gases / radiatively-active gases) in the atmosphere that are changing due to human activity – especially methane (CH4) and nitrous oxide (N2O). And of course, the most important GHG is water vapor, but changes in water vapor concentration are a climate feedback – that is, changes in water vapor result from temperature (and circulation) changes.

And there are aerosols, some internally generated within the climate and others emitted by human activity. These also affect the climate in a number of ways.

We don’t know what future anthropogenic emissions will be. What will humans do? Build lots more coal-fired power stations to meet the energy demand of the future? Run the entire world’s power grid from wind and solar by 2040? Finally invent practical nuclear fusion? How many people will there be?

So for this we need some scenarios of future human activity (note 1).

Scenarios – SRES and RCP

SRES was published in 2000:

In response to a 1994 evaluation of the earlier IPCC IS92 emissions scenarios, the 1996 Plenary of the IPCC requested this Special Report on Emissions Scenarios (SRES) (see Appendix I for the Terms of Reference). This report was accepted by the Working Group III (WGIII) plenary session in March 2000. The long-term nature and uncertainty of climate change and its driving forces require scenarios that extend to the end of the 21st century. This Report describes the new scenarios and how they were developed.

The SRES scenarios cover a wide range of the main driving forces of future emissions, from demographic to technological and economic developments. As required by the Terms of Reference, none of the scenarios in the set includes any future policies that explicitly address climate change, although all scenarios necessarily encompass various policies of other types.

The set of SRES emissions scenarios is based on an extensive assessment of the literature, six alternative modeling approaches, and an “open process” that solicited wide participation and feedback from many groups and individuals. The SRES scenarios include the range of emissions of all relevant species of greenhouse gases (GHGs) and sulfur and their driving forces..

..A set of scenarios was developed to represent the range of driving forces and emissions in the scenario literature so as to reflect current understanding and knowledge about underlying uncertainties. They exclude only outlying “surprise” or “disaster” scenarios in the literature. Any scenario necessarily includes subjective elements and is open to various interpretations. Preferences for the scenarios presented here vary among users. No judgment is offered in this Report as to the preference for any of the scenarios and they are not assigned probabilities of occurrence, neither must they be interpreted as policy recommendations..

..By 2100 the world will have changed in ways that are difficult to imagine – as difficult as it would have been at the end of the 19th century to imagine the changes of the 100 years since. Each storyline assumes a distinctly different direction for future developments, such that the four storylines differ in increasingly irreversible ways. Together they describe divergent futures that encompass a significant portion of the underlying uncertainties in the main driving forces. They cover a wide range of key “future” characteristics such as demographic change, economic development, and technological change. For this reason, their plausibility or feasibility should not be considered solely on the basis of an extrapolation of current economic, technological, and social trends.

The RCPs, published in 2011, were in part a new version of the same idea as SRES. My understanding is that the Representative Concentration Pathways were constructed to work back from final values of radiative forcing in 2100 that were considered in the modeling literature, and you can see this in the names of each RCP.

from A special issue on the RCPs, van Vuuren et al (2011)

By design, the RCPs, as a set, cover the range of radiative forcing levels examined in the open literature and contain relevant information for climate model runs.

[Emphasis added]

From The representative concentration pathways: an overview, van Vuuren et al (2011)

This paper summarizes the development process and main characteristics of the Representative Concentration Pathways (RCPs), a set of four new pathways developed for the climate modeling community as a basis for long-term and near-term modeling experiments.

The four RCPs together span the range of year 2100 radiative forcing values found in the open literature, i.e. from 2.6 to 8.5 W/m². The RCPs are the product of an innovative collaboration between integrated assessment modelers, climate modelers, terrestrial ecosystem modelers and emission inventory experts. The resulting product forms a comprehensive data set with high spatial and sectoral resolutions for the period extending to 2100..

..The RCPs are named according to radiative forcing target level for 2100. The radiative forcing estimates are based on the forcing of greenhouse gases and other forcing agents. The four selected RCPs were considered to be representative of the literature, and included one mitigation scenario leading to a very low forcing level (RCP2.6), two medium stabilization scenarios (RCP4.5/RCP6) and one very high baseline emission scenarios (RCP8.5).

Here are some graphs from the RCP introduction paper:

Population and GDP scenarios:


Figure 1 – Click to expand

I was surprised by the population graph for RCP 8.5 and 6 (similar scenarios are generated in SRES). From reading various sources (but not diving into any detailed literature) I understood that the consensus was for population to peak mid-century at around 9bn people and then reduce back to something like 7-8bn people by the end of the century. This is because all countries that have experienced rising incomes have significantly reduced average fertility rates.

Here is Angus Deaton, in his fascinating and accessible book for people interested in The Great Escape as he calls it (that is, our escape from poverty and poor health):

In Africa in 1950, each woman could expect to give birth to 6.6 children; by 2000, that number had fallen to 5.1, and the UN estimates that it is 4.4 today. In Asia as well as in Latin America and the Caribbean, the decline has been even larger, from 6 children to just over 2..

The annual rate of growth of the world’s population, which reached 2.2% in 1960, was only half of that in 2011.

The GDP graph on the right (above) is lacking a definition. From the other papers covering the scenarios I understand it to be total world GDP in US$ trillions (at 2000 values, i.e. adjusted for inflation), although the numbers don’t seem to align exactly.

Energy consumption for the different scenarios:

Figure 2 – Click to expand

Annual emissions:

Figure 3 – Click to expand

Resulting concentrations in the atmosphere for CO2, CH4 (methane) and N2O (nitrous oxide):


Figure 4 – Click to expand

Radiative forcing (for explanation of this term, see for example Wonderland, Radiative Forcing and the Rate of Inflation):


Figure 5  – Click to expand

We can see from this figure (fig 5, their fig 10) that the RCP numbers refer to the expected radiative forcing in 2100 – so RCP8.5, often known as the "business as usual" scenario, has a radiative forcing in 2100, compared to pre-industrial values, of 8.5 W/m², and RCP6 a forcing of 6 W/m².

We can also see from the figure on the right that increases in CO2 are the cause of almost all of the increase from current values. For example, only RCP8.5 has a higher methane (CH4) forcing than today.
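
For a rough sense of how much of those 2100 targets comes from CO2 alone, one can apply the commonly used simplified forcing expression ΔF ≈ 5.35 ln(C/C₀) from Myhre et al (1998). The Python sketch below does this with approximate round-number 2100 CO2 concentrations for each pathway (my own ballpark values read off figure 4, not the official RCP datasets); the rest of each pathway’s forcing target comes from CH4, N2O and other agents.

import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Simplified CO2 radiative forcing relative to pre-industrial, W/m² (Myhre et al. 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Approximate 2100 CO2 concentrations per pathway - round numbers for illustration only
for name, co2_2100 in [("RCP2.6", 420), ("RCP4.5", 540), ("RCP6", 670), ("RCP8.5", 935)]:
    print(f"{name}: CO2 alone contributes ~{co2_forcing(co2_2100):.1f} W/m² of the 2100 target")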

Business as usual – RCP 8.5 or RCP 6?

I’ve seen RCP8.5 described as “business as usual” but it seems quite an unlikely scenario. Perhaps we need to dive into this scenario more in another article. In the meantime, part of the description from Riahi et al (2011):

The scenario’s storyline describes a heterogeneous world with continuously increasing global population, resulting in a global population of 12 billion by 2100. Per capita income growth is slow and both internationally as well as regionally there is only little convergence between high and low income countries. Global GDP reaches around 250 trillion US2005$ in 2100.

The slow economic development also implies little progress in terms of efficiency. Combined with the high population growth, this leads to high energy demands. Still, international trade in energy and technology is limited and overall rates of technological progress is modest. The inherent emphasis on greater self-sufficiency of individual countries and regions assumed in the scenario implies a reliance on domestically available resources. Resource availability is not necessarily a constraint but easily accessible conventional oil and gas become relatively scarce in comparison to more difficult to harvest unconventional fuels like tar sands or oil shale.

Given the overall slow rate of technological improvements in low-carbon technologies, the future energy system moves toward coal-intensive technology choices with high GHG emissions. Environmental concerns in the A2 world are locally strong, especially in high and medium income regions. Food security is also a major concern, especially in low-income regions and agricultural productivity increases to feed a steadily increasing population.

Compared to the broader integrated assessment literature, the RCP8.5 represents thus a scenario with high global population and intermediate development in terms of total GDP (Fig. 4).

Per capita income, however, stays at comparatively low levels of about 20,000 US $2005 in the long term (2100), which is considerably below the median of the scenario literature. Another important characteristic of the RCP8.5 scenario is its relatively slow improvement in primary energy intensity of 0.5% per year over the course of the century. This trend reflects the storyline assumption of slow technological change. Energy intensity improvement rates are thus well below historical average (about 1% per year between 1940 and 2000). Compared to the scenario literature RCP8.5 depicts thus a relatively conservative business as usual case with low income, high population and high energy demand due to only modest improvements in energy intensity.

When I heard the term “business as usual” I’m sure I wasn’t alone in understanding it like this: the world carries on without adopting serious CO2 limiting policies. That is, no international agreements on CO2 reductions, no carbon pricing, etc. And the world continues on its current trajectory of growth and development. When you look at the last 40 years, it has been quite amazing. Why would growth slow, population not follow the pathway it has followed in all countries that have seen rising prosperity, and why would technological innovation and adoption slow? It would be interesting to see a “business as usual” scenario for emissions, CO2 concentrations and radiative forcing that had a better fit to the name.

RCP 6 seems to be a closer fit than RCP 8.5 to the name “business as usual”.

RCP6 is a climate-policy intervention scenario. That is, without explicit policies designed to reduce emissions, radiative forcing would exceed 6.0 W/m² in the year 2100.

However, the degree of GHG emissions mitigation required over the period 2010 to 2060 is small, particularly compared to RCP4.5 and RCP2.6, but also compared to emissions mitigation requirement subsequent to 2060 in RCP6 (Van Vuuren et al., 2011). The IPCC Fourth Assessment Report classified stabilization scenarios into six categories as shown in Table 1. RCP6 scenario falls into the border between the fifth category and the sixth category.

Its global mean long-term, steady-state equilibrium temperature could be expected to rise 4.9° centigrade, assuming a climate sensitivity of 3.0 and its CO2 equivalent concentration could be 855 ppm (Metz et al. 2007).
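
That 4.9ºC figure can be sanity-checked with the same simplified forcing expression. A rough sketch of the arithmetic (mine, not from Masui et al): 855 ppm CO2-equivalent over a pre-industrial ~278 ppm gives about 6 W/m² of forcing, and with ~3.7 W/m² per doubling and an equilibrium sensitivity of 3.0ºC per doubling that works out to roughly 4.9ºC.

import math

co2_eq_ppm = 855.0            # CO2-equivalent concentration quoted for RCP6
c0_ppm = 278.0                # approximate pre-industrial CO2
forcing_per_doubling = 3.7    # W/m², canonical value
sensitivity = 3.0             # ºC per doubling, as assumed in the quote

forcing = 5.35 * math.log(co2_eq_ppm / c0_ppm)              # ~6.0 W/m²
warming = sensitivity * forcing / forcing_per_doubling      # ~4.9 ºC
print(f"forcing ~ {forcing:.1f} W/m², equilibrium warming ~ {warming:.1f} ºC")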

Some of the background to RCP 8.5 assumptions is in an earlier paper also by the same lead author – Riahi et al 2007, another freely accessible paper (reference below) which is worth a read, for example:

The task ahead of anticipating the possible developments over a time frame as ‘ridiculously’ long as a century is wrought with difficulties. Particularly, readers of this Journal will have sympathy for the difficulties in trying to capture social and technological changes over such a long time frame. One wonders how Arrhenius’ scenario of the world in 1996 would have looked, perhaps filled with just more of the same of his time—geopolitically, socially, and technologically. Would he have considered that 100 years later:

  • backward and colonially exploited China would be in the process of surpassing the UK’s economic output, eventually even that of all of Europe or the USA?
  • the existence of a highly productive economy within a social welfare state in his home country Sweden would elevate the rural and urban poor to unimaginable levels of personal affluence, consumption, and free time?
  • the complete obsolescence of the dominant technology cluster of the day – coal-fired steam engines?

How he would have factored in the possibility of the emergence of new technologies, especially in view of Lord Kelvin’s sobering ‘conclusion’ of 1895 that “heavier-than-air flying machines are impossible”?

Note on Comments

The Etiquette and About this Blog both explain the commenting policy in this blog. I noted briefly in the Introduction that of course questions about 100 years from now mean some small relaxation of the policy. But, in a large number of previous articles, we have discussed the “greenhouse” effect (just about to death) and so people who question it are welcome to find a relevant article and comment there – for example, The “Greenhouse” Effect Explained in Simple Terms which has many links to related articles. Questions on climate sensitivity, natural variation, and likelihood of projected future temperatures due to emissions are, of course, all still fair game in this series.

But I’ll just delete comments that question the existence of the greenhouse effect. Draconian, no doubt.

References

Emissions Scenarios, IPCC (2000) – free report

A special issue on the RCPs, Detlef P van Vuuren et al, Climatic Change (2011) – free paper

The representative concentration pathways: an overview, Detlef P van Vuuren et al, Climatic Change (2011) – free paper

RCP4.5: a pathway for stabilization of radiative forcing by 2100, Allison M. Thomson et al, Climatic Change (2011) – free paper

An emission pathway for stabilization at 6 Wm−2 radiative forcing,  Toshihiko Masui et al, Climatic Change (2011) – free paper

RCP 8.5—A scenario of comparatively high greenhouse gas emissions, Keywan Riahi et al, Climatic Change (2011) – free paper

Scenarios of long-term socio-economic and environmental development under climate stabilization, Keywan Riahi et al, Technological Forecasting & Social Change (2007) – free paper

Thermal equilibrium of the atmosphere with a given distribution of relative humidity, S Manabe, RT Wetherald, Journal of the Atmospheric Sciences (1967) – free paper

The Great Escape, Health, Wealth and the Origins of Inequality, Angus Deaton, Princeton University Press (2013) – book

Notes

Note 1: Even if we knew future anthropogenic emissions accurately it wouldn’t give us the whole picture. The climate has sources and sinks for CO2 and methane and there is some uncertainty about them, especially how well they will operate in the future. That is, anthropogenic emissions are modified by the feedback of sources and sinks for these emissions.

Read Full Post »

A long time ago, in About this Blog I wrote:

Opinions
Opinions are often interesting and sometimes entertaining. But what do we learn from opinions? It’s more useful to understand the science behind the subject. What is this particular theory built on? How long has the theory been “established”? What lines of evidence support this theory? What evidence would falsify this theory? What do opposing theories say?

Now I would like to look at impacts of climate change. And so opinions and value judgements are inevitable.

In physics we can say something like “95% of radiation at 667 cm⁻¹ is absorbed within 1 m at the surface because of the absorption properties of CO2” and be judged true or false. It’s a number. It’s an equation. And therefore the result is falsifiable – the essence of science. Perhaps in some cases all the data is not in, or the formula is not yet clear, but this can be noted and accepted. There is evidence in favor or against, or a mix of evidence.
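
As an aside on what makes such a statement checkable: it boils down to exponential (Beer–Lambert) attenuation, where the fraction absorbed over a path fixes the optical depth. A minimal Python sketch of that arithmetic (illustrative only; the 667 cm⁻¹ number is the example above, the rest is just the attenuation law):

import math

absorbed_fraction = 0.95   # "95% of radiation at 667 cm⁻¹ is absorbed within 1 m"
path_length_m = 1.0

optical_depth = -math.log(1.0 - absorbed_fraction)   # tau ~ 3.0 over 1 m
print(f"implied optical depth over {path_length_m} m: {optical_depth:.1f}")
# More CO2 along the path raises tau roughly in proportion, and the transmitted
# fraction exp(-tau) changes accordingly - which is what makes the claim testable.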

As we build equations into complex climate models, judgements become unavoidable. For example, “convection is modeled as a sub-grid parameterization therefore..”. Where the conclusion following “therefore” is the judgement. We could call it an opinion. We could call it an expert opinion. We could call it science if the result is falsifiable. But it starts to get a bit more “blurry” – at some point we move from a region of settled science to a region of less-settled science.

And once we consider the impacts in 2100 it seems that certainty and falsifiability must be abandoned. “Blurry” is the best case.

 

Less than a year ago, listening to America and the New Global Economy by Timothy Taylor (via audible.com), I remember he said something like “the economic cost of climate change was all lumped into a fat tail – if the temperature change was on the higher side”. Sorry for my inaccurate memory (and the downside of audible.com vs a real book). Well, it sparked my interest in another part of the climate journey.

I’ve been reading IPCC Working Group II (wgII) – some of the “TAR” (= third assessment report) from 2001 for background and AR5, the latest IPCC report from 2014. Some of the impacts also show up in Working Group I which is about the physical climate science, and the IPCC Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation from 2012, known as SREX (Special Report on Extremes). These are all available at the IPCC website.

The first chapter of the TAR, Working Group II says:

The world community faces many risks from climate change. Clearly it is important to understand the nature of those risks, where natural and human systems are likely to be most vulnerable, and what may be achieved by adaptive responses. To understand better the potential impacts and associated dangers of global climate change, Working Group II of the Intergovernmental Panel on Climate Change (IPCC) offers this Third Assessment Report (TAR) on the state of knowledge concerning the sensitivity, adaptability, and vulnerability of physical, ecological, and social systems to climate change.

A couple of common complaints in the blogosphere that I’ve noticed are:

  • “all the impacts are supposed to be negative but there are a lot of positives from warming”
  • “CO2 will increase plant growth so we’ll be better off”

Within the field of papers and IPCC reports it’s clear that CO2 increasing plant growth is not ignored. Likewise, there are expected to be winners and losers (often, but definitely not exclusively, geographically distributed), even though the IPCC summarizes the expected overall effect as negative.

Of course, there is a highly entertaining field of “recycled press releases about the imminent catastrophe of climate change” which I’m sure ignores any positives or tradeoffs. Even in what could charitably be called “respected media outlets” there seem to be few correspondents with basic scientific literacy. Not even the ability to add up the numbers on an electricity bill or distinguish between the press release of a company planning to get wonderful results in 2025 vs today’s reality.

Anyway, entertaining as it is to shoot fish in a barrel, we will try to stay away from discussing newsotainment and stay with the scientific literature and IPCC assessments. Inevitably, we’ll stray a little.

I haven’t tried to do a comprehensive summary of the issues believed to impact humanity, but here are some:

  • sea level rise
  • heatwaves
  • droughts
  • floods
  • more powerful cyclones and storms
  • food production
  • ocean acidification
  • extinction of animal and plant species
  • more pests (added, thanks Tom, corrected thanks DeWitt)
  • disease (added, thanks Tom)

Possibly I’ve missed some.

Covering the subject is not easy but it’s an interesting field.

Read Full Post »

In Planck, Stefan-Boltzmann, Kirchhoff and LTE one of our commenters asked a question about emissivity. The first part of that article is worth reading as a primer in the basics for this article. I don’t want to repeat all the basics, except to say that if a body is a “black body” it emits radiation according to a simple formula. This is the maximum that any body can emit. In practice, a body will emit less.

The ratio between the actual emitted radiation and the black body radiation is the emissivity. It has a value between 0 and 1.

The question that this article tries to help readers understand is the origin and use of the emissivity term in the Stefan-Boltzmann equation:

E = ε’σT⁴

where E = total flux, ε’ = "effective emissivity" (a value between 0 and 1), σ is the Stefan-Boltzmann constant (5.67×10⁻⁸ W/m²K⁴) and T = temperature in Kelvin (i.e., absolute temperature).
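
As a quick illustrative use of the equation (my own round numbers, not from the post): the Earth’s surface at about 288 K would emit about 390 W/m² as a blackbody, while the roughly 239 W/m² of longwave radiation actually leaving the top of the atmosphere corresponds to an effective emissivity of about 0.61 for the planet as a whole – the kind of "effective emissivity" that climate articles often treat as a constant.

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m²K⁴

t_surface = 288.0        # K, approximate global mean surface temperature
olr = 239.0              # W/m², approximate outgoing longwave radiation at the top of atmosphere

blackbody = SIGMA * t_surface**4          # ~390 W/m²
eff_emissivity = olr / blackbody          # ~0.61
print(f"blackbody: {blackbody:.0f} W/m², effective emissivity: {eff_emissivity:.2f}")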

The term ε’ in the Stefan-Boltzmann equation is not really a constant. But it is often treated as a constant in articles related to climate. Is this valid? Not valid? Why is it not a constant?

There is a constant material property called emissivity, but it is a function of wavelength. For example, if we found that the emissivity of a body at 10.15 μm was 0.55 then this would be the same regardless of whether the body was in Antarctica (around 233K = -40ºC), the tropics (around 303K = 30ºC) or at the temperature of the sun’s surface (5800K). How do we know this? From experimental work over more than a century.

Hopefully some graphs will illuminate the difference between emissivity the material property (that doesn’t change), and the “effective emissivity” (that does change) we find in the Stefan-Boltzmann equation. In each graph you can see:

  • (top) the blackbody curve
  • (middle) the emissivity of this fictional material as a function of wavelength
  • (bottom) the actual emitted radiation due to the emissivity – and a calculation of the “effective emissivity”.

The calculation of “effective emissivity” = total actual emitted radiation / total blackbody emitted radiation (note 1).
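
As a concrete sketch of that calculation, here is a short Python version (my own illustration: the spectral emissivity is a made-up step function standing in for the post’s fictional material, so the printed values will not match the graphs below). It integrates the Planck curve from 0.01 μm to 50 μm, once weighted by the spectral emissivity and once unweighted, and takes the ratio; the unweighted integral also shows the point in note 1, giving about 376 W/m² at 288 K rather than the 390 W/m² from σT⁴.

import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant
SIGMA = 5.67e-8                            # Stefan-Boltzmann constant, W/m²K⁴

def planck(wl_m, temp_k):
    """Blackbody spectral exitance, W/m² per metre of wavelength."""
    with np.errstate(over="ignore"):       # exp overflow at short wavelengths harmlessly gives 0
        return (2 * np.pi * H * C**2 / wl_m**5) / np.expm1(H * C / (wl_m * KB * temp_k))

def spectral_emissivity(wl_um):
    """Made-up material: emissivity 0.8 below 8 μm and 0.3 above (purely illustrative)."""
    return np.where(wl_um < 8.0, 0.8, 0.3)

def effective_emissivity(temp_k, wl_um=np.linspace(0.01, 50.0, 20000)):
    wl_m = wl_um * 1e-6
    integrate = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wl_m))   # trapezoid rule
    blackbody = integrate(planck(wl_m, temp_k))                            # 0.01-50 μm only
    actual = integrate(spectral_emissivity(wl_um) * planck(wl_m, temp_k))
    return actual / blackbody, blackbody

for temp in (288, 300, 400, 500, 5800):
    eps, bb = effective_emissivity(temp)
    print(f"T={temp:5d} K: blackbody 0.01-50 μm = {bb:12.0f} W/m² "
          f"(sigma*T^4 = {SIGMA * temp**4:12.0f}), effective emissivity = {eps:.2f}")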

At 288K – effective emissivity = 0.49:

At 300K – effective emissivity = 0.49:

At 400K – effective emissivity = 0.44:

At 500K – effective emissivity = 0.35:

At 5800K, that is solar surface temperature — effective emissivity = 0.00 (note the scale on the bottom graph is completely different from the scale of the top graph):

Hopefully this helps people trying to understand what emissivity really relates to in the Stefan-Boltzmann equation. It is not a constant except in rare cases. Treating it as a constant over a modest range of temperatures is a reasonable approximation (depending on the accuracy you want), but change the temperature "too much" and the "effective emissivity" can change massively.

As always with approximations and useful formulas, you need to understand the basis behind them to know when you can and can’t use them.

Any questions, just ask in the comments.

Note 1 – The flux was calculated for the wavelength range of 0.01 μm to 50 μm. If you use the Stefan-Boltzmann equation for 288 K you will get E = 5.67×10⁻⁸ × 288⁴ ≈ 390 W/m². The reason my graph has 376 W/m² is because I don’t include the wavelength range from 50 μm to infinity. It doesn’t change the practical results you see.

Read Full Post »