
In recent articles we have looked at rainfall and there is still more to discuss. This article changes tack to look at tropical cyclones, prompted by the recent US landfall of Harvey and Irma along with questions from readers about attribution and the future.

It might be surprising to find the following statement from leading climate scientists (Kevin Walsh and many co-authors in 2015):

At present, there is no climate theory that can predict the formation rate of tropical cyclones from the mean climate state.

The subject gets a little involved so let’s dig into a few papers. First from Gabriel Vecchi and some co-authors in 2008 in the journal Science. The paper is very brief and essentially raises one question – has the recent rise in total Atlantic cyclone intensity been a result of increases in absolute sea surface temperature (SST) or relative sea surface temperature:

From Vecchi et al 2008

Figure 1

The top graph (above) shows a correlation of 0.79 between SST and PDI (power dissipation index). The bottom graph shows a correlation of 0.79 between relative SST (local sea surface temperature minus the average tropical sea surface temperature) and PDI.
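For readers who want to see what goes into the PDI: it is Emanuel's measure, the integral of maximum sustained wind speed cubed over each storm's lifetime. Here is a minimal sketch; the 6-hourly interval matches standard best-track reporting, and the storm wind speeds below are invented for illustration:

```python
# Sketch of the power dissipation index (PDI): the cube of maximum
# sustained wind speed, integrated over a storm's lifetime. A real
# seasonal PDI sums this over every storm using best-track data.

def pdi(max_winds_ms, interval_s=6 * 3600):
    """PDI (m^3/s^2): sum of v_max^3 at each 6-hourly report."""
    return sum(v ** 3 for v in max_winds_ms) * interval_s

# A made-up storm intensifying then decaying (m/s, every 6 hours)
storm = [18, 25, 33, 45, 50, 42, 30, 20]
storm_pdi = pdi(storm)
```

Because the wind speed is cubed, a season's PDI is dominated by its most intense storms – which is why how well models capture the strongest hurricanes matters so much.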

With more CO2 in the atmosphere from burning fossil fuels we expect a warmer SST in the tropical Atlantic in 2100 than today. But we don’t expect the tropical Atlantic to warm faster than the tropics in general.

If cyclone intensity is dependent on local SST we expect more cyclones, or more powerful cyclones. If cyclone intensity is dependent on relative SST we expect no increase in cyclones. This is because climate models predict warmer SSTs in the future but not warmer Atlantic SSTs than the tropics. The paper also shows a few high resolution models – green symbols – sitting close to the zero change line.

Now predicting tropical cyclones with GCMs has a fundamental issue – the grid scale of a modern high-resolution GCM is around 100 km, but cyclone prediction requires much finer resolution because of the storms' relatively small size.

Thomas Knutson and co-authors (including the great Isaac Held) produced a 2007 paper with an interesting method (the idea itself is not at all new). They input actual meteorological data (i.e. real history from the NCEP reanalysis) into a high resolution model covering just the Atlantic region. Their aim was to see how well this model could reproduce tropical storms. There are some technicalities to the model – the output is constantly "nudged" back towards the actual climatology, and at the boundaries of the model we can't expect good simulation results. The model resolution is 18 km.
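The "nudging" technique can be pictured as relaxing the model state a little toward the reanalysis at each step. This is only a schematic of the idea (Knutson et al nudge the large scales spectrally; the relaxation factor and the toy "model" here are invented):

```python
# Schematic of nudging (not Knutson et al's actual scheme): after each
# model step, relax the state part-way back toward the observed
# (reanalysis) value so the simulation cannot drift far from history.

def nudged_step(state, reanalysis, model_step, relaxation=0.05):
    """Advance the model one step, then pull it toward observations."""
    forecast = model_step(state)
    return forecast + relaxation * (reanalysis - forecast)

# Toy usage: a "model" that drifts +1 per step, nudged toward an
# observed value of 10. The drift is halted where it balances the
# nudging force, instead of running away (to 200 after 200 free steps).
state = 0.0
for _ in range(200):
    state = nudged_step(state, 10.0, lambda x: x + 1.0)
```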

The main question addressed here is the following: Assuming one has essentially perfect knowledge of large-scale atmospheric conditions in the Atlantic over time, how well can one then simulate past variations in Atlantic hurricane activity using a dynamical model?

They comment that the cause of the recent (at that time) upswing in hurricane activity "remains unresolved". (Of course, fast forward to 2016, before the two recent large landfall hurricanes, and overall activity was at its lowest since around 1970. In early 2018, this may be revised again..)

Two interesting graphs emerge. First an excellent match between model and observations for overall frequency year on year:

From Knutson et al 2007

Figure 2

Second, an inability to predict the most intense hurricanes. The black dots are observations, the red dots are simulations from the model. The vertical axis, a little difficult to read, is SLP, or sea level pressure – a lower central pressure indicates a more intense hurricane:

From Knutson et al 2007

Figure 3

These results are a common theme of many papers – inputting the historical climatological data into a model we can get some decent results on year to year variation in tropical cyclones. But models under-predict the most intense cyclones (hurricanes).

Here is Morris Bender and co-authors (including Thomas Knutson, Gabriel Vecchi – a frequent author or co-author in this genre, and of course Isaac Held) from 2010:

Some statistical analyses suggest a link between warmer Atlantic SSTs and increased hurricane activity, although other studies contend that the spatial structure of the SST change may be a more important control on tropical cyclone frequency and intensity. A few studies suggest that greenhouse warming has already produced a substantial rise in Atlantic tropical cyclone activity, but others question that conclusion.

This is a very typical introduction in papers on this topic. I note in passing that this is a huge blow to the idea that climate scientists only ever project more certainty and alarm about the harm from future CO2 emissions. They don't. However, it is also true that some climate scientists believe recent events have been accentuated by the last century of fossil fuel burning, and these perspectives might be reported in the media. I try to ignore the media, and that is my recommendation to readers on just about all subjects except essential ones like the weather and celebrity news.

This paper used a weather prediction model starting a few days before each storm to predict the outcome. If you understand the idea behind Knutson 2007 then this is just one step further – a few days prior to the emergence of an intense storm, input the actual climate data into a high resolution model and see how well the high res model predicts the observations. They also used projected future climates from CMIP3 models (note 1).

In the set of graphs below there are three points I want to highlight – and you probably need to click on the graph to enlarge it.

First, in graph B, “Zetac” is the model used by Knutson et al 2007, whereas GFDL is the weather prediction model getting better results in this paper – you can see that observations and the GFDL are pretty close in the maximum wind speed distribution. Second, the climate change predictions in E show that predictions of the future show an overall reduction in frequency of tropical storms, but an increase in the frequency of storms with the highest wind speeds – this is a common theme in papers from this genre. Third, in graph F, the results (from the weather prediction model) fed by different GCMs for future climate show quite different distributions. For example, the UKMO model produces a distribution of future wind speeds that is lower than current values.

From Bender et al 2010

Figure 4 – Click to enlarge

In this graph (S3 from the Supplementary data) we see graphs of the difference between future projected climatologies and current climatologies for three relevant parameters for each of the four different models shown in graph F in the figure above:

From Bender et al 2010

Figure 5 – Click to enlarge

This illustrates that different projected future climatologies, which all show increased SST in the Atlantic region, generate quite different hurricane intensities. The paper suggests that the reduction in wind shear in the UKMO model produces a lower frequency of higher intensity hurricanes.

Conclusion

This article illustrates that feeding higher resolution models with current data can generate realistic cyclone data in some aspects, but less so in other aspects. As we increase the model resolution we can get even better results – but this is dependent on inputting the correct climate data. As we look towards 2100 the questions are – How realistic is the future climate data? How does that affect projections of hurricane frequencies and intensities?

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

Impacts – XI – Rainfall 1

Impacts – XII – Rainfall 2

Impacts – XIII – Rainfall 3

References

Hurricanes and climate: the US CLIVAR working group on hurricanes, American Meteorological Society, Kevin Walsh et al (2015) – free paper

Whither Hurricane Activity? Gabriel A Vecchi, Kyle L Swanson & Brian J. Soden, Science (2008) – free paper

Simulation of the Recent Multidecadal Increase of Atlantic Hurricane Activity Using an 18-km-Grid Regional Model, Thomas Knutson et al, American Meteorological Society, (2007) – free paper

Modeled Impact of Anthropogenic Warming on the Frequency of Intense Atlantic Hurricanes, Morris A Bender et al, Science (2010) – free paper

Notes

Note 1: The scenario is A1B, which is similar to RCP6 – that is, an approximate doubling of CO2 by the end of the century. The simulations came from the CMIP3 suite of model results.


In XII – Rainfall 2 we saw the results of many models on rainfall as GHGs increase. They project wetter tropics, drier subtropics and wetter higher latitude regions. We also saw an expectation that rainfall will increase globally, by something like 2-3% per ºC of warming.

Here is a (too small) graph from Allen & Ingram (2002) showing the model response of rainfall to temperature changes from GHG increases. The dashed line marked "C-C" is the famous (in climate physics) Clausius-Clapeyron relation which, at current temperatures, implies about a 7% increase in water vapor per ºC of warming. The red triangles are the precipitation changes from model simulations, showing about half that rate.

From Allen & Ingram (2002)

Figure 1

Here is another graph from the same paper showing global mean temperature change (top) and rainfall over land (bottom):

From Allen & Ingram (2002)

Figure 2

The temperature has increased over the last 50 years, and models and observations show that the precipitation has.. oh, it’s not changed. What is going on?

First, the authors explain some important background:

The distribution of moisture in the troposphere (the part of the atmosphere that is strongly coupled to the surface) is complex, but there is one clear and strong control: moisture condenses out of supersaturated air.

This constraint broadly accounts for the humidity of tropospheric air parcels above the boundary layer, because almost all such parcels will have reached saturation at some point in their recent history. Physically, therefore, it has long seemed plausible that the distribution of relative humidity would remain roughly constant under climate change, in which case the Clausius-Clapeyron relation implies that specific humidity would increase roughly exponentially with temperature.

This reasoning is strongest at higher latitudes where air is usually closer to saturation, and where relative humidity is indeed roughly constant through the substantial temperature changes of the seasonal cycle. For lower latitudes it has been argued that the real-world response might be different. But relative humidity seems to change little at low latitudes under a global warming scenario, even in models of very high vertical resolution, suggesting this may be a robust 'emergent constraint' on which models have already converged.

They continue:

If tropospheric moisture loading is controlled by the constraints of (approximately) unchanged relative humidity and the Clausius-Clapeyron relation, should we expect a corresponding exponential increase in global precipitation and the overall intensity of the hydrological cycle as global temperatures rise?

This is certainly not what is observed in models.

To clarify, the point in the last sentence is that models do show an increase in precipitation, but not at the same rate as the expected increase in specific humidity (see note 1 for new readers).

They describe their figure 2 (our figure 1 above) and explain:

The explanation for these model results is that changes in the overall intensity of the hydrological cycle are controlled not by the availability of moisture, but by the availability of energy: specifically, the ability of the troposphere to radiate away latent heat released by precipitation.

At the simplest level, the energy budgets of the surface and troposphere can be summed up as a net radiative heating of the surface (from solar radiation, partly offset by radiative cooling) and a net radiative cooling of the troposphere to the surface and to space (R) being balanced by an upward latent heat flux (LP, where L is the latent heat of evaporation and P is global-mean precipitation): evaporation cools the surface and precipitation heats the troposphere.

[Emphasis added].

Basics Digression

Picture the atmosphere over a long period of time (like a decade), and for the whole globe. If it hasn't heated up or cooled down we know that the energy in must equal the energy out (or if it has done so only marginally then energy in is almost equal to energy out). This is the first law of thermodynamics – energy is conserved.

What energy comes into the atmosphere?

  1. Solar radiation is partly absorbed by the atmosphere (most is transmitted through and heats the surface of the earth)
  2. Radiation emitted from the earth’s surface (we’ll call this terrestrial radiation) is mostly absorbed by the atmosphere (some is transmitted straight through to space)
  3. Warm air is convected up from the surface
  4. Heat stored in evaporated water vapor (latent heat) is convected up from the surface and the water vapor condenses out, releasing heat into the atmosphere when this happens

How does the atmosphere lose energy?

  1. It radiates downwards to the surface
  2. It radiates out to space

..end of digression
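For readers who like to check the arithmetic, the digression's energy flows can be totted up with round numbers in the style of published global energy-budget diagrams (the specific values below, in W/m² global mean, are illustrative – they are not from this article):

```python
# Approximate global-mean fluxes (W/m^2) in the style of published
# energy-budget diagrams; the numbers are illustrative round values.

energy_in = {
    "solar radiation absorbed by the atmosphere": 78,
    "terrestrial radiation absorbed": 356,  # surface emits ~396; ~40 escapes directly to space
    "warm air convected up (sensible heat)": 17,
    "latent heat released by condensation": 80,
}
energy_out = {
    "radiated down to the surface": 333,
    "radiated out to space": 199,
}

total_in = sum(energy_in.values())
total_out = sum(energy_out.values())
# total_in and total_out come out nearly equal, as the first law requires
```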

Changing Energy Budget

In a warmer world, if we have more evaporation we have more latent heat transfer from the surface into the troposphere. But the atmosphere has to be able to radiate this heat away. If it can’t, then the atmosphere becomes warmer, and this reduces convection. So with a warmer surface we may have a plentiful potential supply of latent heat (via water vapor) but the atmosphere needs a mechanism to radiate away this heat.

Allen & Ingram put forward a simple conceptual equation:

ΔRc + ΔRT = LΔP

where the change in radiative cooling, ΔR, is split into two components: ΔRc, which is independent of the change in atmospheric temperature, and ΔRT, which depends only on the temperature.

L = latent heat of vaporization of water (a constant); ΔP = change in rainfall (equal to the change in evaporation, since global evaporation is balanced by global rainfall)

LΔP is about 1W/m² per 1% increase in global precipitation.

Now, if we double CO2, then before any temperature changes we decrease the outgoing longwave radiation through the tropopause (the top of the troposphere) by about 3-4W/m² and we increase atmospheric radiation to the surface by about 1W/m².

So for doubled CO2, ΔRc = -2 to -3W/m²; prior to any temperature change ΔRT = 0; so LΔP is negative and precipitation must reduce – by roughly 2-3%, using the 1W/m² per 1% figure above.

The authors comment that increasing CO2 before any temperature change takes place reduces the intensity of the hydrological cycle and this effect was seen in early modeling experiments using prescribed sea surface temperatures.

Now, of course, the idea of doubling CO2 without any temperature change is just a thought experiment. But it’s an important thought experiment because it lets us isolate different factors.

The authors then consider their factor ΔRT:

The enhanced radiative cooling due to tropospheric warming, ΔRT, is approximately proportional to ΔT: tropospheric temperatures scale with the surface temperature change and warmer air radiates more energy, so ΔRT = kΔT, with k=3W/(m²K)..

All this is saying is that as the surface warms, the atmosphere warms at about the same rate, and the atmosphere then emits more radiation. This is why the model results of rainfall in our figure 2 above show no trend in rainfall over 50 years, and also match the observations – the constraint on rainfall is the changing radiative balance in the troposphere.
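Putting the pieces of ΔRc + ΔRT = LΔP together numerically shows where the models' ~2%/ºC comes from. A minimal sketch, using the values quoted in this article (LΔP ≈ 1W/m² per 1% of precipitation, k = 3W/m²K, ΔRc ≈ -2.5W/m² for doubled CO2):

```python
# Sketch of the Allen & Ingram constraint: delta_Rc + k*delta_T = L*delta_P,
# with L*delta_P ~ 1 W/m^2 per 1% change in global-mean precipitation.

K = 3.0          # W/(m^2 K): extra radiative cooling per K of tropospheric warming
W_PER_PCT = 1.0  # W/m^2 of latent heating per 1% precipitation change

def precip_change_pct(delta_rc, delta_t):
    """Percent change in global-mean precipitation allowed by the energy budget."""
    return (delta_rc + K * delta_t) / W_PER_PCT

# Doubled CO2 before any warming: precipitation falls ~2.5%
immediate = precip_change_pct(-2.5, 0.0)

# After ~3 K of warming, the budget allows ~6.5% more rain -
# a bit over 2% per K, far below the ~7%/K Clausius-Clapeyron rate
equilibrium = precip_change_pct(-2.5, 3.0)
```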

And so they point out:

Thus, although there is clearly usable information in fig. 3 [our figure 2], it would be physically unjustified to estimate ΔP/ΔT directly from 20th century observations and assume that the same quantity will apply in the future, when the balance between climate drivers will be very different.

There is a lot of other interesting commentary in their paper, although the paper itself is now quite dated (and unfortunately behind a paywall). In essence they discuss the difficulties of modeling precipitation changes, especially for a given region, and are looking for “emergent constraints” from more fundamental physics that might help constrain forecasts.

A forecasting system that rules out some currently conceivable futures as unlikely could be far more useful for long-range planning than a small number of ultra-high-resolution forecasts that simply rule in some (very detailed) futures as possibilities.

This is a very important point when considering impacts.

Conclusion

Increasing the surface temperature by 1ºC is expected to increase the humidity over the ocean by about 7%. This is simply the basic physics of saturation. However, climate models predict an increase in mean rainfall of maybe 2-3% per ºC. The fundamental reason is that the movement of latent heat from the surface to the atmosphere has to be radiated away by the atmosphere, and so the constraint is the ability of the atmosphere to do this. And so the limiting factor in increasing rainfall is not the humidity increase, it is the radiative cooling of the atmosphere.

We also see that despite 50 years of warming, mean rainfall hasn’t changed. Models also predict this. This is believed to be a transient state, for reasons explained in the article.

References

Constraints on future changes in climate and the hydrologic cycle, MR Allen & WJ Ingram, Nature (2002)  – freely available [thanks, Robert]

Notes

Note 1: Relative humidity is measured as a percentage. If the relative humidity = 100% it means the air is saturated with water vapor – it can't hold any more water vapor. If the relative humidity = 0% it means the air is completely dry. As temperature increases the ability of air to hold water vapor increases non-linearly.

For example, at 0ºC, 1kg of air can carry around 4g of water vapor, at 10ºC that has doubled to 8g, and at 20ºC it has doubled again to 15g (I’m using approximate values).

So now imagine saturated air over the ocean at 20ºC rising up and therefore cooling (it is cooler higher up in the atmosphere). By the time the air parcel has cooled down to 0ºC (this might be anything from 2km to 5km altitude) it is still saturated but is only carrying 4g of water vapor, having condensed out 11g into water droplets.
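The numbers in this note can be checked against a standard empirical fit for saturation vapor pressure. This sketch uses the Magnus/Bolton approximation (standard meteorology, not a formula from this article):

```python
import math

# Saturation vapor pressure via the Bolton (1980) / Magnus approximation,
# and the mixing ratio: grams of water vapor per kg of air at saturation.

def sat_vapor_pressure_hpa(t_c):
    """Saturation vapor pressure (hPa) at temperature t_c (deg C)."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def sat_mixing_ratio_g_per_kg(t_c, p_hpa=1013.25):
    """Water vapor (g) that 1 kg of air can hold at saturation."""
    es = sat_vapor_pressure_hpa(t_c)
    return 1000.0 * 0.622 * es / (p_hpa - es)

# Reproduces the note's approximate values: ~3.8, ~7.6, ~14.7 g/kg
capacities = [sat_mixing_ratio_g_per_kg(t) for t in (0, 10, 20)]

# Fractional growth per deg C near 15 C - close to the quoted ~7%/K
growth = sat_vapor_pressure_hpa(16.0) / sat_vapor_pressure_hpa(15.0) - 1.0
```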

At least 99.9% of physicists believe the theory of gravity, and the heliocentric model of the solar system. The debate is over. There is no doubt that we can send a manned (and woman-ed) mission to Mars.

Some “skeptics” say it can’t be done. They are denying basic science! Gravity is plainly true. So is the heliocentric model. Everyone agrees. There is an overwhelming consensus. So the time for discussion is over. There is no doubt about the Mars mission.

I offer this analogy (note 1) for people who don't understand the relationship between five completely different ideas:

  • the “greenhouse” effect
  • burning fossil fuels adds CO2 to the atmosphere, increasing the "greenhouse" effect
  • climate models
  • crop models
  • economic models

The first two items on the list are fundamental physics and chemistry, and while proving them rigorously is an advanced exercise (see The "Greenhouse" Effect Explained in Simple Terms for the first one), they are indisputable. Together they create the theory of AGW (anthropogenic global warming). This says that humans are contributing to global warming by burning fossil fuels.

99.9% of people who understand atmospheric physics believe this unassailable idea (note 2).

This means that if we continue with “business as usual” (note 3) and keep using fossil fuels to generate energy, then by 2100 the world will be warmer than today.

How much warmer?

For that we need climate models.

Climate Models

These are models which break the earth's surface, ocean and atmosphere into a big grid so that we can use physics equations (momentum, heat transfer and others) to calculate future climate, numerically, via finite-difference or spectral methods. These models include giant fudge-factors that can't be validated (by giant fudge factors I mean "sub-grid parameterizations" and unknown parameters, but I'm writing this article for a non-technical audience).
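To picture what "a big grid" plus physics equations means, here is a deliberately tiny toy (nothing like a real GCM, which solves momentum, radiation and moisture equations on a 3-D grid): a one-dimensional ring of temperature cells stepped forward with a diffusion equation. Anything smaller than one cell – a cloud, a thunderstorm – cannot be resolved and has to be parameterized, which is where the fudge factors enter.

```python
# Toy "grid model" (illustrative only): temperatures on a periodic 1-D
# ring of cells, stepped forward with an explicit finite-difference
# diffusion scheme, dT/dt = D * d2T/dx2.

def step(temps, diffusivity=0.1):
    """One finite-difference step; neighbors wrap around the ring."""
    n = len(temps)
    return [
        temps[i] + diffusivity * (temps[(i - 1) % n] - 2 * temps[i] + temps[(i + 1) % n])
        for i in range(n)
    ]

temps = [10.0, 12.0, 20.0, 12.0, 10.0]  # initial hot spot
for _ in range(50):
    temps = step(temps)
# The hot spot smooths toward the uniform mean (12.8); the mean itself
# is conserved by the scheme, just as energy is in the real equations.
```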

One way to validate models is to model the temperature over the last 100 years. Another way is to produce a current climatology that matches observations. Generally temperature is the parameter with most attention (note 4).

Some climate models predict that if we double CO2 in the atmosphere (from pre-industrial periods) then surface temperature will be around 4.5ºC warmer. Others that the temperature will be 1.5ºC warmer. And everything in between.

Surely we can just look at which models reproduced the last 100 years temperature anomaly the best and work with those?

From Mauritsen et al 2012

If the model that predicts 1.5ºC in 2100 is close to the past, while the one that predicts 4.5ºC has a big overshoot, we will know that 1.5ºC is a more likely future. Conversely, if the model that predicts 4.5ºC in 2100 is close to the past but the 1.5ºC model woefully under-predicts the last 100 years of warming then we can expect more like 4.5ºC in 2100.

You would think so, but you would be wrong.

All the models get the last 100 years of temperature changes approximately correct. Jeffrey Kiehl produced a paper 10 years ago which analyzed the then current class of models and gently pointed out the reason. Models with large future warming included a high negative effect from aerosols over the last 100 years. Models with small future warming included a small negative effect from aerosols over the last 100 years. So both reproduced the past but with a completely different value of aerosol cooling. You might think we can just find out the actual cooling effect of aerosols around 1950 and then we will know which climate model to believe – but we can’t. We didn’t have satellites to measure the cooling effect of aerosols back then.
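Kiehl's compensation is easy to see with a toy calculation. The numbers below are invented for illustration (they are not Kiehl's published values): two models with quite different climate sensitivities both reproduce the same historical warming by pairing with different, unmeasured amounts of aerosol cooling.

```python
# Illustrative only - not Kiehl's actual numbers. Equilibrium warming is
# modeled as sensitivity (K per W/m^2) times net forcing (W/m^2).

F_GHG_HISTORICAL = 2.5  # assumed 20th-century greenhouse forcing
F_DOUBLED_CO2 = 3.7     # canonical forcing for doubled CO2

def warming(sensitivity, forcing):
    """Equilibrium warming (K) for a given sensitivity and net forcing."""
    return sensitivity * forcing

# Low sensitivity paired with weak aerosol cooling...
hist_low = warming(0.5, F_GHG_HISTORICAL - 0.5)   # 1.0 K
# ...high sensitivity paired with strong aerosol cooling
hist_high = warming(1.0, F_GHG_HISTORICAL - 1.5)  # also 1.0 K

# Both "validate" against the past, yet diverge for the future:
future_low = warming(0.5, F_DOUBLED_CO2)    # 1.85 K for doubled CO2
future_high = warming(1.0, F_DOUBLED_CO2)   # 3.7 K for doubled CO2
```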

This is the challenge of models with many parameters that we don’t know. When a modeler is trying to reproduce the past, or the present, they pick the values of parameters which make the model match reality as best as they can. This is a necessary first step (note 5).

So how warm will it be in 2100 if we double CO2 in the atmosphere?

Somewhat warmer

Models also predict rainfall, drought and storms. But they aren't as good at these as they are at temperature. Bray and von Storch survey climate scientists periodically on a number of topics. Here is their response to the question:

How would you rate the ability of regional climate models to make 50 year projections of convective rain storms/thunder storms? (1 = very poor to 7 = very good)

Similar ratings are obtained for rainfall predictions. The last 50 years has seen no apparent global worsening of storms, droughts and floods, at least according to the IPCC consensus (see Impacts – V – Climate change is already causing worsening storms, floods and droughts).

Sea level is expected to rise between around 0.3m and 0.6m (see Impacts – VI – Sea Level Rise 1 and IX – Sea Level 4 – Sinking Megacities) – this is from AR5 of the IPCC (under scenario RCP6). I mention this because the few people I’ve polled thought that sea level was expected to be 5-10m higher in 2100.

Actual reports with uneventful projections don’t generate headlines.

Crop Models

Crop models build on climate models. Once we know rainfall, drought and temperature we can work out how this impacts crops.

Will we starve to death? Or will there be plentiful food?

Past predictions of disaster haven’t been very accurate, although they are wildly successful at generating media headlines and book sales, as Paul Ehrlich found to his benefit. But that doesn’t mean future predictions of disaster are necessarily wrong.

There are a number of problems with trying to answer the question.

Even if climate models could predict the global temperature, when it comes to a region the size of, say, northern California their accuracy is much lower. Likewise for rainfall. Models which produce similar global temperature changes often have completely different regional precipitation changes. For example, from the IPCC Special Report on Extremes (SREX), p. 154:

At regional scales, there is little consensus in GCM projections regarding the sign of future change in monsoon characteristics, such as circulation and rainfall. For instance, while some models project an intense drying of the Sahel under a global warming scenario, others project an intensification of the rains, and some project more frequent extreme events..

In a warmer world with more CO2 (which helps some plants) and maybe more rainfall, or maybe less, what can we expect out of crop yields? It’s not clear. The IPCC AR5 wg II, ch 7, p 496:

For example, interactions among CO2 fertilization, temperature, soil nutrients, O3, pests, and weeds are not well understood (Soussana et al., 2010) and therefore most crop models do not include all of these effects.

Of course, as climate changes over the next 80 years agricultural scientists will grow different crops, and develop new ones. In 1900, almost half the US population worked in farming. Today the figure is 2-3%. Agriculture has changed unimaginably.

In the left half of this graph we can see global crop yield improvements over 50 years (the right side is projections to 2050):

From Ray et al 2013

Economic Models

What will the oil price be in 2020? Economic models give you the answer. Well, they give you an answer. And if you consult lots of models they give you lots of different answers. When the oil price changes a lot, which it does from time to time, all of the models turn out to be wrong. Predicting future prices of commodities is very hard, even when it is of paramount concern for major economies, and even when a company could make vast profits from accurate prediction.

AR5 of the IPCC report, wg 2, ch 7, p.512, had this to say about crop prices in 2050:

Changes in temperature and precipitation, without considering effects of CO2, will contribute to increased global food prices by 2050, with estimated increases ranging from 3 to 84% (medium confidence). Projections that include the effects of CO2 changes, but ignore O3 and pest and disease impacts, indicate that global price increases are about as likely as not, with a range of projected impacts from –30% to +45% by 2050..

..One lesson from recent model intercomparison experiments (Nelson et al., 2014) is that the choice of economic model matters at least as much as the climate or crop model for determining price response to climate change, indicating the critical role of economic uncertainties for projecting the magnitude of price impacts.

In 2001, the 3rd report (often called TAR) said, ch 5, p.238, perhaps a little more clearly:

..it should be noted however that hunger estimates are based on the assumptions that food prices will rise with climate change, which is highly uncertain

Economic models are not very good at predicting anything. As Herbert Stein said, summarizing a lifetime in economics:

  • Economists do not know very much
  • Other people, including the politicians who make economic policy, know even less about economics than economists do

Conclusion

Recently a group, Cook et al 2013, reviewed over 10,000 abstracts of climate papers and concluded that 97% believed in the proposition of AGW – the proposition that humans are contributing to global warming by burning fossil fuels. I’m sure if the question were posed the right way directly to thousands of climate scientists, the number would be over 99%.

It’s not in dispute.

AGW is a necessary theory for Catastrophic Anthropogenic Global Warming (CAGW). But not sufficient by itself.

Likewise we know for sure that gravity is real and the planets orbit the sun. But it doesn’t follow that we can get humans safely to Mars and back. Maybe we can. Understanding gravity and the heliocentric theory is a necessary condition for the mission, but a lot more needs to be demonstrated.

The uncertainties in CAGW are huge.

Economic models that have no predictive skill are built on limited crop models which are built on climate models which have a wide range of possible global temperatures and no consensus on regional rainfall.

Human ingenuity somehow solved the problem of going from 2.5bn people in the middle of the 20th century to more than 7bn people today, and yet the proportion of the global population in abject poverty (note 6) has dropped from over 40% to maybe 15%. This was probably unimaginable 70 years ago.

Perhaps reasonable people can question whether climate change is definitely the greatest threat facing humanity?

Perhaps questioning the predictive power of economic models is not denying science?

Perhaps it is ok to be unsure about the predictive power of climate models that contain sub-grid parameterizations (giant fudge factors) and that collectively provide a wide range of forecasts?

Perhaps people who question the predictions aren’t denying basic (or advanced) science, and haven’t lost their reason or their moral compass?

—-

[Note to commenters, added minutes after this post was written – this article is not intended to restart debate over the “greenhouse” effect, please post your comments in one of the 10s (100s?) of articles that have covered that subject, for example – The “Greenhouse” Effect Explained in Simple Terms – Comments on the reality of the “greenhouse” effect posted here will be deleted. Thanks for understanding.]

References

Twentieth century climate model response and climate sensitivity, Jeffrey Kiehl (2007)

Tuning the climate of a global model, Mauritsen et al (2012)

Yield Trends Are Insufficient to Double Global Crop Production by 2050, Deepak K. Ray et al (2013)

Quantifying the consensus on anthropogenic global warming in the scientific literature, Cook et al, Environmental Research Letters (2013)

The Great Escape, Angus Deaton, Princeton University Press (2013)

The various IPCC reports cited are all available at their website

Notes

1. An analogy doesn’t prove anything. It is for illumination.

2. How much we have contributed to the last century’s warming is not clear. The 5th IPCC report (AR5) said it was 95% certain that more than 50% of recent warming was caused by human activity. Well, another chapter in the same report suggested that this was a bogus statistic and I agree, but that doesn’t mean I think that the percentage of warming caused by human activity is lower than 50%. I have no idea. It is difficult to assess, likely impossible. See Natural Variability and Chaos – Three – Attribution & Fingerprints for more.

3. Reports on future climate often come with the statement “under a conservative business as usual scenario” but refer to a speculative and hard to believe scenario called RCP8.5 – see Impacts – II – GHG Emissions Projections: SRES and RCP. I think RCP 6 is much closer to the world of 2100 if we do little about carbon emissions and the world continues on the kind of development pathways that we have seen over the last 60 years. RCP8.5 was a scenario created to match a possible amount of CO2 in the atmosphere and how we might get there. Calling it “a conservative business as usual case” is a value-judgement with no evidence.

4. More specifically the change in temperature gets the most attention. This is called the “temperature anomaly”. Many models that do “well” on temperature anomaly actually do quite badly on the actual surface temperature. See Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes – you can see that many “fit for purpose” models have current climate halfway to the last ice age even though they reproduce the last 100 years of temperature changes pretty well. That is, they model temperature changes quite well, but not temperature itself.

5. This is a reasonable approach used in modeling (not just climate modeling) – the necessary next step is to try to constrain the unknown parameters and giant fudge factors (sub-grid parameterizations). Climate scientists work very hard on this problem. Many confused people writing blogs think that climate modelers just pick the values they like, produce the model results and go have coffee. This is not the case, and can easily be seen by just reviewing lots of papers. The problem is well-understood among climate modelers. But the world is a massive place, detailed past measurements with sufficient accuracy are mostly lacking, and sub-grid parameterizations of non-linear processes are a very difficult challenge (this is one of the reasons why turbulent flow is a mostly unsolved problem).

6. This is a very imprecise term. I refer readers to the 2015 Nobel Prize winner Angus Deaton and his excellent book, The Great Escape (2013) for more.

In a few large companies I observed the same phenomenon – over here are corporate dreams and over there is reality. Team – your job is to move reality over to where corporate dreams are.

It wasn’t worded like that. Anyway, reality won each time. Reality is pretty stubborn. Of course, refusal to accept “reality” is what has created great inventions and companies. It’s not always clear what is reality and what is today’s lack of vision vs tomorrow’s idea that just needs lots of work to make a revolution. So ideas should be challenged to find “reality”. But reality itself is hard to change.

I started checking Carbon Brief via my blog feed a few months back. It has some decent articles, although they lean more towards reporting press releases and executive summaries than critical analysis. But at least they lack hysterical headlines, and good vs evil doesn't even appear in the subtext, which is refreshing. I've been too busy with other projects recently to devote any time to writing about climate science or impacts, but their article today – In-depth: How a smart flexible grid could save the UK £40bn – did inspire me to read one of the actual reports referenced. Part of the reason my interest was piqued was that I've seen many articles where "inflexible baseload" is compared with "smart decentralized grids" and "flexible systems". All lovely words, which must mean they are better ways to create an electricity grid. A company I used to work for created a few products with "smart" in the name. All good marketing. But what about reality? Let's have a look.

The report in question is An analysis of electricity system flexibility for Great Britain from November 2016 by Carbon Trust. The UK government has written into legislation a commitment to reduce carbon emissions to almost nothing by 2050, so they need to get to work.

What is fascinating reading the report is that all of the points I made in previous articles in this series show up, but dressed up in a very positive way:

We’re choosing between all these great options on the best way to save money

For those who like a short story, I’ll rewrite that summary:

We’re choosing between all these expensive options trying to understand which one (or what mix) will be the least expensive. Unfortunately we don’t know, but we need to start now because we’ve already committed to this huge carbon reduction by 2050. If we make a good pick then we’ll spend the least amount of money, but if we get it wrong we will be left with lots of negative outcomes and high costs for a long time.

Well, when you pay for the report you should be allowed to get the window dressing that you like. That’s a minimum.

The imponderables are that wind power is intermittent (and there’s not much solar at high latitudes), so you have some difficult choices.

I’ll just again repeat something I’ve said a few times in this series. I’m not trying to knock renewable energy or decarbonizing energy. But solving a problem requires understanding the scale of the problem and especially the hardest challenges – before you start on the main project.

As a digression, there is a lovely irony about the use of the words “flexible” for renewable energy vs “inflexible” for conventional energy. Planning conventional energy grids is pretty easy – you can be very flexible because a) you have dispatchable power, and b) you can stick the next power station right next to the new demand as and when it appears. So the current system is incredibly flexible and you don’t need to be much of a crystal ball gazer. That said, it’s just my appreciation of irony and how I can’t help enjoying the excitement other people have in taking up inspirational words for ideas they like.. anyway, it has zero bearing on the difficult questions at hand.

As the article from Carbon Brief said, there’s £40bn of savings to be had. Here is the report:

The modelling for the analysis has shown that the deployment of flexibility technologies could save the UK energy system £17-40 billion cumulative to 2050 against a counterfactual where flexibility technologies are not available

Ok, so it’s not £40bn of savings. The modeling says getting it wrong will cost £40bn more than picking better options. Or if the technologies don’t appear then it will be more expensive..

What are these “flexible grid technologies”?

Demand Management

The first one is the effectively untested idea of demand management (see XVIII – Demand Management & Levelized Cost), which allows the grid operator to shift people’s demand to when supply is available. (Remember that the biggest current challenge of an electricity grid is that, second by second and minute by minute, the grid operators have to match supply with demand – a big challenge, but one that has been conquered with dispatchable power and a variety of mechanisms for the different timescales.) I say untested because only small-scale trials have been done, with very mixed results, and some large-scale trials are needed. They will be expensive. As the report says:

Demand side response has a key role in providing flexibility but also has the greatest uncertainty in terms of cost and uptake

However, with a big enough stick you get the result you want. The question is how palatable that is to voters and what kind of stomach politicians have for voter unrest. For example, increase the cost of electricity to £100/kWhr when little is available. Once you hear that a few friends received a £10,000 bill that they can’t get out of and are being taken to court you will be running around the house turning everything off and paying close attention to the tariff changes. When the tariff soars, you are all sitting in your house in your winter coats (perhaps with a small bootleg butane heater) with the internet off, the TV off, the lights off and singing entertaining songs about your favorite politicians.

I present this not in parody, but just to demonstrate that it is completely possible to get demand management to work. Just need a strong group of principled politicians with the courage of their convictions and no fear of voters.. (yes, that last bit was parody, if you are a politician you have to be afraid of voters, it’s the job requirement).

So the challenge isn’t “the technology”, it’s the cost of rolling out the technology and how inflexible consumers are with their demand preferences. What is the elasticity of demand? What results will you get? And the timescale matters. If you need people to delay using energy by one hour, you get one result. If you need people to delay using energy by two days, you get a completely different result. There is no data on this.

Pick a few large cities, design the experiments, implement the technology and use it to test different time horizons in different weather over a two year period and see how well it works. This is an urgent task that a few countries should have seriously started years ago. Data is needed.

Storage

Table 26 in the appendices has some storage costs, which for bulk storage “Includes a basket of technologies such as pumped hydro and compressed air energy storage” and is costed in £/kW – with a range of about £700 – 1,700/kW ($900 – 2,200/kW). This is for a 12 hour duration – typical daily cycle. These increase somewhat over the time period in question (to 2050) as you might expect.

For distributed storage “Based on a basket of lithium ion battery technologies” ranges from £900 – 1,300/kW today falling to £400 – 900/kW by 2050. This is for a 2 hour duration (and a 5-year lifetime). Meaning that the cost per unit of energy stored is £450 – 650/kWhr today falling to £200 – 450/kWhr by 2050. So they don’t have super-optimistic cost reductions for storage.
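The £/kW figures convert to cost per unit of energy stored simply by dividing by the storage duration in hours; a quick sketch of the arithmetic above:

```python
# Convert the report's storage costs from GBP per kW of capacity to GBP per kWh stored.
# The cost ranges and 2-hour duration are the figures quoted above for distributed storage.
def cost_per_kwh(cost_per_kw, duration_hours):
    """GBP per kWh of energy stored = GBP per kW of capacity / hours of discharge."""
    return cost_per_kw / duration_hours

# Distributed (lithium-ion) storage, 2-hour duration
print(cost_per_kwh(900, 2), cost_per_kwh(1300, 2))   # today:  450.0 650.0
print(cost_per_kwh(400, 2), cost_per_kwh(900, 2))    # 2050:   200.0 450.0
```

The same division applied to the bulk storage figures (12-hour duration) gives much lower per-kWh numbers, which is why pumped hydro is attractive where it can be built.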

The storage calculations under various scenarios range from 10-20GW with a couple of outliers (5GW and 28GW).

My back of the envelope calculation says that if you can’t expand pumped hydro, don’t build your gas plants, and do need to rely on batteries, then for a 2-day wind hiatus with no demand management you would spend “quite a bit”. This is based on the expected energy use (below) of about 60GW = 2,880 GWhr for 48 hours. Converting to kWhr we get 2,880 x 10⁶ kWhr, and multiplying by the cost of £300/kWhr gives £864bn every 5 years (the assumed battery lifetime), or about £170bn per year. UK GDP is about £2,000bn per year at the moment. This gives an idea of the cost of batteries when you want to back up power for a period of days.
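That calculation can be laid out explicitly. Every input below is an assumption taken from this post – the mid-range battery cost, the 60GW average demand and the 5-year lifetime – not a figure from the report itself:

```python
# Back-of-envelope cost of battery backup for a 2-day wind hiatus
average_demand_gw = 60            # assumed 2050 average demand, GW
hours = 48                        # a 2-day wind lull
energy_gwh = average_demand_gw * hours         # 2,880 GWh supplied from storage
energy_kwh = energy_gwh * 1e6                  # convert GWh -> kWh
cost_per_kwh = 300                # mid-range battery cost, GBP per kWh stored
total_cost_bn = energy_kwh * cost_per_kwh / 1e9    # GBP billions
battery_lifetime_years = 5        # quoted distributed-storage lifetime
annual_cost_bn = total_cost_bn / battery_lifetime_years
print(total_cost_bn, round(annual_cost_bn, 1))  # 864.0 172.8
```

Against a UK GDP of roughly £2,000bn per year, the annualized figure is close to a tenth of GDP – which is the point of the exercise.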

Backup Plants

The backup gas plants show as around 20GW of CCGT and somewhere between 30-90GW of peaking plants added by 2050 (depending on the scenario). This makes sense. You need something less expensive than storage. It appears the constraint is the requirement to cut emissions so much that even running these plants as backup for low wind / no wind is a problem.

Expected Energy Use

The consumed electricity for 2020 is given (in the appendix) as 320-340 TWhr. Dividing by the number of hours in the year gives us the average output of 36-39 GW, which seems about right (recent figures from memory were about 30GW for the UK on average).

In 2050 the estimate is for 410-610 TWhr or an average of 47-70GW. This includes electric vehicles and heating – that is, all energy is coming from the grid – so on the surface it seems too low (current electricity usage is about 40% of total energy). Still, I’ve never tried to calculate it and they probably have some assumptions (not in this report) on improved energy efficiency.
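The TWhr-to-average-GW conversion used in both paragraphs is just division by the 8,760 hours in a year; a sketch:

```python
# Annual consumption (TWh) to average power (GW): 1 TWh = 1,000 GWh, year = 8,760 hours
def average_gw(annual_twh):
    return annual_twh * 1000 / 8760

for twh in (320, 340, 410, 610):
    print(twh, "TWh/yr ->", round(average_gw(twh), 1), "GW average")
# 320 -> 36.5, 340 -> 38.8 (the 2020 figures), 410 -> 46.8, 610 -> 69.6 (the 2050 range)
```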

Cost of Electricity in 2050 under These Various Scenarios

n/a

Conclusion

The key challenges for large-scale reductions in CO2 emissions haven’t changed. It is important to try to identify which future scenarios vs current plans will result in the most pain, but it’s clear that the important data needed to chart the right course is largely unknown. Luckily, report summaries can put some nice window-dressing on the problems.

As always with reports for public consumption the executive summary and the press release are best avoided. The chapters themselves and especially the appendices give some data that can be evaluated.

It’s clear that large-scale interconnectors across the country are needed to deliver power from places where high wind exists (e.g. the west coast of Scotland) to demand locations (e.g. London). But it’s not clear that inter-connecting to Europe will solve many problems, because most of northern and central Europe will likewise be looking for power when their wind output is low on a cold winter evening. Perhaps inter-connecting to further locations, as reviewed in XII – Windpower as Baseload and SuperGrids, is an option, although this wasn’t reviewed in the paper.

It wasn’t clear to me from the report whether gas plants alone – without storage, demand management, or importing large quantities of European electricity – would solve the problem, were it not for the very aggressive CO2 reduction targets. The report sort of hinted that the constraint of CO2 emissions forced the gas plants into less and less backup use, even though their available capacity was still very high in 2050. Wind turbines plus interconnectors around the country plus gas plants are simple and relatively quantifiable (current gas plants aren’t really optimized for this kind of backup, but it’s not peering into a crystal ball to make an intelligent estimate).

The cost of electricity in 2050 for these scenarios wasn’t given in this report.

Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases

Renewables VI – Report says.. 100% Renewables by 2030 or 2050

Renewables VII – Feasibility and Reality – Geothermal example

Renewables VIII – Transmission Costs And Outsourcing Renewable Generation

Renewables IX – Onshore Wind Costs

Renewables X – Nationalism vs Inter-Nationalism

Renewables XI – Cost of Gas Plants vs Wind Farms

Renewables XII – Windpower as Baseload and SuperGrids

Renewables XIII – One of Wind’s Hidden Costs

Renewables XIV – Minimized Cost of 99.9% Renewable Study

Renewables XV – Offshore Wind Costs

Renewables XVI – JP Morgan advises

Renewables XVII – Demand Management 1

Renewables XVIII – Demand Management & Levelized Cost

Renewables XIX – Behind the Executive Summary and Reality vs Dreams

A long time ago I wrote The Confirmation Bias – Or Why None of Us are Really Skeptics, with a small insight from Nassim Taleb. Right now I’m rereading The Righteous Mind: Why Good People are Divided by Politics and Religion by Jonathan Haidt.

This is truly a great book if you want to understand more about how we think and how we delude ourselves. Through experiments cognitive psychologists demonstrate that once our “moral machinery” has clicked in, which happens very easily, our reasoning is just an after-the-fact rationalization of what we already believe.

Haidt gives the analogy of a rider on an elephant. The elephant starts going one way rather than another, and the rider, unaware of why, starts coming up with invented reasons for the new direction. It’s like the rider is the PR guy for the elephant. In Haidt’s analogy, the rider is our reasoning, and the elephant is our moral machinery. The elephant is in charge. The rider thinks he is.

As an intuitionist, I’d say that the worship of reason is itself an illustration of one of the most long-lived delusions in Western history: the rationalist delusion..

..The French cognitive scientists Hugo Mercier and Dan Sperber recently reviewed the vast research literature on motivated reasoning (in social psychology) and on the biases and errors of reasoning (in cognitive psychology). They concluded that most of the bizarre and depressing research findings make perfect sense once you see reasoning as having evolved not to help us find truth but to help us engage in arguments, persuasion and manipulation in the context of discussions with other people.

As they put it, “skilled arguers ..are not after the truth but after arguments supporting their views.” This explains why the confirmation bias is so powerful and so ineradicable. How hard could it be to teach students to look on the other side, to look for evidence against their favored view? Yet it’s very hard, and nobody has yet found a way to do it. It’s hard because the confirmation bias is a built-in feature (of an argumentative mind), not a bug that can be removed (from a platonic mind)..

..In the same way, each individual reasoner is really good at one thing: finding evidence to support the position he or she already holds, usually for intuitive reasons..

..I have tried to make a reasoned case that our moral capacities are best described from an intuitionist perspective. I do not claim to have examined the question from all sides, nor to have offered irrefutable proof.

Because of the insurmountable power of the confirmation bias, counterarguments will have to be produced by those who disagree with me.

Haidt also highlights some research showing that more intelligence and education makes you better at generating more arguments for your side of the argument, but not for finding reasons on the other side. “Smart people make really good lawyers and press secretaries.. people invest their IQ in buttressing their own case rather than in exploring the entire issue more fully and evenhandedly.”

The whole book is very readable and full of studies and explanations.

If you fancy a bucket of ice cold water thrown over the rationalist delusion then this is a good way to get it.

I probably should have started a separate series on rainfall and then woven the results back into the Impacts series. It might take a few articles working through the underlying physics and how models and observations of current and past climate compare before being able to consider impacts.

There are a number of different ways to look at rainfall models and reality:

  • What underlying physics provides definite constraints regardless of individual models, groups of models or parameterizations?
  • How well do models represent the geographical distribution of rain over a climatological period like 30 years? (e.g. figure 8 in Impacts XI – Rainfall 1)
  • How well do models represent the time series changes of rainfall?
  • How well do models represent the land vs ocean? (when we think about impacts, rainfall over land is what we care about)
  • How well do models represent the distribution of rainfall and the changing distribution of rainfall, from lightest to heaviest?

In this article I thought I would highlight a set of conclusions from one paper among many. It’s a good starting point. The paper is A canonical response of precipitation characteristics to global warming from CMIP5 models by Lau and his colleagues, and is freely available, and as always I recommend people read the whole paper, along with the supporting information that is also available via the link.

As an introduction, the underlying physics perhaps provides some constraints. This is strongly believed in the modeling community. The constraint is a simple one – if we warm the ocean by 1K (= 1ºC) then the amount of water vapor above the ocean surface increases by about 7%. So we expect a warmer world to have more water vapor – at least in the boundary layer (typically 1km) and over the ocean. If we have more water vapor then we expect more rainfall. But GCMs and also simple models suggest a lower value, like 2-3% per K, not 7%/K. We will come back to why in another article.
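As a rough check on the ~7%/K figure, the Clausius-Clapeyron relation with a constant latent heat (a standard textbook simplification) gives the fractional change in saturation vapor pressure per kelvin:

```python
# Clausius-Clapeyron scaling: d(ln e_s)/dT = L / (R_v * T^2)
L_v = 2.5e6    # latent heat of vaporization of water, J/kg
R_v = 461.5    # specific gas constant for water vapor, J/(kg K)

def cc_fractional_change(T):
    """Fractional increase in saturation vapor pressure per kelvin at temperature T."""
    return L_v / (R_v * T**2)

print(round(cc_fractional_change(288) * 100, 1))  # 6.5 (%/K at ~15C surface temperature)
```

So roughly 6–7%/K near typical surface temperatures, consistent with the figure quoted above – while modeled precipitation increases at only 2-3%/K, which is the puzzle this paragraph flags.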

It also seems from models that with global warming, rainfall increases more in regions and times of already high rainfall and reduces in regions and times of low rainfall – “the wet get wetter and the dry get drier”. (Also a marketing lesson: a catchy slogan ensures better progress of an idea.) So we also expect changes in the distribution of rainfall. One reason for this is a change in the tropical circulation. All to be covered later, so onto the paper..

We analyze the outputs of 14 CMIP5 models based on a 140 year experiment with a prescribed 1% per year increase in CO2 emission. This rate of CO2 increase is comparable to that prescribed for the RCP8.5, a relatively conservative business-as-usual scenario, except the latter includes also changes in other GHG and aerosols, besides CO2.

A 27-year period at the beginning of the integration is used as the control to compute rainfall and temperature statistics, and to compare with climatology (1979–2005) of rainfall data from the Global Precipitation Climatology Project (GPCP). Two similar 27-year periods in the experiment that correspond approximately to a doubling of CO2 emissions (DCO2) and a tripling of CO2 emissions (TCO2) compared to the control are chosen respectively to compute the same statistics..

Just a note that I disagree with the claim that RCP8.5 is a “relatively conservative business as usual scenario” (see Impacts – II – GHG Emissions Projections: SRES and RCP), but that’s just an opinion, as are all views about where the world will be in population, GDP and cumulative emissions 100-150 years from now. It doesn’t detract from the rainfall analysis in the paper.

For people wondering “what is CMIP5?” – this is the model inter-comparison project for the most recent IPCC report (AR5) where many models have to address the same questions so they can be compared.

Here we see (and along with other graphs you can click to enlarge) what the models show in temperature (top left), mean global rainfall (top right), zonal rainfall anomaly by latitude (bottom left) and the control vs the tripled-CO2 comparison (bottom right). In the first three graphs each colored line is one model, while the black line is the mean of the models (“ensemble mean”). The bottom right graph helps put the changes shown in the bottom left into perspective – the difference between the red and the blue lines is the difference between tripling CO2 and today:

From Lau et al 2013

Figure 1 – Click to enlarge

In the figure above, the bottom left graph shows anomalies. We see one of the characteristics of models as a result of more GHGs – wetter tropics and drier sub-tropics, along with wetter conditions at higher latitudes.

From the supplementary material, below we see a better regional breakdown of fig 1d (bottom right in the figure above). I’ll highlight the bottom left graph (c) for the African region. Over the continent, the differences between present day and tripling CO2 seem minor as far as model predictions go for mean rainfall:

From Lau et al 2013

Figure 2 – Click to enlarge

The supplementary material also has a comparison between models and observations. The first graph below is what we are looking at (the second graph we will consider afterwards). TRMM (Tropical Rainfall Measuring Mission) is satellite data and GPCP is the rainfall climatology that we met in the last article – so both are observational datasets. We see that the models over-estimate tropical rainfall, especially south of the equator:

From Lau et al 2013

Figure 3 – Click to enlarge

Rainfall Distribution from Light through to Heavy Rain

Lau and his colleagues then look at rainfall distribution in terms of light rainfall through to heavier rainfall. So, take global rainfall and divide it into frequency of occurrence, with light rainfall to the left and heavy rainfall to the right. Take a look back at the bottom graph in the figure above (figure 3, their figure S1). Note that the horizontal axis is logarithmic, with a ratio of over 1000 from left to right.

It isn’t an immediately intuitive graph. Basically there are two sets of graphs. The left “cluster” is how often that rainfall amount occurred, and the black line is GPCP observations. The “right cluster” is how much rainfall fell (as a percentage of total rainfall) for that rainfall amount and again black is observations.

So light rainfall, 1mm/day and below, occurs about 50% of the time, but being light rainfall it accounts for less than 10% of total rainfall.

To facilitate discussion regarding rainfall characteristics in this work, we define, based on the ensemble model PDF, three major rain types: light rain (LR), moderate rain (MR), and heavy rain (HR) respectively as those with monthly mean rain rate below the 20th percentile (<0.3 mm/day), between the 40th–70th percentile (0.9–2.4mm/day), and above the 98.5th percentile (>9mm/day). An extremely heavy rain (EHR) type defined at the 99.9th percentile (>24 mm/day) will also be referred to, as appropriate.
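The paper’s rain types are defined purely by percentiles of the rain-rate distribution, so the same categorization can be applied mechanically to any sample. Here is an illustrative sketch – the gamma-distributed rates are synthetic stand-ins for model output, not the CMIP5 data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic monthly-mean rain rates (mm/day), heavily skewed like real rainfall
rates = rng.gamma(shape=0.5, scale=4.0, size=100_000)

light_max = np.percentile(rates, 20)        # LR: below the 20th percentile
moderate = np.percentile(rates, [40, 70])   # MR: between the 40th and 70th percentiles
heavy_min = np.percentile(rates, 98.5)      # HR: above the 98.5th percentile
extreme_min = np.percentile(rates, 99.9)    # EHR: above the 99.9th percentile

# Fraction of total rainfall delivered by the heaviest 1.5% of months
heavy_share = rates[rates > heavy_min].sum() / rates.sum()
```

The point of the percentile definitions is that they adapt to each model’s own distribution, so “heavy rain” means the same relative thing across models with different absolute biases.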

Here is a geographical breakdown of the total and then the rainfall in these three categories, model mean on the left and observations on the right:

From Lau et al 2013

Figure 4 – Click to enlarge

We can see that the models tend to overestimate the heavy rain and underestimate the light rain. These graphics are excellent because they help us to see the geographical distribution.

Now in the graphs below we see at the top the changes in frequency of mean precipitation (60S-60N) as a function of rain rate; and at the bottom we see the % change in rainfall per K of temperature change, again as a function of rain rate. Note that the bottom graph also has a logarithmic scale for the % change, so as you move up each grid square the value is doubled.

The different models are also helpfully indicated so the spread can be seen:

From Lau et al 2013

Figure 5 – Click to enlarge

Notice that the models are all predicting quite a high % change in rainfall per K for the heaviest rain – something around 50%. In contrast the light rainfall is expected to be up a few % per K and the medium rainfall is expected to be down a few % per K.

Globally, rainfall increases by 4.5%, with a sensitivity (dP/P/dT) of 1.4% per K

Here is a table from their supplementary material with a zonal breakdown of changes in mean rainfall (so not divided into heavy, light etc). For the non-maths people the first row, dP/P is just the % change in precipitation (“d” in front of a variable means “change in that variable”), the second row is change in temperature and the third row is the % change in rainfall per K (or ºC) of warming from GHGs:

From Lau et al 2013

Figure 6 – Click to enlarge
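The rows of the table are related by simple division – the sensitivity is the rainfall change divided by the warming. Using the quoted global figures (the 3.2K warming below is inferred from the quoted 4.5% and 1.4%/K, so treat it as illustrative):

```python
def sensitivity(dP_over_P_percent, dT_kelvin):
    """Hydrological sensitivity: % change in rainfall per kelvin of warming."""
    return dP_over_P_percent / dT_kelvin

# Global figures quoted above: rainfall up 4.5% with ~3.2K of warming (inferred)
print(round(sensitivity(4.5, 3.2), 1))  # 1.4  (%/K, matching the quoted value)
```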

Here are the projected geographical distributions of the changes in mean (top left), heavy (top right), medium (bottom left) and light rain (bottom right) – using their earlier definitions – under tripling CO2:

From Lau et al 2013

Figure 7 – Click to enlarge

And as a result of these projections, the authors also show the number of dry months and the projected changes in number of dry months:

From Lau et al 2013

Figure 8 – Click to enlarge

The authors conclude:

The IPCC CMIP5 models project a robust, canonical global response of rainfall characteristics to CO2 warming, featuring an increase in heavy rain, a reduction in moderate rain, and an increase in light rain occurrence and amount globally.

For a scenario of 1% CO2 increase per year, the model ensemble mean projects at the time of approximately tripling of the CO2 emissions, the probability of occurring of extremely heavy rain (monthly mean >24mm/day) will increase globally by 100%–250%, moderate rain will decrease by 5%–10% and light rain will increase by 10%–15%.

The increase in heavy rain is most pronounced in the equatorial central Pacific and the Asian monsoon regions. Moderate rain is reduced over extensive oceanic regions in the subtropics and extratropics, but increased over the extratropical land regions of North America, and Eurasia, and extratropical Southern Oceans. Light rain is mostly found to be inversely related to moderate rain locally, and with heavy rain in the central Pacific.

The model ensemble also projects a significant global increase up to 16% more frequent in the occurrences of dry months (drought conditions), mostly over the subtropics as well as marginal convective zone in equatorial land regions, reflecting an expansion of the desert and arid zones..

 

..Hence, the canonical global rainfall response to CO2 warming captured in the CMIP5 model projection suggests a global scale readjustment involving changes in circulation and rainfall characteristics, including possible teleconnection of extremely heavy rain and droughts separated by far distances. This adjustment is strongly constrained geographically by climatological rainfall pattern, and most likely by the GHG warming induced sea surface temperature anomalies with unstable moister and warmer regions in the deep tropics getting more heavy rain, at the expense of nearby marginal convective zones in the tropics and stable dry zones in the subtropics.

Our results are generally consistent with so-called “the rich-getting-richer, poor-getting-poorer” paradigm for precipitation response under global warming..

Conclusion

This article has basically presented the results of one paper, which demonstrates consistency in model response of rainfall to doubling and tripling of CO2 in the atmosphere. In subsequent articles we will look at the underlying physics constraints, at time-series over recent decades and try to make some kind of assessment.

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

Impacts XI – Rainfall 1

References

A canonical response of precipitation characteristics to global warming from CMIP5 models, William K.-M. Lau, H.-T. Wu, & K.-M. Kim, GRL (2013) – free paper

Further Reading

Here are a bunch of papers that I found useful for readers who want to dig into the subject. Most of them are available for free via Google Scholar, but one of the most helpful to me (first in the list) was Allen & Ingram 2002 and the only way I could access it was to pay $4 to rent it for a couple of days.

Allen MR, Ingram WJ (2002) Constraints on future changes in climate and the hydrologic cycle. Nature 419:224–232

Allan RP (2006) Variability in clear-sky longwave radiative cooling of the atmosphere. J Geophys Res 111:D22, 105

Allan, R. P., B. J. Soden, V. O. John, W. Ingram, and P. Good (2010), Current changes in tropical precipitation, Environ. Res. Lett., doi:10.1088/1748-9326/5/52/025205

Physically Consistent Responses of the Global Atmospheric Hydrological Cycle in Models and Observations, Richard P. Allan et al, Surv Geophys (2014)

Held IM, Soden BJ (2006) Robust responses of the hydrological cycle to global warming. J Clim 19:5686–5699

Changes in temperature and precipitation extremes in the CMIP5 ensemble, VV Kharin et al, Climatic Change (2013)

Energetic Constraints on Precipitation Under Climate Change, Paul A. O’Gorman et al, Surv Geophys (2012) 33:585–608

Trenberth, K. E. (2011), Changes in precipitation with climate change, Clim. Res., 47, 123–138, doi:10.3354/cr00953

Zahn M, Allan RP (2011) Changes in water vapor transports of the ascending branch of the tropical circulation. J Geophys Res 116:D18111

If we want to assess forecasts of floods, droughts and crop yields then we will need to know rainfall. We will also need to know temperature of course.

The forte of climate models is temperature. Rainfall is more problematic.

Before we get to model predictions about the future we need to review observations and the ability of models to reproduce them. Observations are also problematic – rainfall varies locally and over short durations. And historically we lacked effective observation systems in many locations and regions of the world, so data has to be pieced together and estimated from reanalysis.

Smith and his colleagues created a new rainfall dataset. Here is a comment from their 2012 paper:

Although many land regions have long precipitation records from gauges, there are spatial gaps in the sampling for undeveloped regions, areas with low populations, and over oceans. Since 1979 satellite data have been used to fill in those sampling gaps. Over longer periods gaps can only be filled using reconstructions or reanalyses..

Here are two views of the global precipitation data from a dataset which starts with the satellite era, that is, 1979 onwards – GPCP (Global Precipitation Climatology Project):

From Adler et al 2003

Figure 1

From Adler et al 2003

Figure 2

For historical data before satellites we only have rain gauge data. The GPCC dataset, explained in Becker et al 2013, shows the number of stations over time by region:

From Becker et al 2013

Figure 3 – Click to expand

And the geographical distribution of rain gauge stations at different times:

From Becker et al 2013

Figure 4 – Click to expand

The IPCC compared the global trends over land from four different datasets over the last century and the last half-century:

From IPCC AR5 Ch. 2

Figure 5 – Click to expand

And the regional trends:

From IPCC AR5 Ch. 2

Figure 6 – Click to expand

The graphs below show the annual change in rainfall; note the different scales for each region (as we would expect, given the differences in average rainfall between regions):

From IPCC AR5 ch 2

Figure 7

We see that the decadal or half-decadal variation is much greater than any apparent long-term trend. The trend data (as reviewed by the IPCC in figs 5 & 6) shows significant differences between the datasets, but when we compare the time series themselves the datasets appear to match up better than the trend comparisons indicate.

The data with the best historical coverage is for 30ºN – 60ºN, where the trend values for 1951-2000 (from different reconstructions) range from an increase of 1 to 1.5 mm/yr per decade (fig 6 / table 2.10 of the IPCC report). This is against an absolute value of about 1000 mm/yr in this region (reading off the climatology in figure 2).

This is just me trying to put the trend data in perspective.
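Spelling out that arithmetic: a trend of 1–1.5 mm/yr per decade on a base of roughly 1000 mm/yr is about 0.1–0.15% per decade, or 0.5–0.75% over the 50-year period:

```python
# Putting the 30N-60N trend in perspective (numbers from the text above).
base = 1000.0                      # mean rainfall, mm/yr
trend_low, trend_high = 1.0, 1.5   # trend range, mm/yr per decade
decades = 5                        # 1951-2000

pct_per_decade = (trend_low / base * 100, trend_high / base * 100)
pct_total = (trend_low * decades / base * 100, trend_high * decades / base * 100)
print(pct_per_decade)
print(pct_total)
```

A fraction of a percent over half a century – small compared with the decadal swings visible in figure 7.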

Models

Here is the IPCC AR5 chapter 9 comparison of models against satellite-era rainfall observations. Top left is observations (essentially the same dataset as figure 1 in this article, over a slightly longer period and with different colors) and bottom right is the percentage error of the model average with respect to observations:

From IPCC AR5 ch 9

Figure 8 – Click to expand

We can see that the average of all models has substantial errors on mean rainfall.
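The percentage-error panel is conceptually straightforward: average the model fields, then compute (model mean − observations) / observations × 100 at each grid point. A toy version with hypothetical values:

```python
import numpy as np

# Hypothetical annual-mean rainfall (mm/day) at four grid points.
obs = np.array([3.0, 5.0, 1.0, 2.0])

# Three hypothetical model fields; average across models first.
models = np.array([
    [2.5, 5.5, 1.5, 2.0],
    [3.5, 6.0, 1.0, 1.5],
    [3.0, 5.0, 2.0, 2.5],
])
model_mean = models.mean(axis=0)

# Percentage error of the multi-model mean relative to observations.
pct_error = (model_mean - obs) / obs * 100.0
print(np.round(pct_error, 1))
```

Note how the relative error blows up where observed rainfall is small (the third grid point) – one reason percentage-error maps of model rainfall look worst in dry regions.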

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

References

IPCC AR5 Chapter 2

Improved Reconstruction of Global Precipitation since 1900, Smith, Arkin, Ren & Shen, Journal of Atmospheric and Oceanic Technology (2012)

The Version-2 Global Precipitation Climatology Project (GPCP) Monthly Precipitation Analysis (1979–Present), Adler et al, Journal of Hydrometeorology (2003)

A description of the global land-surface precipitation data products of the Global Precipitation Climatology Centre with sample applications including centennial (trend) analysis from 1901–present, A Becker, Earth Syst. Sci. Data (2013)