At least 99.9% of physicists believe the theory of gravity, and the heliocentric model of the solar system. The debate is over. There is no doubt that we can send a manned (and woman-ed) mission to Mars.

Some “skeptics” say it can’t be done. They are denying basic science! Gravity is plainly true. So is the heliocentric model. Everyone agrees. There is an overwhelming consensus. So the time for discussion is over. There is no doubt about the Mars mission.

I offer this analogy (note 1) for people who don’t understand the relationship between five completely different ideas:

  • the “greenhouse” effect
  • burning fossil fuels adds CO2 to the atmosphere, increasing the “greenhouse” effect
  • climate models
  • crop models
  • economic models

The first two items on the list are fundamental physics and chemistry and, while advanced to prove (see The “Greenhouse” Effect Explained in Simple Terms for the first one, for those who want to work through a proof), they are indisputable. Together they create the theory of AGW (anthropogenic global warming): that humans are contributing to global warming by burning fossil fuels.

99.9% of people who understand atmospheric physics believe this unassailable idea (note 2).

This means that if we continue with “business as usual” (note 3) and keep using fossil fuels to generate energy, then by 2100 the world will be warmer than today.

How much warmer?

For that we need climate models.

Climate Models

These are models which break the earth’s surface, ocean and atmosphere into a big grid so that we can use physics equations (momentum, heat transfer and others) to calculate future climate (this class of model is called finite element analysis). These models include giant fudge-factors that can’t be validated (by giant fudge factors I mean “sub-grid parameterizations” and unknown parameters, but I’m writing this article for a non-technical audience).
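To give a flavor of the grid-plus-physics-equations idea (without any of a real model’s complexity), here is a minimal sketch: one-dimensional heat diffusion on a handful of grid cells, stepped forward with a finite-difference scheme. Everything here is an illustrative assumption, nothing like actual climate model code.

```python
# Toy sketch of "break the domain into a grid and apply physics equations".
# This is 1-D heat diffusion on 10 cells - a real climate model has millions
# of cells and many coupled equations (momentum, heat, moisture).

def step(temps, alpha=0.1):
    """One explicit finite-difference time step of the diffusion equation."""
    new = temps[:]
    for i in range(1, len(temps) - 1):
        # Each interior cell relaxes toward the average of its neighbours
        new[i] = temps[i] + alpha * (temps[i-1] - 2*temps[i] + temps[i+1])
    return new

# A bar held at 100 degrees at one end and 0 at the other (fixed boundaries)
cells = [100.0] + [0.0] * 9
for _ in range(200):
    cells = step(cells)

# After many steps the interior settles toward a smooth gradient
# between the hot end and the cold end.
```

A real model does this kind of thing in three dimensions, and it is for the processes too small for the grid cells (clouds, convection) that the “sub-grid parameterizations” come in.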

One way to validate models is to reproduce the temperature over the last 100 years. Another is to produce a current climatology that matches observations. Generally temperature is the parameter that gets the most attention (note 4).

Some climate models predict that if we double CO2 in the atmosphere (from pre-industrial levels) then surface temperature will be around 4.5ºC warmer. Others predict it will be 1.5ºC warmer. And everything in between.

Surely we can just look at which models reproduced the last 100 years temperature anomaly the best and work with those?

From Mauritsen et al 2012

If the model that predicts 1.5ºC in 2100 is close to the past, while the one that predicts 4.5ºC has a big overshoot, we will know that 1.5ºC is a more likely future. Conversely, if the model that predicts 4.5ºC in 2100 is close to the past but the 1.5ºC model woefully under-predicts the last 100 years of warming then we can expect more like 4.5ºC in 2100.

You would think so, but you would be wrong.

All the models get the last 100 years of temperature changes approximately correct. Jeffrey Kiehl produced a paper 10 years ago which analyzed the then-current class of models and gently pointed out the reason. Models with large future warming included a large negative effect from aerosols over the last 100 years. Models with small future warming included a small negative effect from aerosols over the last 100 years. So both reproduced the past, but with completely different values of aerosol cooling. You might think we could just find out the actual cooling effect of aerosols around 1950 and then we would know which climate model to believe – but we can’t. We didn’t have satellites to measure the cooling effect of aerosols back then.
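Kiehl’s point can be shown with a toy energy-balance calculation. The numbers below are invented for illustration (only the 3.7 W/m² forcing for doubled CO2 is a standard figure): two “models” with different sensitivities reproduce the same historical warming because each pairs its sensitivity with a different aerosol cooling, yet they diverge by a factor of two in the future.

```python
# Toy illustration of Kiehl (2007): warming = sensitivity x net forcing.
# All numbers are assumptions chosen for illustration, not from any model.
F_GHG_HIST = 2.3   # assumed historical greenhouse forcing, W/m^2
F_2XCO2 = 3.7      # standard forcing for doubled CO2, W/m^2

models = {
    # name: (sensitivity in K per W/m^2, assumed historical aerosol forcing)
    "low-sensitivity":  (0.4, -0.40),
    "high-sensitivity": (0.8, -1.35),
}

results = {}
for name, (lam, f_aer) in models.items():
    past = lam * (F_GHG_HIST + f_aer)   # simulated 20th-century warming
    future = lam * F_2XCO2              # warming at doubled CO2
    results[name] = (past, future)
    print(f"{name}: past {past:.2f} K, doubled-CO2 {future:.2f} K")

# Both reproduce ~0.76 K of past warming, but predict 1.48 K vs 2.96 K ahead.
```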

This is the challenge of models with many parameters that we don’t know. When modelers are trying to reproduce the past, or the present, they pick the values of parameters which make the model match reality as best they can. This is a necessary first step (note 5).

So how warm will it be in 2100 if we double CO2 in the atmosphere?

Somewhat warmer

Models also predict rainfall, drought and storms. But they aren’t as good at these as they are at temperature. Bray and von Storch survey climate scientists periodically on a number of topics. Here is their response to:

How would you rate the ability of regional climate models to make 50 year projections of convective rain storms/thunder storms? (1 = very poor to 7 = very good)

Similar ratings are obtained for rainfall predictions. The last 50 years has seen no apparent global worsening of storms, droughts and floods, at least according to the IPCC consensus (see Impacts – V – Climate change is already causing worsening storms, floods and droughts).

Sea level is expected to rise between around 0.3m and 0.6m (see Impacts – VI – Sea Level Rise 1 and IX – Sea Level 4 – Sinking Megacities) – this is from AR5 of the IPCC (under scenario RCP6). I mention this because the few people I’ve polled thought that sea level was expected to be 5-10m higher in 2100.

Actual reports with uneventful projections don’t generate headlines.

Crop Models

Crop models build on climate models. Once we know rainfall, drought and temperature we can work out how this impacts crops.

Will we starve to death? Or will there be plentiful food?

Past predictions of disaster haven’t been very accurate, although they have been wildly popular, generating media headlines and book sales, as Paul Ehrlich found to his benefit. But that doesn’t mean future predictions of disaster are necessarily wrong.

There are a number of problems with trying to answer the question.

Even if climate models could predict the global temperature, when it comes to a region the size of, say, northern California their accuracy is much lower. Likewise for rainfall. Models which produce similar global temperature changes often have completely different regional precipitation changes. For example, from the IPCC Special Report on Extremes (SREX), p. 154:

At regional scales, there is little consensus in GCM projections regarding the sign of future change in monsoon characteristics, such as circulation and rainfall. For instance, while some models project an intense drying of the Sahel under a global warming scenario, others project an intensification of the rains, and some project more frequent extreme events..

In a warmer world with more CO2 (which helps some plants) and maybe more rainfall, or maybe less, what can we expect from crop yields? It’s not clear. From IPCC AR5, wg II, ch 7, p 496:

For example, interactions among CO2 fertilization, temperature, soil nutrients, O3, pests, and weeds are not well understood (Soussana et al., 2010) and therefore most crop models do not include all of these effects.

Of course, as climate changes over the next 80 years agricultural scientists will grow different crops, and develop new ones. In 1900, almost half the US population worked in farming. Today the figure is 2-3%. Agriculture has changed unimaginably.

In the left half of this graph we can see global crop yield improvements over 50 years (the right side is projections to 2050):

From Ray et al 2013

Economic Models

What will the oil price be in 2020? Economic models give you the answer. Well, they give you an answer. And if you consult lots of models they give you lots of different answers. When the oil price changes a lot, which it does from time to time, all of the models turn out to be wrong. Predicting future prices of commodities is very hard, even when it is of paramount concern for major economies, and even when a company could make vast profits from accurate prediction.

AR5 of the IPCC report, wg 2, ch 7, p.512, had this to say about crop prices in 2050:

Changes in temperature and precipitation, without considering effects of CO2, will contribute to increased global food prices by 2050, with estimated increases ranging from 3 to 84% (medium confidence). Projections that include the effects of CO2 changes, but ignore O3 and pest and disease impacts, indicate that global price increases are about as likely as not, with a range of projected impacts from –30% to +45% by 2050..

..One lesson from recent model intercomparison experiments (Nelson et al., 2014) is that the choice of economic model matters at least as much as the climate or crop model for determining price response to climate change, indicating the critical role of economic uncertainties for projecting the magnitude of price impacts.

In 2001, the 3rd report (often called TAR) said, ch 5, p.238, perhaps a little more clearly:

..it should be noted however that hunger estimates are based on the assumptions that food prices will rise with climate change, which is highly uncertain

Economic models are not very good at predicting anything. As Herbert Stein said, summarizing a lifetime in economics:

  • Economists do not know very much
  • Other people, including the politicians who make economic policy, know even less about economics than economists do


Recently a group, Cook et al 2013, reviewed over 10,000 abstracts of climate papers and concluded that 97% believed in the proposition of AGW – the proposition that humans are contributing to global warming by burning fossil fuels. I’m sure if the question were posed the right way directly to thousands of climate scientists, the number would be over 99%.

It’s not in dispute.

AGW is a necessary theory for Catastrophic Anthropogenic Global Warming (CAGW). But not sufficient by itself.

Likewise we know for sure that gravity is real and the planets orbit the sun. But it doesn’t follow that we can get humans safely to Mars and back. Maybe we can. Understanding gravity and the heliocentric theory is a necessary condition for the mission, but a lot more needs to be demonstrated.

The uncertainties in CAGW are huge.

Economic models that have no predictive skill are built on limited crop models which are built on climate models which have a wide range of possible global temperatures and no consensus on regional rainfall.

Human ingenuity somehow solved the problem of going from 2.5bn people in the middle of the 20th century to more than 7bn people today, and yet the proportion of the global population in abject poverty (note 6) has dropped from over 40% to maybe 15%. This was probably unimaginable 70 years ago.

Perhaps reasonable people can question whether climate change is definitely the greatest threat facing humanity?

Perhaps questioning the predictive power of economic models is not denying science?

Perhaps it is ok to be unsure about the predictive power of climate models that contain sub-grid parameterizations (giant fudge factors) and that collectively provide a wide range of forecasts?

Perhaps people who question the predictions aren’t denying basic (or advanced) science, and haven’t lost their reason or their moral compass?


[Note to commenters, added minutes after this post was written – this article is not intended to restart debate over the “greenhouse” effect, please post your comments in one of the 10s (100s?) of articles that have covered that subject, for example – The “Greenhouse” Effect Explained in Simple Terms – Comments on the reality of the “greenhouse” effect posted here will be deleted. Thanks for understanding.]


Twentieth century climate model response and climate sensitivity, Jeffrey Kiehl (2007)

Tuning the climate of a global model, Mauritsen et al (2012)

Yield Trends Are Insufficient to Double Global Crop Production by 2050, Deepak K. Ray et al (2013)

Quantifying the consensus on anthropogenic global warming in the scientific literature, Cook et al, Environmental Research Letters (2013)

The Great Escape, Angus Deaton, Princeton University Press (2013)

The various IPCC reports cited are all available at their website


1. An analogy doesn’t prove anything. It is for illumination.

2. How much we have contributed to the last century’s warming is not clear. The 5th IPCC report (AR5) said it was 95% certain that more than 50% of recent warming was caused by human activity. Well, another chapter in the same report suggested that this was a bogus statistic and I agree, but that doesn’t mean I think that the percentage of warming caused by human activity is lower than 50%. I have no idea. It is difficult to assess, likely impossible. See Natural Variability and Chaos – Three – Attribution & Fingerprints for more.

3. Reports on future climate often come with the statement “under a conservative business as usual scenario” but refer to a speculative and hard to believe scenario called RCP8.5 – see Impacts – II – GHG Emissions Projections: SRES and RCP. I think RCP 6 is much closer to the world of 2100 if we do little about carbon emissions and the world continues on the kind of development pathways that we have seen over the last 60 years. RCP8.5 was a scenario created to match a possible amount of CO2 in the atmosphere and how we might get there. Calling it “a conservative business as usual case” is a value-judgement with no evidence.

4. More specifically the change in temperature gets the most attention. This is called the “temperature anomaly”. Many models that do “well” on temperature anomaly actually do quite badly on the actual surface temperature. See Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes – you can see that many “fit for purpose” models have current climate halfway to the last ice age even though they reproduce the last 100 years of temperature changes pretty well. That is, they model temperature changes quite well, but not temperature itself.

5. This is a reasonable approach used in modeling (not just climate modeling) – the necessary next step is to try to constrain the unknown parameters and giant fudge factors (sub-grid parameterizations). Climate scientists work very hard on this problem. Many confused people writing blogs think that climate modelers just pick the values they like, produce the model results and go have coffee. This is not the case, and can easily be seen by just reviewing lots of papers. The problem is well-understood among climate modelers. But the world is a massive place, detailed past measurements with sufficient accuracy are mostly lacking, and sub-grid parameterizations of non-linear processes are a very difficult challenge (this is one of the reasons why turbulent flow is a mostly unsolved problem).

6. This is a very imprecise term. I refer readers to the 2015 Nobel Prize winner Angus Deaton and his excellent book, The Great Escape (2013) for more.

In a few large companies I observed the same phenomenon – over here are corporate dreams and over there is reality. Team – your job is to move reality over to where corporate dreams are.

It wasn’t worded like that. Anyway, reality won each time. Reality is pretty stubborn. Of course, refusal to accept “reality” is what has created great inventions and companies. It’s not always clear what is reality and what is today’s lack of vision vs tomorrow’s idea that just needs lots of work to make a revolution. So ideas should be challenged to find “reality”. But reality itself is hard to change.

I started checking Carbon Brief via my blog feed a few months back. It has some decent articles, although they are more “reporting press releases or executive summaries” than any critical analysis. But at least they lack hysterical headlines, and good vs evil doesn’t even appear in the subtext, which is refreshing. I’ve been too busy with other projects recently to devote any time to writing about climate science or impacts, but their article today – In-depth: How a smart flexible grid could save the UK £40bn – did inspire me to read one of the actual reports referenced. Part of the reason my interest was piqued was that I’ve seen many articles where “inflexible baseload” is compared with “smart decentralized grids” and “flexible systems”. All lovely words, which must mean they are better ways to create an electricity grid. A company I used to work for created a few products with “smart” in the name. All good marketing. But what about reality? Let’s have a look.

The report in question is An analysis of electricity system flexibility for Great Britain, from November 2016, by Carbon Trust. The UK government has written into legislation a commitment to reduce carbon emissions to almost nothing by 2050, and so they need to get to work.

What is fascinating reading the report is that all of the points I made in previous articles in this series show up, but dressed up in a very positive way:

We’re choosing between all these great options on the best way to save money

For those who like a short story, I’ll rewrite that summary:

We’re choosing between all these expensive options trying to understand which one (or what mix) will be the least expensive. Unfortunately we don’t know but we need to start now because we’ve already committed to this huge carbon reduction by 2050. If we make a good pick then we’ll spend the least amount of money, but if we get it wrong we will be left with lots of negative outcomes and high costs for a long time

Well, when you pay for the report you should be allowed to get the window dressing that you like. That’s a minimum.

The imponderables are that wind power is intermittent (and there’s not much solar at high latitudes), so you face some difficult choices.

I’ll just again repeat something I’ve said a few times in this series. I’m not trying to knock renewable energy or decarbonizing energy. But solving a problem requires understanding the scale of the problem and especially the hardest challenges – before you start on the main project.

As a digression, there is a lovely irony about the use of the words “flexible” for renewable energy vs “inflexible” for conventional energy. Planning conventional energy grids is pretty easy – you can be very flexible because a) you have dispatchable power, and b) you can stick the next power station right next to the new demand as and when it appears. So the current system is incredibly flexible and you don’t need to be much of a crystal ball gazer. That said, it’s just my appreciation of irony and how I can’t help enjoying the excitement other people have in taking up inspirational words for ideas they like.. anyway, it has zero bearing on the difficult questions at hand.

As the article from Carbon Brief said, there’s £40bn of savings to be had. Here is the report:

The modelling for the analysis has shown that the deployment of flexibility technologies could save the UK energy system £17-40 billion cumulative to 2050 against a counterfactual where flexibility technologies are not available

Ok, so it’s not £40bn of savings. The modeling says getting it wrong will cost £17-40bn more than picking better options. Or, if the technologies don’t appear, it will be that much more expensive.

What are these “flexible grid technologies”?

Demand Management

The first one is the effectively untested idea of demand management (see XVIII – Demand Management & Levelized Cost), which allows the grid operator to shift people’s demand to when supply is available. (Remember that the biggest current challenge of an electricity grid is that second by second and minute by minute the grid operators have to match supply with demand – a big challenge, but one that has been conquered with dispatchable power and a variety of mechanisms for the different timescales.) I say untested because only small-scale trials have been done, with very mixed results, and some large-scale trials are needed. They will be expensive. As the report says:

Demand side response has a key role in providing flexibility but also has the greatest uncertainty in terms of cost and uptake

However, with a big enough stick you get the result you want. The question is how palatable that is to voters and what kind of stomach politicians have for voter unrest. For example, increase the cost of electricity to £100/kWhr when little is available. Once you hear that a few friends received a £10,000 bill that they can’t get out of and are being taken to court you will be running around the house turning everything off and paying close attention to the tariff changes. When the tariff soars, you are all sitting in your house in your winter coats (perhaps with a small bootleg butane heater) with the internet off, the TV off, the lights off and singing entertaining songs about your favorite politicians.

I present this not in parody, but just to demonstrate that it is completely possible to get demand management to work. Just need a strong group of principled politicians with the courage of their convictions and no fear of voters.. (yes, that last bit was parody, if you are a politician you have to be afraid of voters, it’s the job requirement).

So the challenge isn’t “the technology”, it’s the cost of rolling out the technology and how inflexible consumers are with their demand preferences. What is the elasticity of demand? What results will you get? And the timescale matters. If you need people to delay using energy by one hour, you get one result. If you need people to delay using energy by two days, you get a completely different result. There is no data on this.
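One way to frame that uncertainty is a constant-elasticity demand curve, Q = Q₀·(P/P₀)^ε. The elasticity values below are illustrative assumptions – the real short-run values, especially for multi-day events, are exactly the missing data.

```python
# A hedged sketch: constant-elasticity demand, Q = Q0 * (P/P0)**epsilon.
# The elasticity values below are illustrative assumptions, not measurements.

def demand(price_ratio, elasticity, q0=1.0):
    """Demand as a fraction of normal, given a price multiple and elasticity."""
    return q0 * price_ratio ** elasticity

for eps in (-0.1, -0.3, -0.5):
    q = demand(10.0, eps)  # price increased tenfold
    print(f"elasticity {eps}: demand falls to {q:.0%} of normal")

# Very inelastic demand (-0.1): a 10x price rise only cuts usage ~21%.
# At -0.5 the same price rise cuts usage ~68%. Which number is closer to
# reality, and over what timescale, is precisely what we don't know.
```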

Pick a few large cities, design the experiments, implement the technology and use it to test different time horizons in different weather over a two year period and see how well it works. This is an urgent task that a few countries should have seriously started years ago. Data is needed.


Storage

Table 26 in the appendices has some storage costs, which for bulk storage “Includes a basket of technologies such as pumped hydro and compressed air energy storage” and is costed in £/kW – with a range of about £700 – 1,700/kW ($900 – 2,200/kW). This is for a 12-hour duration – a typical daily cycle. These costs increase somewhat over the time period in question (to 2050), as you might expect.

Distributed storage, “Based on a basket of lithium ion battery technologies”, ranges from £900 – 1,300/kW today, falling to £400 – 900/kW by 2050. This is for a 2-hour duration (and a 5-year lifetime), meaning that the cost per unit of energy stored is £450 – 650/kWhr today, falling to £200 – 450/kWhr by 2050. So they don’t have super-optimistic cost reductions for storage.
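For anyone checking the arithmetic, converting the report’s £/kW figures into cost per kWhr of stored energy is just a division by the discharge duration:

```python
# Converting storage costs quoted in £/kW into £ per kWhr of storage capacity:
# divide by the discharge duration in hours.

def cost_per_kwhr(cost_per_kw, duration_hours):
    return cost_per_kw / duration_hours

# Distributed (battery) storage, 2-hour duration, figures as quoted above
print(cost_per_kwhr(900, 2), cost_per_kwhr(1300, 2))   # today: 450.0 650.0
print(cost_per_kwhr(400, 2), cost_per_kwhr(900, 2))    # 2050:  200.0 450.0
```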

The storage calculations under various scenarios range from 10-20GW with a couple of outliers (5GW and 28GW).

My back-of-the-envelope calculation says that if you can’t expand pumped hydro, don’t build your gas plants, and do need to rely on batteries, then for a 2-day wind hiatus with no demand management you would spend “quite a bit”. This is based on the expected energy use (below) of about 60GW, i.e. 2,880 GWhr over 48 hours. Converting to kWhr we get 2,880 × 10⁶, and multiplying by a cost of £300/kWhr gives £864bn every 5 years, or about £170bn per year. UK GDP is about £2,000bn per year at the moment. This gives an idea of the cost of batteries when you want to back up power for a period of days.
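The same back-of-the-envelope calculation, written out so the units can be checked (all inputs are the assumptions from the text):

```python
# Battery backup for a 2-day wind hiatus - the rough estimate from the text.
avg_demand_gw = 60      # assumed average 2050 demand
hiatus_hours = 48       # a 2-day wind lull
cost_per_kwhr = 300     # mid-range battery cost, £/kWhr
battery_life_years = 5  # batteries replaced roughly every 5 years

energy_gwhr = avg_demand_gw * hiatus_hours        # 2,880 GWhr
energy_kwhr = energy_gwhr * 1e6                   # 2.88 x 10^9 kWhr
capital_bn = energy_kwhr * cost_per_kwhr / 1e9    # capital cost, £ billions
annual_bn = capital_bn / battery_life_years       # annualized, £bn per year

print(f"capital £{capital_bn:.0f}bn, annualized £{annual_bn:.0f}bn/yr")
# capital £864bn, annualized £173bn/yr - vs UK GDP of about £2,000bn/yr
```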

Backup Plants

The backup gas plants show as around 20GW of CCGT and somewhere between 30-90GW of peaking plants added by 2050 (depending on the scenario). This makes sense. You need something less expensive than storage. It appears the constraint is the requirement to cut emissions so much that even running these plants as backup for low wind / no wind is a problem.

Expected Energy Use

The consumed electricity for 2020 is given (in the appendix) as 320-340 TWhr. Dividing by the number of hours in the year (8,760) gives an average output of 36-39 GW, which seems about right (recent figures from memory were about 30GW for the UK on average).

In 2050 the estimate is for 410-610 TWhr or an average of 47-70GW. This includes electric vehicles and heating – that is, all energy is coming from the grid – so on the surface it seems too low (current electricity usage is about 40% of total energy). Still, I’ve never tried to calculate it and they probably have some assumptions (not in this report) on improved energy efficiency.
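Both averages can be checked with the same one-liner – annual TWhr divided by the hours in a year:

```python
# Average grid output implied by annual consumption: TWhr/yr -> GW.
HOURS_PER_YEAR = 8760

def avg_gw(twhr_per_year):
    return twhr_per_year * 1000 / HOURS_PER_YEAR   # 1 TWhr = 1,000 GWhr

print(f"2020: {avg_gw(320):.1f}-{avg_gw(340):.1f} GW")   # ~36.5-38.8 GW
print(f"2050: {avg_gw(410):.1f}-{avg_gw(610):.1f} GW")   # ~46.8-69.6 GW
```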

Cost of Electricity in 2050 under These Various Scenarios



The key challenges for large-scale reductions in CO2 emissions haven’t changed. It is important to try to identify which future cost scenarios vs current plans will result in the most pain, but it’s clear that the data needed to chart the right course is largely unknown. Luckily, report summaries can put some nice window-dressing on the problems.

As always with reports for public consumption the executive summary and the press release are best avoided. The chapters themselves and especially the appendices give some data that can be evaluated.

It’s clear that large-scale interconnectors across the country are needed to deliver power from places where high wind exists (e.g. west coast of Scotland) to demand locations (e.g. London). But it’s not clear that inter-connecting to Europe will solve many problems, because most of northern and central Europe will likewise be looking for power when their wind output is low on a cold winter evening. Perhaps inter-connecting to further locations, as reviewed in XII – Windpower as Baseload and SuperGrids, is an option, although this wasn’t reviewed in the paper.

It wasn’t clear to me from the report whether gas plants alone – without storage, demand management, or importing large quantities of European electricity – would solve the problem, were it not for the aggressive CO2 reduction targets. It sort of hinted that the constraint of CO2 emissions forced the gas plants into less and less backup use, even though their available capacity was still very high in 2050. Wind turbines plus interconnectors around the country plus gas plants are simple and relatively quantifiable (current gas plants aren’t really optimized for this kind of backup, but it doesn’t take a crystal ball to make an intelligent estimate).

The cost of electricity in 2050 for these scenarios wasn’t given in this report.

Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases

Renewables VI – Report says.. 100% Renewables by 2030 or 2050

Renewables VII – Feasibility and Reality – Geothermal example

Renewables VIII – Transmission Costs And Outsourcing Renewable Generation

Renewables IX – Onshore Wind Costs

Renewables X – Nationalism vs Inter-Nationalism

Renewables XI – Cost of Gas Plants vs Wind Farms

Renewables XII – Windpower as Baseload and SuperGrids

Renewables XIII – One of Wind’s Hidden Costs

Renewables XIV – Minimized Cost of 99.9% Renewable Study

Renewables XV – Offshore Wind Costs

Renewables XVI – JP Morgan advises

Renewables XVII – Demand Management 1

Renewables XVIII – Demand Management & Levelized Cost

Renewables XIX – Behind the Executive Summary and Reality vs Dreams

A long time ago I wrote The Confirmation Bias – Or Why None of Us are Really Skeptics, with a small insight from Nassim Taleb. Right now I’m rereading The Righteous Mind: Why Good People are Divided by Politics and Religion by Jonathan Haidt.

This is truly a great book if you want to understand more about how we think and how we delude ourselves. Through experiments cognitive psychologists demonstrate that once our “moral machinery” has clicked in, which happens very easily, our reasoning is just an after-the-fact rationalization of what we already believe.

Haidt gives the analogy of a rider on an elephant. The elephant starts going one way rather than another, and the rider, unaware of why, starts coming up with invented reasons for the new direction. It’s like the rider is the PR guy for the elephant. In Haidt’s analogy, the rider is our reasoning, and the elephant is our moral machinery. The elephant is in charge. The rider thinks he is.

As an intuitionist, I’d say that the worship of reason is itself an illustration of one of the most long-lived delusions in Western history: the rationalist delusion..

..The French cognitive scientists Hugo Mercier and Dan Sperber recently reviewed the vast research literature on motivated reasoning (in social psychology) and on the biases and errors of reasoning (in cognitive psychology). They concluded that most of the bizarre and depressing research findings make perfect sense once you see reasoning as having evolved not to help us find truth but to help us engage in arguments, persuasion and manipulation in the context of discussions with other people.

As they put it, “skilled arguers ..are not after the truth but after arguments supporting their views.” This explains why the confirmation bias is so powerful and so ineradicable. How hard could it be to teach students to look on the other side, to look for evidence against their favored view? Yet it’s very hard, and nobody has yet found a way to do it. It’s hard because the confirmation bias is a built-in feature (of an argumentative mind), not a bug that can be removed (from a platonic mind)..

..In the same way, each individual reasoner is really good at one thing: finding evidence to support the position he or she already holds, usually for intuitive reasons..

..I have tried to make a reasoned case that our moral capacities are best described from an intuitionist perspective. I do not claim to have examined the question from all sides, nor to have offered irrefutable proof.

Because of the insurmountable power of the confirmation bias, counterarguments will have to be produced by those who disagree with me.

Haidt also highlights some research showing that more intelligence and education makes you better at generating more arguments for your side of the argument, but not for finding reasons on the other side. “Smart people make really good lawyers and press secretaries.. people invest their IQ in buttressing their own case rather than in exploring the entire issue more fully and evenhandedly.”

The whole book is very readable and full of studies and explanations.

If you fancy a bucket of ice cold water thrown over the rationalist delusion then this is a good way to get it.

I probably should have started a separate series on rainfall and then woven the results back into the Impacts series. It might take a few articles working through the underlying physics and how models and observations of current and past climate compare before being able to consider impacts.

There are a number of different ways to look at rainfall models and reality:

  • What underlying physics provides definite constraints regardless of individual models, groups of models or parameterizations?
  • How well do models represent the geographical distribution of rain over a climatological period like 30 years? (e.g. figure 8 in Impacts XI – Rainfall 1)
  • How well do models represent the time series changes of rainfall?
  • How well do models represent the land vs ocean? (when we think about impacts, rainfall over land is what we care about)
  • How well do models represent the distribution of rainfall and the changing distribution of rainfall, from lightest to heaviest?

In this article I thought I would highlight a set of conclusions from one paper among many. It’s a good starting point. The paper is A canonical response of precipitation characteristics to global warming from CMIP5 models by Lau and his colleagues. It is freely available and, as always, I recommend people read the whole paper, along with the supporting information that is also available via the link.

As an introduction, the underlying physics perhaps provides some constraints. This is strongly believed in the modeling community. The constraint is a simple one – if we warm the ocean by 1K (= 1ºC) then the amount of water vapor above the ocean surface increases by about 7%. So we expect a warmer world to have more water vapor – at least in the boundary layer (typically 1km) and over the ocean. If we have more water vapor then we expect more rainfall. But GCMs and also simple models suggest a lower value, like 2-3% per K, not 7%/K. We will come back to why in another article.
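The ~7%/K figure comes from the Clausius-Clapeyron relation, d(ln eₛ)/dT = L/(Rᵥ·T²). A quick check with standard physical constants – a sketch of the physics, not a model:

```python
# Clausius-Clapeyron scaling: fractional change of saturation vapour
# pressure per kelvin, d(ln e_s)/dT = L / (R_v * T^2). Standard constants.
L_VAP = 2.5e6   # latent heat of vaporization of water, J/kg
R_V = 461.5     # specific gas constant for water vapour, J/(kg K)

def cc_rate(temp_k):
    """Fractional increase in saturation vapour pressure per kelvin."""
    return L_VAP / (R_V * temp_k ** 2)

print(f"{cc_rate(288):.1%} per K at 288 K")   # ~6.5% per K, i.e. "about 7%"
# Note the rate itself falls slightly as temperature rises.
```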

It also seems from models that with global warming, rainfall increases more in regions and times of already high rainfall, and reduces in regions and times of low rainfall – the “wet get wetter and the dry get drier”. (Also a marketing mantra: a catchy slogan ensures better progress of an idea.) So we also expect changes in the distribution of rainfall. One reason for this is a change in the tropical circulation. All to be covered later, so on to the paper..

We analyze the outputs of 14 CMIP5 models based on a 140 year experiment with a prescribed 1% per year increase in CO2 emission. This rate of CO2 increase is comparable to that prescribed for the RCP8.5, a relatively conservative business-as-usual scenario, except the latter includes also changes in other GHG and aerosols, besides CO2.

A 27-year period at the beginning of the integration is used as the control to compute rainfall and temperature statistics, and to compare with climatology (1979–2005) of rainfall data from the Global Precipitation Climatology Project (GPCP). Two similar 27-year periods in the experiment that correspond approximately to a doubling of CO2 emissions (DCO2) and a tripling of CO2 emissions (TCO2) compared to the control are chosen respectively to compute the same statistics..

Just a note that I disagree with the claim that RCP8.5 is a “relatively conservative business as usual scenario” (see Impacts – II – GHG Emissions Projections: SRES and RCP), but that’s just an opinion, as are all views about where the world will be in population, GDP and cumulative emissions 100-150 years from now. It doesn’t detract from the rainfall analysis in the paper.

For people wondering “what is CMIP5?” – this is the model inter-comparison project for the most recent IPCC report (AR5) where many models have to address the same questions so they can be compared.

Here we see (and as with the other graphs, you can click to enlarge) what the models show in temperature (top left), mean global rainfall (top right), zonal rainfall anomaly by latitude (bottom left) and the control vs the tripled CO2 comparison (bottom right). In the first three graphs each colored line is one model, while the black line is the mean of the models (“ensemble mean”). The bottom right graph helps put the changes shown in the bottom left into perspective – the difference between the red and the blue lines is the difference between tripling CO2 and today:

From Lau et al 2013

Figure 1 – Click to enlarge

In the figure above, the bottom left graph shows anomalies. We see one of the characteristics of models as a result of more GHGs – wetter tropics and drier sub-tropics, along with wetter conditions at higher latitudes.

From the supplementary material, below we see a better regional breakdown of fig 1d (bottom right in the figure above). I’ll highlight the bottom left graph (c) for the African region. Over the continent, the differences between present day and tripling CO2 seem minor as far as model predictions go for mean rainfall:

From Lau et al 2013

Figure 2 – Click to enlarge

The supplementary material also has a comparison between models and observations. The first graph below is what we are looking at (we will consider the second graph afterwards). TRMM (Tropical Rainfall Measuring Mission) is satellite data and GPCP is the rainfall climatology we met in the last article – both are observational datasets. We see that the models over-estimate tropical rainfall, especially south of the equator:

From Lau et al 2013

Figure 3 – Click to enlarge

Rainfall Distribution from Light through to Heavy Rain

Lau and his colleagues then look at rainfall distribution in terms of light rainfall through to heavier rainfall. So, take global rainfall and divide it into frequency of occurrence, with light rainfall to the left and heavy rainfall to the right. Take a look back at the bottom graph in the figure above (figure 3, their figure S1). Note that the horizontal axis is logarithmic, with a ratio of over 1000 from left to right.

It isn’t an immediately intuitive graph. Basically there are two sets of curves. The left “cluster” is how often each rainfall amount occurred, with the black line being GPCP observations. The right “cluster” is how much rainfall fell (as a percentage of total rainfall) at each rainfall amount, and again the black line is observations.

So lighter rainfall, like 1mm/day and below, accounts for about 50% of occurrences but, being light rainfall, accounts for less than 10% of total rainfall.

To facilitate discussion regarding rainfall characteristics in this work, we define, based on the ensemble model PDF, three major rain types: light rain (LR), moderate rain (MR), and heavy rain (HR) respectively as those with monthly mean rain rate below the 20th percentile (<0.3 mm/day), between the 40th–70th percentile (0.9–2.4 mm/day), and above the 98.5th percentile (>9 mm/day). An extremely heavy rain (EHR) type defined at the 99.9th percentile (>24 mm/day) will also be referred to, as appropriate.
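The thresholds in the quote can be turned into a tiny classifier – a sketch only, using the quoted band edges. Note that the percentile-based definitions deliberately leave gaps (0.3–0.9 and 2.4–9 mm/day), which I label as unclassified:

```python
def rain_type(rate_mm_day):
    """Classify a monthly-mean rain rate using the thresholds from Lau et al."""
    if rate_mm_day > 24:
        return "extremely heavy"   # EHR: above the 99.9th percentile
    if rate_mm_day > 9:
        return "heavy"             # HR: above the 98.5th percentile
    if 0.9 <= rate_mm_day <= 2.4:
        return "moderate"          # MR: 40th-70th percentile
    if rate_mm_day < 0.3:
        return "light"             # LR: below the 20th percentile
    return "unclassified"          # the gaps between the named bands

for r in [0.1, 1.5, 12.0, 30.0]:
    print(r, "->", rain_type(r))
```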

Here is a geographical breakdown of the total and then the rainfall in these three categories, model mean on the left and observations on the right:

From Lau et al 2013

Figure 4 – Click to enlarge

We can see that the models tend to overestimate the heavy rain and underestimate the light rain. These graphics are excellent because they help us to see the geographical distribution.

Now in the graphs below we see at the top the changes in frequency of mean precipitation (60S-60N) as a function of rain rate; and at the bottom we see the % change in rainfall per K of temperature change, again as a function of rain rate. Note that the bottom graph also has a logarithmic scale for the % change, so as you move up each grid square the value is doubled.

The different models are also helpfully indicated so the spread can be seen:

From Lau et al 2013

Figure 5 – Click to enlarge

Notice that the models are all predicting quite a high % change in rainfall per K for the heaviest rain – something around 50%. In contrast the light rainfall is expected to be up a few % per K and the medium rainfall is expected to be down a few % per K.

Globally, rainfall increases by 4.5%, with a sensitivity (dP/P/dT) of 1.4% per K

Here is a table from their supplementary material with a zonal breakdown of changes in mean rainfall (so not divided into heavy, light etc). For the non-maths people: the first row, dP/P, is just the % change in precipitation (“d” in front of a variable means “change in that variable”), the second row is the change in temperature, and the third row is the % change in rainfall per K (or ºC) of warming from GHGs:

From Lau et al 2013

Figure 6 – Click to enlarge
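As a sanity check on the table’s arithmetic: the sensitivity row is just dP/P divided by dT. So the global figures quoted above (4.5% rainfall increase, 1.4% per K) imply an ensemble-mean warming of a bit over 3K at the time of tripled CO2:

```python
dP_over_P = 4.5    # % change in global rainfall at ~tripled CO2 (Lau et al)
sensitivity = 1.4  # % change in rainfall per K of warming (Lau et al)
implied_dT = dP_over_P / sensitivity  # K of warming consistent with both figures
print(f"implied ensemble-mean warming: {implied_dT:.1f} K")
```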

Here are the projected geographical distributions of the changes in mean (top left), heavy (top right), medium (bottom left) and light rain (bottom right) – using their earlier definitions – under tripling CO2:

From Lau et al 2013

Figure 7 – Click to enlarge

And as a result of these projections, the authors also show the number of dry months and the projected changes in number of dry months:

From Lau et al 2013

Figure 8 – Click to enlarge

The authors conclude:

The IPCC CMIP5 models project a robust, canonical global response of rainfall characteristics to CO2 warming, featuring an increase in heavy rain, a reduction in moderate rain, and an increase in light rain occurrence and amount globally.

For a scenario of 1% CO2 increase per year, the model ensemble mean projects at the time of approximately tripling of the CO2 emissions, the probability of occurring of extremely heavy rain (monthly mean >24mm/day) will increase globally by 100%–250%, moderate rain will decrease by 5%–10% and light rain will increase by 10%–15%.

The increase in heavy rain is most pronounced in the equatorial central Pacific and the Asian monsoon regions. Moderate rain is reduced over extensive oceanic regions in the subtropics and extratropics, but increased over the extratropical land regions of North America, and Eurasia, and extratropical Southern Oceans. Light rain is mostly found to be inversely related to moderate rain locally, and with heavy rain in the central Pacific.

The model ensemble also projects a significant global increase up to 16% more frequent in the occurrences of dry months (drought conditions), mostly over the subtropics as well as marginal convective zone in equatorial land regions, reflecting an expansion of the desert and arid zones..


..Hence, the canonical global rainfall response to CO2 warming captured in the CMIP5 model projection suggests a global scale readjustment involving changes in circulation and rainfall characteristics, including possible teleconnection of extremely heavy rain and droughts separated by far distances. This adjustment is strongly constrained geographically by climatological rainfall pattern, and most likely by the GHG warming induced sea surface temperature anomalies with unstable moister and warmer regions in the deep tropics getting more heavy rain, at the expense of nearby marginal convective zones in the tropics and stable dry zones in the subtropics.

Our results are generally consistent with so-called “the rich-getting-richer, poor-getting-poorer” paradigm for precipitation response under global warming..


This article has basically presented the results of one paper, which demonstrates consistency in model response of rainfall to doubling and tripling of CO2 in the atmosphere. In subsequent articles we will look at the underlying physics constraints, at time-series over recent decades and try to make some kind of assessment.

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

Impacts XI – Rainfall 1


A canonical response of precipitation characteristics to global warming from CMIP5 models, William K.-M. Lau, H.-T. Wu, & K.-M. Kim, GRL (2013) – free paper

Further Reading

Here are a bunch of papers that I found useful, for readers who want to dig into the subject. Most of them are available for free via Google Scholar, but one of the most helpful to me (first in the list), Allen & Ingram 2002, I could only access by paying $4 to rent it for a couple of days.

Allen MR, Ingram WJ (2002) Constraints on future changes in climate and the hydrologic cycle. Nature 419:224–232

Allan RP (2006) Variability in clear-sky longwave radiative cooling of the atmosphere. J Geophys Res 111:D22, 105

Allan, R. P., B. J. Soden, V. O. John, W. Ingram, and P. Good (2010), Current changes in tropical precipitation, Environ. Res. Lett., doi:10.1088/1748-9326/5/52/025205

Physically Consistent Responses of the Global Atmospheric Hydrological Cycle in Models and Observations, Richard P. Allan et al, Surv Geophys (2014)

Held IM, Soden BJ (2006) Robust responses of the hydrological cycle to global warming. J Clim 19:5686–5699

Changes in temperature and precipitation extremes in the CMIP5 ensemble, VV Kharin et al, Climatic Change (2013)

Energetic Constraints on Precipitation Under Climate Change, Paul A. O’Gorman et al, Surv Geophys (2012) 33:585–608

Trenberth, K. E. (2011), Changes in precipitation with climate change, Clim. Res., 47, 123–138, doi:10.3354/cr00953

Zahn M, Allan RP (2011) Changes in water vapor transports of the ascending branch of the tropical circulation. J Geophys Res 116:D18111

If we want to assess forecasts of floods, droughts and crop yields then we will need to know rainfall. We will also need to know temperature of course.

The forte of climate models is temperature. Rainfall is more problematic.

Before we get to model predictions about the future we need to review observations and the ability of models to reproduce them. Observations are also problematic – rainfall varies locally and over short durations. And historically we lacked effective observation systems in many locations and regions of the world, so data has to be pieced together and estimated from reanalysis.

Smith and his colleagues created a new rainfall dataset. Here is a comment from their 2012 paper:

Although many land regions have long precipitation records from gauges, there are spatial gaps in the sampling for undeveloped regions, areas with low populations, and over oceans. Since 1979 satellite data have been used to fill in those sampling gaps. Over longer periods gaps can only be filled using reconstructions or reanalyses..

Here are two views of the global precipitation data from a dataset which starts with the satellite era, that is, 1979 onwards – GPCP (Global Precipitation Climatology Project):

From Adler et al 2003

Figure 1

From Adler et al 2003

Figure 2

For historical data before satellites we only have rain gauge data. The GPCC dataset, explained in Becker et al 2013, shows the number of stations over time by region:

From Becker et al 2013

Figure 3- Click to expand

And the geographical distribution of rain gauge stations at different times:

From Becker et al 2013

Figure 4 – Click to expand

The IPCC compared the global trends over land from four different datasets over the last century and the last half-century:

From IPCC AR5 Ch. 2

Figure 5 – Click to expand

And the regional trends:

From IPCC AR5 Ch. 2

Figure 6 – Click to expand

Here are the graphs of the annual change in rainfall; note the different scales for each region (as we would expect, given the difference in average rainfall between regions):

From IPCC AR5 ch 2

Figure 7

We see that the decadal or half-decadal variation is much greater than any apparent long-term trend. The trend data (as reviewed by the IPCC in figs 5 & 6) shows significant differences between the datasets, but when we compare the time series the datasets appear to match up better than the trend comparisons indicate.

The data with the best historical coverage is 30ºN – 60ºN and the trend values for 1951-2000 (from different reconstructions) range from an annual increase of 1 to 1.5 mm/yr per decade (fig 6 / table 2.10 of IPCC report). This is against an absolute value of about 1000 mm/yr in this region (reading off the climatology in figure 2).

This is just me trying to put the trend data in perspective.
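The arithmetic behind that perspective, as a sketch (the values are the ones read off the IPCC figures above):

```python
# Trend range for 30N-60N, 1951-2000, across reconstructions (IPCC table 2.10)
trend_low, trend_high = 1.0, 1.5  # mm/yr of annual increase, per decade
climatology = 1000.0              # mm/yr, approximate annual rainfall in this band

pct_low = 100 * trend_low / climatology    # trend as % of annual rainfall
pct_high = 100 * trend_high / climatology
print(f"trend is roughly {pct_low:.2f}-{pct_high:.2f}% of annual rainfall per decade")
```

So the century-scale trend amounts to around a tenth of a percent of annual rainfall per decade in the best-observed band.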


Here is a figure from IPCC AR5 chapter 9 comparing models to satellite-era rainfall observations. Top left is observations (basically the same dataset as figure 1 in this article, over a slightly longer period and with different colors) and bottom right is the percentage error of the model average with respect to observations:

From IPCC AR5 ch 9

Figure 8 – Click to expand

We can see that the average of all models has substantial errors on mean rainfall.


IPCC AR5 Chapter 2

Improved Reconstruction of Global Precipitation since 1900, Smith, Arkin, Ren & Shen, Journal of Atmospheric and Oceanic Technology (2012)

The Version-2 Global Precipitation Climatology Project (GPCP) Monthly Precipitation Analysis (1979–Present), Adler et al, Journal of Hydrometeorology (2003)

A description of the global land-surface precipitation data products of the Global Precipitation Climatology Centre with sample applications including centennial (trend) analysis from 1901–present, A Becker, Earth Syst. Sci. Data (2013)

FitzGerald et al 2008:

Sea-level rise (SLR) poses a particularly ominous threat because 10% of the world’s population (634 million people) lives in low-lying coastal regions within 10 m elevation of sea level (McGranahan et al. 2007). Much of this population resides in portions of 17 of the world’s 30 largest cities, including Mumbai, India; Shanghai, China; Jakarta, Indonesia; Bangkok, Thailand; London; and New York.

In the last article – Sinking Megacities – we saw that some of these cities are sinking due to ground water depletion. For those megacities, this is a much more serious threat than global sea level rise (which is probably why we see so many marches and protests about ground water depletion).

The paper continues:

..The potential loss of life in low-lying areas is even more graphically illustrated by the 1970 Bhola cyclone that traveled northward through the Bay of Bengal producing a 12-m-high wall of water that drowned a half million people in East Pakistan (now Bangladesh) (Garrison 2005).

In Bangladesh, storms and cyclones are much more of a threat than sea level rise. Here is Karim and Mimura (2008) listing the serious cyclones over the last 60 years:

From Karim and Mimura 2008

Figure 1 – Click to expand

There is an interesting World Bank Report from 2011. First on floods:

In an average year, nearly one quarter of Bangladesh is inundated, with more than three-fifths of land area at risk of floods of varying intensity (Ahmed and Mirza 2000). Every four or five years, a severe flood occurs during the monsoon season, submerging more than three-fifths of the land..

The most recent exceptional flood, which occurred in 2007, inundated 62,300 km² or 42 percent of total land area, causing 1,110 deaths and affecting 14 million people; 2.1 million ha of standing crop land were submerged, 85,000 houses completely destroyed, and 31,533 km of roads damaged. Estimated asset losses from this one event totaled US$1.1 billion (BWDB 2007).

Flooding in Bangladesh results from a complex set of factors, key among which are extremely low and flat topography, uncertain transboundary flow, heavy monsoon rainfall, and high vulnerability to tidal waves and congested drainage channels. Two-thirds of Bangladesh’s land area is less than 5 m above sea level. Each year, an average flow of 1,350 billion m³ of water from the GBM [Ganges, Brahmaputra, and Meghna] basin drains through the country.

From World Bank 2011

Figure 2

I recommend this World Bank report, very interesting, and you can see some idea of the costs of mitigating against floods. These problems are already present – floods are a regular occurrence, some mitigation has already taken place, and more mitigation continues.

I read the entire report, and all I could find was that rising sea levels would exacerbate the problems already faced from storm surges (p. 6):

Increase in ocean surface temperature and rising sea levels are likely to intensify cyclonic storm surges and further increase the depth and extent of storm surge induced coastal inundation.

However, the projections indicate that sea level rise is much less of a problem compared with possible increases in future storm surges and possible increases in future flooding. And compared with current storm surges and current flooding. We will look at floods and storm surges in future articles.

In the report it’s clear that floods and storms are already major problems. Sea level is harder to analyze. Trying to account for a sea level rise of 0.3m by 2050 when severe storm surges are already 5-10m is not going to make much of a difference. If we had accurate prediction of storm surges, to +/- 0.3m, then sea level rise of 0.3m should definitely be accounted for. But we don’t have anything like that kind of accuracy.
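To make the order-of-magnitude point explicit (a trivial sketch using the numbers above):

```python
slr_2050 = 0.3                     # m, projected sea level rise by 2050
surge_low, surge_high = 5.0, 10.0  # m, severe storm surges already experienced

frac_low = 100 * slr_2050 / surge_high   # share vs a 10 m surge
frac_high = 100 * slr_2050 / surge_low   # share vs a 5 m surge
print(f"0.3 m of SLR is {frac_low:.0f}-{frac_high:.0f}% of a severe surge height")
```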

Well, they do some calculations of adaptation against storm surges for projected changes up to 2050:

Under the baseline scenario, the adaptation costs total $2.46 billion. In a changing climate, the additional adaptation cost totals US$892 million.

In essence the question is “what is the storm surge for a once-in-10-years storm in 2050”? (I’m sure Bangladesh would really prefer to build protection against a once-in-100-years storm). An extra $1bn for future problems, or a total of about $3.4bn to cover existing and future problems, seems like money that would be very well spent.

Nicholls and Cazenave (2010), in relation to the susceptible coastlines of Asia and Africa, comment on adaptation:

Many impact studies do not consider adaptation, and hence determine worst-case impacts. Yet, the history of the human relationship with the coast is one of an increasing capacity to adapt to adverse change. In addition, the world’s populated coasts became increasingly managed and engineered over the 20th century. The subsiding cities discussed above all remain protected to date, despite large relative SLR.

Analysis based on benefit-cost methods show that protection would be widespread as well-populated coastal areas have a high value and actual impacts would only be a small fraction of the potential impacts, even assuming high-SLR (>1 m/century) scenarios. This suggests that the common assumption of a widespread forced retreat from the shore in the face of SLR is not inevitable. In many densely populated coastal areas, communities advanced the coast seaward via land claim owing to the high value of land (e.g., Singapore).

Yet, protection often attracts new development in low lying areas, which may not be desirable, and coastal defense failures have occurred, such as New Orleans in 2005. Hence, we must choose between protection, accommodation, and planned retreat adaptation options. This choice is both technical and sociopolitical, addressing which measures are desirable, affordable, and sustainable in the long term. Adaptation remains a major uncertainty concerning the actual impacts of SLR.

In the World Bank 2011 report, in chapter 4, after their analysis on risks and costs of storm-induced inundations in 2050 resulting from projected higher cyclonic wind speeds and a projected increase in sea level of 0.27m, they comment, p.24:

As a cautionary note, it should be noted that this analysis did not address the out-migration from coastal zones that a rise in sea level and intensified cyclonic storm surges might induce.

In fact the cost data assumes population growth in the vulnerable regions.

Likewise, here is Hinkel et al (2014):

Coastal flood damages are expected to increase significantly during the 21st century as sea levels rise and socioeconomic development increases the number of people and value of assets in the coastal floodplain.

[Emphasis added].

This assumption bias creates an interpretation challenge. It would be useful to see notes to the effect: “If the population migrates away from this area due to the higher risk, instead the cost will be $X, assuming a reduction of Y% in population in this region by 2050”. This extra item of data would create a useful contrast, and I’m guessing that we would see impact assessments reduce by a factor of 5 or 10.

It is difficult to see realistic global sea level changes, even to the end of the century, having a big impact on Bangladesh compared with their current problems of annual flooding and frequent large storm surges. Of course, adding an extra 0.5m to the sea level doesn’t improve the situation, but it is an order of magnitude smaller than storm surges.

The adaptation costs estimated by the World Bank to protect against storm surges (protection already required today, and at least a work in progress) seem moderate in value.

Lastly, I wasn’t able to find a detailed elevation map (with, say, 0.5m resolution); instead, the ones I found grade the elevation with respect to sea level in fairly coarse steps. I’m sure the information exists but may be proprietary (in GIS data, for example):

Figure 3 – Click to expand

I have to admit that I believed something like 25% of the Bangladesh population were around 1.0m or less above current sea level. This map says that the 0-3m area is quite small. If anyone does have a better resolution map I will post it up.


Coastal Impacts Due to Sea-Level Rise, Duncan M. FitzGerald et al, Annual Rev. Earth Planet. Sci. (2008)

Impacts of climate change and sea-level rise on cyclonic storm surge floods in Bangladesh, Mohammed Fazlul Karim & Nobuo Mimura, Global Environmental Change (2008) – free paper

The Cost of Adapting to Extreme Weather Events in a Changing Climate – Bangladesh, World Bank (2011) – free report

Sea-Level Rise and Its Impact on Coastal Zones, Robert J Nicholls & Anny Cazenave, Science (2010) – free paper

Coastal flood damage and adaptation costs under 21st century sea-level rise, Jochen Hinkel et al, PNAS (2014) – free paper

In Impacts – VIII – Sea level 3 – USA I suggested this conclusion:

So the cost of sea level rise for 2100 in the US seems to be a close to zero cost problem.

Probably the provocative way I wrote the conclusion confused some people. I should have said that it was a very expensive problem. But that it wasn’t a problem that society should pay for, given that anyone moving to the coast since 2005 at the latest would have known that future sea level was considered to be a major problem. By 2100 the youngest people still living right on the sea front, who bought property there before 2005, would be at least 115 years old.

The idea is that “externalities” as economists call them should be paid by the creators of the problem, not the people that incur the problem. In this case, the “victims” are people who ignored the evidence and moved to the coast anyway. Are they still victims? That was my point.

Well, what about outside the US?

Some megacities have huge problems. Here is Nicholls 2011:

Coastal areas constitute important habitats, and they contain a large and growing population, much of it located in economic centers such as London, New York, Tokyo, Shanghai, Mumbai, and Lagos. The range of coastal hazards includes climate-induced sea level rise, a long-term threat that demands broad response.

Global sea levels rose 17 cm through the twentieth century, and are likely to rise more rapidly through the twenty-first century when a rise of more than 1 m is possible.

In some locations, these changes may be exacerbated by

(1) increases in storminess due to climate change, although this scenario is less certain
(2) widespread human-induced subsidence due to ground fluid withdrawal from, and drainage of, susceptible soils, especially in deltas.


Over the twentieth century, the parts of Tokyo and Osaka built on deltaic areas subsided up to 5 m and 3 m, respectively, a large part of Shanghai subsided up to 3 m, and Bangkok subsided up to 2 m.

This human-induced subsidence can be mitigated by stopping shallow, subsurface fluid withdrawals and managing water levels, but natural “background” rates of subsidence will continue, and RSLR will still exceed global trends in these areas. A combination of policies to mitigate subsidence has been instituted in the four delta cities mentioned above, combined with improved flood defenses and pumped drainage systems designed to avoid submergence and/ or frequent flooding.

In contrast, Jakarta and Metro Manila are subsiding significantly, with maximum subsidence of 4 m and 1 m to date, respectively (e.g., Rodolfo and Siringan, 2006; Ward et al., 2011), but little systematic policy response is in place in either city, and future flooding problems are anticipated.

Subsidence graphic:

From Nicholls 2011

Figure 1

To put these figures in context, sea level rise from 1900-2000 was about 0.2m and according to the latest IPCC report the forecast of sea level rise by 2100 might be around an additional 0.5m (for RCP 6.0, see earlier article). In the light of the idea that global society should pay for problems to people caused by global society, perhaps the problems of Shanghai, Bangkok and other sinking cities are not global problems?

Here is Wang et al from 2012:

Shanghai is low-lying, with an elevation of 3–4 m. A quarter of the area lies below 3 m. The city’s flood-control walls are currently more than 6 m high. However, given the trend of sea level rise and land subsidence, this is inadequate. Shanghai is frequently affected by extreme tropical storm surges. The risk of flooding from overtopping is considerable..

..From 1921 to 1965, the average cumulative subsidence of the city center was 1.76 m, with a maximum of 2.63 m. From 1966 to 1985, a monitoring network was established and subsidence was mitigated through artificial recharge. Land subsidence was stabilized at an average of 0.9 mm/year. As a result of rapid urban development and large-scale construction projects between 1986 and 1997, subsidence of the downtown area increased rapidly, at an average rate of 10.2 mm/year..

..In 2100, sea level rise and land subsidence will be far greater than before. Sea level rise is estimated to be 43 cm, while land subsidence is estimated to be 3–229 cm, and neotectonic subsidence is estimated to be 14 cm. Flooding will be severe in 2100 (Fig. 8).

[Note I changed the data in the last paragraph cited to round numbers in cm from their values quoted to 0.01cm – for example, 43cm instead of the paper’s values of 43.31 etc].

So for Shanghai at least global sea level rise is not really the problem.
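Putting the quoted Shanghai numbers for 2100 side by side (all in cm, from the Wang et al passage as rounded above):

```python
# Contributions to relative sea level rise in Shanghai by 2100 (Wang et al 2012)
sea_level_rise = 43                       # cm, global sea level rise
subsidence_low, subsidence_high = 3, 229  # cm, land subsidence range
neotectonic = 14                          # cm, neotectonic subsidence

total_low = sea_level_rise + subsidence_low + neotectonic
total_high = sea_level_rise + subsidence_high + neotectonic
slr_share = 100 * sea_level_rise / total_high  # SLR's share of the worst case
print(f"total relative rise: {total_low}-{total_high} cm; "
      f"global SLR is {slr_share:.0f}% of the worst case")
```

At the high end of the subsidence range, global sea level rise is around a sixth of the total relative rise.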

Given that I don’t pay much attention to media outlets I probably missed the big Marches against Ground Water Depletion Slightly Accentuating Global Warming’s Sea Level Rise in Threatened Megacities.

As with the USA data, the question of storm surges adding to global sea level rise is still on the agenda (i.e., has not yet been discussed in this series).


Planning for the impacts of sea level rise, RJ Nicholls, Oceanography (2011)

Evaluation of the combined risk of sea level rise, land subsidence, and storm surges on the coastal areas of Shanghai, China, Jun Wang, Wei Gao, Shiyuan Xu & Lizhong Yu, Climatic Change (2012)