Over in another article, a commenter claims:

..Catastrophic predictions depend on accelerated forcings due to water vapour feedback. This water vapour feedback is simply written into climate models as parameters. It is not derived from any kind simulation of first principles in the General Circulation Model runs (GCMs)..

[Emphasis added]

I’ve seen this article of faith a lot. If you frequent fantasy climate blogs, where people learn first principles and modeling basics from comments by other equally well-educated commenters, this is the kind of contribution you will be able to make after years of study.

None of us knowed nothing, so we all sat around and teached each other.

Actually, how the atmospheric section of climate models works is pretty simple in principle. The atmosphere is divided up into a set of blocks (a grid), with each block having dimensions of something like 200 km x 200 km x 500 m high. The values vary a lot and depend on the resolution of the model; this is just to give you an idea.

Then each block has an E-W wind; a N-S wind; a vertical velocity; temperature; pressure; the concentrations of CO2, water vapor, methane; cloud fractions, and so on.

Then the model “steps forward in time” and uses equations to calculate the new values of each item.

The earth is spinning, and conservation of momentum, heat and mass is applied to each block. The principles of radiation through each block in each direction apply via parameterizations (note 1).

Specifically on water vapor – the change in mass of water vapor in each block is calculated from the amount of water evaporated, the amount of water vapor condensed, and the amount of rainfall taking water out of the block, together with the movement of air via the E-W, N-S and up/down winds. The final amount of water vapor in each time step affects the radiation emitted upwards and downwards.
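
To make this concrete, here is a minimal sketch of how the water vapor in each grid block might be stepped forward in time. This is my own illustration, not code from any actual GCM – the variable names, the periodic boundaries and the first-order upwind advection are all simplifying assumptions:

```python
import numpy as np

def step_water_vapor(q, u, v, w, evap, cond, precip, dx, dy, dz, dt):
    """
    Toy update of the water vapor mixing ratio q (kg/kg) in every grid
    block for one time step: local sources and sinks (evaporation,
    condensation, precipitation) plus transport by the E-W (u), N-S (v)
    and vertical (w) winds. Illustrative only - real GCMs use far more
    sophisticated numerics.
    """
    # Local sources and sinks in each block (kg/kg per second)
    dq_dt = evap - cond - precip

    # Advection by the winds: first-order upwind differences,
    # with periodic boundaries (np.roll) purely for simplicity
    for wind, axis, spacing in ((u, 0, dx), (v, 1, dy), (w, 2, dz)):
        dq_dt -= wind * np.where(
            wind > 0,
            (q - np.roll(q, 1, axis=axis)) / spacing,
            (np.roll(q, -1, axis=axis) - q) / spacing)

    return q + dq_dt * dt
```

The updated water vapor field in each block then feeds into the radiation calculation for the next time step. Nowhere in this kind of scheme is a “water vapor feedback” written in as a parameter – the feedback emerges from the simulated evaporation, condensation, transport and radiation.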

It’s more involved and you can read whole books on the subject.

I doubt that anyone who has troubled themselves to read even one paper on climate modeling basics could reach the conclusion so firmly believed in fantasy climate blogs and repeated above. If you never need to provide evidence for your claims..

For this blog we do like to see proof of claims, so please take a read of Description of the NCAR Community Atmosphere Model (CAM 4.0) and just show where this water vapor feedback is written in. Or pick another climate model used by a climate modeling group.

This is the kind of exciting stuff you find in the 200+ pages of an atmospheric model description:

From CAM4 Technical Note

You can also find details of the shortwave and longwave radiation parameterization schemes and how they apply to water vapor.

Here is a quote from The Global Circulation of the Atmosphere (ref below):

Essentially all GCMs yield water vapor feedback consistent with that which would result from holding relative humidity approximately fixed as climate changes. This is an emergent property of the simulated climate system; fixed relative humidity is not in any way built into the model physics, and the models offer ample means by which relative humidity could change.

From Water Vapor Feedback and Global Warming, a paper well worth reading for anyone who wants to understand this key question in climate:

Water vapor is the dominant greenhouse gas, the most important gaseous source of infrared opacity in the atmosphere. As the concentrations of other greenhouse gases, particularly carbon dioxide, increase because of human activity, it is centrally important to predict how the water vapor distribution will be affected. To the extent that water vapor concentrations increase in a warmer world, the climatic effects of the other greenhouse gases will be amplified. Models of the Earth’s climate indicate that this is an important positive feedback that increases the sensitivity of surface temperatures to carbon dioxide by nearly a factor of two when considered in isolation from other feedbacks, and possibly by as much as a factor of three or more when interactions with other feedbacks are considered. Critics of this consensus have attempted to provide reasons why modeling results are overestimating the strength of this feedback..

Remember, just a few years of study at fantasy climate blogs can save an hour or more of reading papers on atmospheric physics.

References

Description of the NCAR Community Atmosphere Model (CAM 4) – free paper

On the Relative Humidity of the Atmosphere, Chapter 6 of The Global Circulation of the Atmosphere, edited by Tapio Schneider & Adam Sobel, Princeton University Press (2007)

Water Vapor Feedback and Global Warming, Held & Soden, Annu. Rev. Energy Environ (2000) – free paper

Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006)

Notes

Note 1: The very accurate calculation of radiation transfer is done via line-by-line calculations, but these are computationally very expensive and so a simpler approximation is used in GCMs. Of course there are many studies comparing parameterizations vs line-by-line calculations. One example is Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006).


Two Basic Foundations

This article will be a placeholder article to filter out a select group of people: the many people who arrive and confidently explain that atmospheric physics is fatally flawed (without the benefit of having read a textbook). They don’t think they are confused; in their minds they are helpfully explaining why the standard theory is wrong. There have been a lot of such people.

Almost none of them ever provides an equation. If on rare occasions they do provide a random equation, they never explain what is wrong with the 65-year-old equation of radiative transfer (explained by Nobel prize winner Subrahmanyan Chandrasekhar, see note 1), which is derived from fundamental physics. Nor do they explain why observation matches the standard theory. For example (and I have lots of others), here is a graph produced nearly 50 years ago (referenced almost 30 years ago) of the observed spectrum at the top of atmosphere vs the calculated spectrum from the standard theory.

Why is it so accurate?

From Atmospheric Radiation, Goody (1989)

If it were me, and I thought the theory was wrong, I would read a textbook and try to explain why the textbook was wrong. But I’m old school and generally expect physics textbooks to be correct, short of some major revolution. Conventionally, when you “prove” textbook theory wrong you are expected to explain why everyone got it wrong before.

There is a simple reason why our many confident visitors never do that. They don’t know anything about the basic theory. Entertaining as that is, and I’ll be the first to admit that it has been highly entertaining, it’s time to prune comments from overconfident and confused visitors.

I am not trying to push away people with questions. If you have questions please ask. This article is just intended to limit the tsunami of comments from visitors with their overconfident non-textbook understanding of physics – that have often dominated comment threads. 

So here are my two questions for the many visitors with huge confidence in their physics knowledge. Dodging isn’t an option. You can say “not correct” and explain your alternative formulation with evidence, but you can’t dodge.

Answer these two questions:

1. Is the equation of radiative transfer correct or not?

Iλ(0) = Iλ(τm)·e^(−τm) + ∫₀^τm Bλ(T)·e^(−τ) dτ     [16]

where τ is the optical depth measured from the emitting location up to the top of atmosphere, and τm is the total optical depth of the atmosphere (down to the surface).

The intensity at the top of atmosphere equals..

  • The surface radiation attenuated by the transmittance of the atmosphere, plus..
  • The sum of all the contributions of atmospheric radiation – each contribution attenuated by the transmittance from that location to the top of atmosphere

Of course (and I’m sure I don’t even need to spell it out) we need to integrate across all wavelengths, λ, to get the flux value.

For the derivation see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. If you don’t agree it is correct then explain why.
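
For readers who like to see numbers, here is a minimal numerical check of equation [16] – entirely my own sketch, assuming a single wavelength and an isothermal atmosphere so that the integral also has a closed form to compare against:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck(wavelength, temp):
    """Planck spectral radiance B_lambda(T), W/(m^2 sr m)."""
    return (2 * H * C**2 / wavelength**5
            / (np.exp(H * C / (wavelength * KB * temp)) - 1))

# Illustrative (assumed) values: one wavelength, a 288 K surface and a
# single-temperature 250 K atmosphere with total optical depth 2
wl, t_surface, t_atmos, tau_m = 15e-6, 288.0, 250.0, 2.0

# Discretize equation [16]: surface term plus the integral of B*exp(-tau)
tau, dtau = np.linspace(0, tau_m, 100000, retstep=True)
i_toa = (planck(wl, t_surface) * np.exp(-tau_m)
         + np.sum(planck(wl, t_atmos) * np.exp(-tau) * dtau))

# For an isothermal atmosphere the integral has a closed form:
#   B(Ts)*exp(-tau_m) + B(Ta)*(1 - exp(-tau_m))
analytic = (planck(wl, t_surface) * np.exp(-tau_m)
            + planck(wl, t_atmos) * (1 - np.exp(-tau_m)))

print(i_toa, analytic)  # the two agree closely
```

As τm grows, the surface term dies away and the radiation to space comes increasingly from the (colder) atmosphere – which is the “greenhouse” effect in one equation.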

[Note that other articles explain the basics. For example – The “Greenhouse” Effect Explained in Simple Terms, which has many links to other in depth articles].

If you don’t understand the equation you don’t understand the core of radiative atmospheric physics.

—-

2. Is this graphic with explanation from an undergraduate heat transfer textbook (Fundamentals of Heat and Mass Transfer, 6th edition, Incropera and DeWitt 2007) correct or not?

From "Fundamentals of Heat and Mass Transfer, 6th edition", Incropera and DeWitt (2007)

From “Fundamentals of Heat and Mass Transfer, 6th edition”, Incropera and DeWitt (2007)

You can see that radiation is emitted from a hot surface and absorbed by a cool surface, and that radiation is emitted from a cool surface and absorbed by a hot surface. More examples of this principle, including equations, can be found in Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics – scanned pages from six undergraduate heat transfer textbooks (seven if we include the one added in comments after entertaining commenter Bryan suggested the first six were “cherry-picked” and offered his preferred textbook, which had exactly the same equations).

—-

What I will be doing for the subset of new visitors with their amazing and confident insights is to send them to this article and ask for answers. In the past I have never been able to get a single member of this group to commit. The reason why is obvious.

But – if you don’t answer, your comments may never be published.

Once again, this is not designed to stop regular visitors asking questions. Most people interested in climate don’t understand equations, calculus, radiative physics or thermodynamics – and that is totally fine.

Call it censorship if it makes you sleep better at night.

Notes

Note 1 – I believe the theory is older than Chandrasekhar but I don’t have older references. It derives from basic emission (Planck), absorption (Beer-Lambert) and the first law of thermodynamics. Chandrasekhar published this in his 1952 book Radiative Transfer (the link is the 1960 reprint). This isn’t the “argument from authority”; I’m just pointing out that the theory has been long established. Punters are welcome to try to prove it wrong – it’s just that no one ever does.

In recent articles we have looked at rainfall and there is still more to discuss. This article changes tack to look at tropical cyclones, prompted by the recent US landfall of Harvey and Irma along with questions from readers about attribution and the future.

It might be surprising to find the following statement from leading climate scientists (Kevin Walsh and many co-authors in 2015):

At present, there is no climate theory that can predict the formation rate of tropical cyclones from the mean climate state.

The subject gets a little involved so let’s dig into a few papers. First from Gabriel Vecchi and some co-authors in 2008 in the journal Science. The paper is very brief and essentially raises one question – has the recent rise in total Atlantic cyclone intensity been a result of increases in absolute sea surface temperature (SST) or relative sea surface temperature:

From Vecchi et al 2008

Figure 1

The top graph (above) shows a correlation of 0.79 between SST and PDI (power dissipation index). The bottom graph shows a correlation of 0.79 between relative SST (local sea surface temperature minus the average tropical sea surface temperature) and PDI.

With more CO2 in the atmosphere from burning fossil fuels we expect a warmer SST in the tropical Atlantic in 2100 than today. But we don’t expect the tropical Atlantic to warm faster than the tropics in general.

If cyclone intensity is dependent on local SST we expect more cyclones, or more powerful cyclones. If cyclone intensity is dependent on relative SST we expect no increase in cyclones. This is because climate models predict warmer SSTs in the future but not warmer Atlantic SSTs than the tropics. The paper also shows a few high resolution models – green symbols – sitting close to the zero change line.

Now predicting tropical cyclones with GCMs has a fundamental issue – the grid scale of a modern high-resolution GCM is around 100 km, but cyclone prediction requires higher resolution because of the relatively small size of the storms.

Thomas Knutson and co-authors (including the great Isaac Held) produced a 2007 paper with an interesting method (of course, the idea is not at all new). They input actual meteorological data (i.e. real history from NCEP reanalysis) into a high resolution model which covered just the Atlantic region. Their aim was to see how well this model could reproduce tropical storms. There are some technicalities to the model – the output is constantly “nudged” back towards the actual climatology and out at the boundaries of the model we can’t expect good simulation results. The model resolution is 18km.

The main question addressed here is the following: Assuming one has essentially perfect knowledge of large-scale atmospheric conditions in the Atlantic over time, how well can one then simulate past variations in Atlantic hurricane activity using a dynamical model?

They comment that the cause of the recent (at that time) upswing in hurricane activity “remains unresolved”. (Of course, fast forward to 2016, prior to the recent two large landfall hurricanes, and the overall activity is at a 1970 low. In early 2018, this may be revised again..).

Two interesting graphs emerge. First an excellent match between model and observations for overall frequency year on year:

From Knutson et al 2007

Figure 2

Second, an inability to predict the most intense hurricanes. The black dots are observations, the red dots are simulations from the model. The vertical axis, a little difficult to read, is SLP, or sea level pressure:

From Knutson et al 2007

Figure 3

These results are a common theme of many papers – inputting the historical climatological data into a model we can get some decent results on year to year variation in tropical cyclones. But models under-predict the most intense cyclones (hurricanes).

Here is Morris Bender and co-authors (including Thomas Knutson, Gabriel Vecchi – a frequent author or co-author in this genre, and of course Isaac Held) from 2010:

Some statistical analyses suggest a link between warmer Atlantic SSTs and increased hurricane activity, although other studies contend that the spatial structure of the SST change may be a more important control on tropical cyclone frequency and intensity. A few studies suggest that greenhouse warming has already produced a substantial rise in Atlantic tropical cyclone activity, but others question that conclusion.

This is a very typical introduction in papers on this topic. I note in passing this is a huge blow to the idea that climate scientists only ever introduce more certainty and alarm on the harm from future CO2 emissions. They don’t. However, it is also true that some climate scientists believe that recent events have been accentuated due to the last century of fossil fuel burning and these perspectives might be reported in the media. I try to ignore the media and that is my recommendation to readers on just about all subjects except essential ones like the weather and celebrity news.

This paper used a weather prediction model starting a few days before each storm to predict the outcome. If you understand the idea behind Knutson 2007 then this is just one step further – a few days prior to the emergence of an intense storm, input the actual climate data into a high resolution model and see how well the high res model predicts the observations. They also used projected future climates from CMIP3 models (note 1).

In the set of graphs below there are three points I want to highlight – and you probably need to click on the graph to enlarge it.

First, in graph B, “Zetac” is the model used by Knutson et al 2007, whereas GFDL is the weather prediction model getting better results in this paper – you can see that observations and the GFDL are pretty close in the maximum wind speed distribution. Second, the climate change predictions in E show that predictions of the future show an overall reduction in frequency of tropical storms, but an increase in the frequency of storms with the highest wind speeds – this is a common theme in papers from this genre. Third, in graph F, the results (from the weather prediction model) fed by different GCMs for future climate show quite different distributions. For example, the UKMO model produces a distribution of future wind speeds that is lower than current values.

From Bender et al 2010

Figure 4 – Click to enlarge

In this graph (S3 from the Supplementary data) we see graphs of the difference between future projected climatologies and current climatologies for three relevant parameters for each of the four different models shown in graph F in the figure above:

From Bender et al 2010

Figure 5 – Click to enlarge

This illustrates that different projected future climatologies, which all show increased SST in the Atlantic region, generate quite different hurricane intensities. The paper suggests that the reduction in wind shear in the UKMO model produces a lower frequency of higher intensity hurricanes.

Conclusion

This article illustrates that feeding higher resolution models with current data can generate realistic cyclone data in some aspects, but less so in other aspects. As we increase the model resolution we can get even better results – but this is dependent on inputting the correct climate data. As we look towards 2100 the questions are – How realistic is the future climate data? How does that affect projections of hurricane frequencies and intensities?

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

Impacts – XI – Rainfall 1

Impacts – XII – Rainfall 2

Impacts – XIII – Rainfall 3

References

Hurricanes and climate: the US CLIVAR working group on hurricanes, American Meteorological Society, Kevin Walsh et al (2015) – free paper

Whither Hurricane Activity? Gabriel A Vecchi, Kyle L Swanson & Brian J. Soden, Science (2008) – free paper

Simulation of the Recent Multidecadal Increase of Atlantic Hurricane Activity Using an 18-km-Grid Regional Model, Thomas Knutson et al, American Meteorological Society, (2007) – free paper

Modeled Impact of Anthropogenic Warming on the Frequency of Intense Atlantic Hurricanes, Morris A Bender et al, Science (2010) – free paper

Notes

Note 1: The scenario is A1B, which is similar to RCP6 – that is, an approximate doubling of CO2 by the end of the century. The simulations came from the CMIP3 suite of model results.

In XII – Rainfall 2 we saw the results of many models on rainfall as GHGs increase. They project wetter tropics, drier subtropics and wetter higher latitude regions. We also saw an expectation that rainfall will increase globally, with something like 2-3% per ºC of warming.

Here is a (too small) graph from Allen & Ingram (2002) showing the model response of rainfall under temperature changes from GHG increases. The dashed line marked “C-C” is the famous (in climate physics) Clausius–Clapeyron relation which, at current temperatures, shows a 7% change in water vapor per ºC of warming. The red triangles are the precipitation changes from model simulations showing about half of that.

From Allen & Ingram (2002)

Figure 1

Here is another graph from the same paper showing global mean temperature change (top) and rainfall over land (bottom):

From Allen & Ingram (2002)

Figure 2

The temperature has increased over the last 50 years, and models and observations show that the precipitation has.. oh, it’s not changed. What is going on?

First, the authors explain some important background:

The distribution of moisture in the troposphere (the part of the atmosphere that is strongly coupled to the surface) is complex, but there is one clear and strong control: moisture condenses out of supersaturated air.

This constraint broadly accounts for the humidity of tropospheric air parcels above the boundary layer, because almost all such parcels will have reached saturation at some point in their recent history. Physically, therefore, it has long seemed plausible that the distribution of relative humidity would remain roughly constant under climate change, in which case the Clausius-Clapeyron relation implies that specific humidity would increase roughly exponentially with temperature.

This reasoning is strongest at higher latitudes where air is usually closer to saturation, and where relative humidity is indeed roughly constant through the substantial temperature changes of the seasonal cycle. For lower latitudes it has been argued that the real-world response might be different. But relative humidity seems to change little at low latitudes under a global warming scenario, even in models of very high vertical resolution, suggesting this may be a robust ’emergent constraint’ on which models have already converged.

They continue:

If tropospheric moisture loading is controlled by the constraints of (approximately) unchanged relative humidity and the Clausius-Clapeyron relation, should we expect a corresponding exponential increase in global precipitation and the overall intensity of the hydrological cycle as global temperatures rise?

This is certainly not what is observed in models.

To clarify, the point in the last sentence is that models do show an increase in precipitation, but not at the same rate as the expected increase in specific humidity (see note 1 for new readers).

They describe their figure 2 (our figure 1 above) and explain:

The explanation for these model results is that changes in the overall intensity of the hydrological cycle are controlled not by the availability of moisture, but by the availability of energy: specifically, the ability of the troposphere to radiate away latent heat released by precipitation.

At the simplest level, the energy budgets of the surface and troposphere can be summed up as a net radiative heating of the surface (from solar radiation, partly offset by radiative cooling) and a net radiative cooling of the troposphere to the surface and to space (R) being balanced by an upward latent heat flux (LP, where L is the latent heat of evaporation and P is global-mean precipitation): evaporation cools the surface and precipitation heats the troposphere.

[Emphasis added].

Basics Digression

Picture the atmosphere over a long period of time (like a decade), and for the whole globe. If it hasn’t heated up or cooled down we know that the energy in must equal energy out (or if it has only done so only marginally then energy in is almost equal to energy out). This is the first law of thermodynamics – energy is conserved.

What energy comes into the atmosphere?

  1. Solar radiation is partly absorbed by the atmosphere (most is transmitted through and heats the surface of the earth)
  2. Radiation emitted from the earth’s surface (we’ll call this terrestrial radiation) is mostly absorbed by the atmosphere (some is transmitted straight through to space)
  3. Warm air is convected up from the surface
  4. Heat stored in evaporated water vapor (latent heat) is convected up from the surface and the water vapor condenses out, releasing heat into the atmosphere when this happens

How does the atmosphere lose energy?

  1. It radiates downwards to the surface
  2. It radiates out to space

..end of digression

Changing Energy Budget

In a warmer world, if we have more evaporation we have more latent heat transfer from the surface into the troposphere. But the atmosphere has to be able to radiate this heat away. If it can’t, then the atmosphere becomes warmer, and this reduces convection. So with a warmer surface we may have a plentiful potential supply of latent heat (via water vapor) but the atmosphere needs a mechanism to radiate away this heat.

Allen & Ingram put forward a simple conceptual equation:

ΔRc + ΔRT = LΔP

where the change in radiative cooling, ΔR, is split into two components: ΔRc, which is independent of the change in atmospheric temperature, and ΔRT, which depends only on the temperature

L = latent heat of water vapor (a constant), ΔP = change in rainfall (= change in evaporation, as evaporation is balanced by rainfall)

LΔP is about 1W/m² per 1% increase in global precipitation.

Now, if we double CO2, then before any temperature changes we decrease the outgoing longwave radiation through the tropopause (the top of the troposphere) by about 3-4W/m² and we increase atmospheric radiation to the surface by about 1W/m².

So doubling CO2, ΔRc = -2 to -3W/m²; prior to a temperature change ΔRT = 0; and so ΔP reduces.

The authors comment that increasing CO2 before any temperature change takes place reduces the intensity of the hydrological cycle and this effect was seen in early modeling experiments using prescribed sea surface temperatures.

Now, of course, the idea of doubling CO2 without any temperature change is just a thought experiment. But it’s an important thought experiment because it lets us isolate different factors.

The authors then consider their factor ΔRT:

The enhanced radiative cooling due to tropospheric warming, ΔRT, is approximately proportional to ΔT: tropospheric temperatures scale with the surface temperature change and warmer air radiates more energy, so ΔRT = kΔT, with k=3W/(m²K)..

All this is saying is that as the surface warms, the atmosphere warms at about the same rate, and the atmosphere then emits more radiation. This is why the model results of rainfall in our figure 2 above show no trend in rainfall over 50 years, and also match the observations – the constraint on rainfall is the changing radiative balance in the troposphere.
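
Putting these numbers together shows where the 2-3% per ºC figure comes from. Here is a back-of-the-envelope sketch using the values quoted above; the 3ºC equilibrium warming for doubled CO2 is my own illustrative assumption, not a number from the paper:

```python
# Rearranging Allen & Ingram's conceptual budget:
#   delta_Rc + k * delta_T = L * delta_P
delta_Rc = -2.5          # W/m^2: direct effect of doubled CO2 (from the text)
k = 3.0                  # W/m^2 per K: extra radiative cooling per degree (from the text)
watts_per_percent = 1.0  # W/m^2 per 1% change in global precipitation (from the text)
delta_T = 3.0            # K: assumed equilibrium warming for doubled CO2 (illustrative)

l_delta_p = delta_Rc + k * delta_T           # W/m^2 available to balance latent heating
delta_p_percent = l_delta_p / watts_per_percent

print(f"Precipitation change: {delta_p_percent:.1f}%")            # ~6.5%
print(f"Per degree of warming: {delta_p_percent/delta_T:.1f}%/K")  # ~2.2%/K
```

This lands within the 2-3% per ºC range quoted earlier. Note also that with ΔT = 0 the same budget gives a negative ΔP – the reduction in the intensity of the hydrological cycle before any warming, as described above.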

And so they point out:

Thus, although there is clearly usable information in fig. 3 [our figure 2], it would be physically unjustified to estimate ΔP/ΔT directly from 20th century observations and assume that the same quantity will apply in the future, when the balance between climate drivers will be very different.

There is a lot of other interesting commentary in their paper, although the paper itself is now quite dated (and unfortunately behind a paywall). In essence they discuss the difficulties of modeling precipitation changes, especially for a given region, and are looking for “emergent constraints” from more fundamental physics that might help constrain forecasts.

A forecasting system that rules out some currently conceivable futures as unlikely could be far more useful for long-range planning than a small number of ultra-high-resolution forecasts that simply rule in some (very detailed) futures as possibilities.

This is a very important point when considering impacts.

Conclusion

Increasing the surface temperature by 1ºC is expected to increase the humidity over the ocean by about 7%. This is simply the basic physics of saturation. However, climate models predict an increase in mean rainfall of maybe 2-3% per ºC. The fundamental reason is that the movement of latent heat from the surface to the atmosphere has to be radiated away by the atmosphere, and so the constraint is the ability of the atmosphere to do this. And so the limiting factor in increasing rainfall is not the humidity increase, it is the radiative cooling of the atmosphere.

We also see that despite 50 years of warming, mean rainfall hasn’t changed. Models also predict this. This is believed to be a transient state, for reasons explained in the article.

References

Constraints on future changes in climate and the hydrologic cycle, MR Allen & WJ Ingram, Nature (2002)  – freely available [thanks, Robert]

Notes

1 Relative humidity is measured as a percentage. If the relative humidity = 100% it means the air is saturated with water vapor – it can’t hold any more water vapor. If the relative humidity = 0% it means the air is completely dry. As temperature increases the ability of air to hold water vapor increases non-linearly.

For example, at 0ºC, 1kg of air can carry around 4g of water vapor, at 10ºC that has doubled to 8g, and at 20ºC it has doubled again to 15g (I’m using approximate values).

So now imagine saturated air over the ocean at 20ºC rising up and therefore cooling (it is cooler higher up in the atmosphere). By the time the air parcel has cooled down to 0ºC (this might be anything from 2km to 5km altitude) it is still saturated but is only carrying 4g of water vapor, having condensed out 11g into water droplets.
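
These approximate values are easy to check with a standard empirical formula for saturation vapor pressure – this sketch uses the Magnus approximation, one common choice among several:

```python
import numpy as np

def saturation_mixing_ratio(temp_c, pressure_hpa=1000.0):
    """
    Approximate saturation mixing ratio (grams of water vapor per kg of
    air) using the Magnus formula for saturation vapor pressure - an
    empirical approximation good near surface temperatures.
    """
    e_s = 6.112 * np.exp(17.67 * temp_c / (temp_c + 243.5))  # hPa
    return 1000.0 * 0.622 * e_s / (pressure_hpa - e_s)       # g/kg

for t in (0, 10, 20):
    print(f"{t:>2}ºC: {saturation_mixing_ratio(t):.1f} g/kg")
# roughly 3.8, 7.7, 14.9 g/kg - the approximate doubling per 10ºC in the text

# And the Clausius-Clapeyron ~7% per ºC at current temperatures:
t = 15.0
growth = (saturation_mixing_ratio(t + 1) / saturation_mixing_ratio(t) - 1) * 100
print(f"Increase per ºC near {t}ºC: {growth:.1f}%")  # ~6-7%
```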

At least 99.9% of physicists believe the theory of gravity, and the heliocentric model of the solar system. The debate is over. There is no doubt that we can send a manned (and woman-ed) mission to Mars.

Some “skeptics” say it can’t be done. They are denying basic science! Gravity is plainly true. So is the heliocentric model. Everyone agrees. There is an overwhelming consensus. So the time for discussion is over. There is no doubt about the Mars mission.

I create this analogy (note 1) for people who don’t understand the relationship between five completely different ideas:

  • the “greenhouse” effect
  • burning fossil fuels adds CO2 to the atmosphere, increasing the “greenhouse” effect
  • climate models
  • crop models
  • economic models

The first two items on the list are fundamental physics and chemistry, and while the proofs are advanced (see The “Greenhouse” Effect Explained in Simple Terms for the first one) for people who want to work through them, they are indisputable. Together they create the theory of AGW (anthropogenic global warming). This says that humans are contributing to global warming by burning fossil fuels.

99.9% of people who understand atmospheric physics believe this unassailable idea (note 2).

This means that if we continue with “business as usual” (note 3) and keep using fossil fuels to generate energy, then by 2100 the world will be warmer than today.

How much warmer?

For that we need climate models.

Climate Models

These are models which break the earth’s surface, ocean and atmosphere into a big grid so that we can use physics equations (momentum, heat transfer and others) to calculate future climate (this class of model is called finite element analysis). These models include giant fudge-factors that can’t be validated (by giant fudge factors I mean “sub-grid parameterizations” and unknown parameters, but I’m writing this article for a non-technical audience).

One way to validate models is to model the temperature over the last 100 years. Another way is to produce a current climatology that matches observations. Generally temperature is the parameter with most attention (note 4).

Some climate models predict that if we double CO2 in the atmosphere (from pre-industrial periods) then surface temperature will be around 4.5ºC warmer. Others that the temperature will be 1.5ºC warmer. And everything in between.

Surely we can just look at which models reproduced the last 100 years temperature anomaly the best and work with those?

From Mauritsen et al 2012

If the model that predicts 1.5ºC in 2100 is close to the past, while the one that predicts 4.5ºC has a big overshoot, we will know that 1.5ºC is a more likely future. Conversely, if the model that predicts 4.5ºC in 2100 is close to the past but the 1.5ºC model woefully under-predicts the last 100 years of warming then we can expect more like 4.5ºC in 2100.

You would think so, but you would be wrong.

All the models get the last 100 years of temperature changes approximately correct. Jeffrey Kiehl produced a paper 10 years ago which analyzed the then current class of models and gently pointed out the reason. Models with large future warming included a high negative effect from aerosols over the last 100 years. Models with small future warming included a small negative effect from aerosols over the last 100 years. So both reproduced the past but with a completely different value of aerosol cooling. You might think we can just find out the actual cooling effect of aerosols around 1950 and then we will know which climate model to believe – but we can’t. We didn’t have satellites to measure the cooling effect of aerosols back then.

This is the challenge of models with many parameters that we don’t know. When a modeler is trying to reproduce the past, or the present, they pick the values of parameters which make the model match reality as best as they can. This is a necessary first step (note 5).
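
Kiehl’s point can be illustrated with a toy zero-dimensional energy balance model. This is entirely my own sketch with stylized forcing ramps – not Kiehl’s method or any real model or forcing history – but it shows how two very different combinations of climate sensitivity and aerosol cooling can reproduce almost the same historical warming:

```python
SECONDS_PER_YEAR = 3.15e7
C = 4.2e8  # heat capacity of ~100 m ocean mixed layer, J/(m^2 K)

def simulate(lam, aerosol_max, years=150):
    """
    Toy zero-dimensional energy balance model, C dT/dt = F(t) - lam*T,
    with stylized linear ramps: GHG forcing to +2.5 W/m^2 and aerosol
    forcing to aerosol_max W/m^2 over the simulated period.
    """
    T = 0.0
    for yr in range(years):
        forcing = (2.5 + aerosol_max) * (yr / years)  # net forcing ramp
        T += (forcing - lam * T) / C * SECONDS_PER_YEAR
    return T

# High sensitivity (ECS ~ 3.7/0.9 ~ 4 K) paired with strong aerosol cooling
print(f"High ECS, strong aerosols: {simulate(0.9, -1.5):.2f} K")
# Low sensitivity (ECS ~ 3.7/2.0 ~ 2 K) paired with weak aerosol cooling
print(f"Low ECS, weak aerosols:    {simulate(2.0, -0.5):.2f} K")
```

Both runs end up near 1ºC of warming, yet one implies roughly 4ºC for doubled CO2 and the other roughly 2ºC – you cannot tell them apart from the historical record alone.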

So how warm will it be in 2100 if we double CO2 in the atmosphere?

Somewhat warmer

Models also predict rainfall, drought and storms. But they aren’t as good at these as they are at temperature. Bray and von Storch survey climate scientists periodically on a number of topics. Here is their response to:

How would you rate the ability of regional climate models to make 50 year projections of convective rain storms/thunder storms? (1 = very poor to 7 = very good)

Similar ratings are obtained for rainfall predictions. The last 50 years has seen no apparent global worsening of storms, droughts and floods, at least according to the IPCC consensus (see Impacts – V – Climate change is already causing worsening storms, floods and droughts).

Sea level is expected to rise between around 0.3m to 0.6m (see Impacts – VI – Sea Level Rise 1 and IX – Sea Level 4 – Sinking Megacities) – this is from AR5 of the IPCC (under scenario RCP6). I mention this because the few people I’ve polled thought that sea level was expected to be 5-10m higher in 2100.

Actual reports with uneventful projections don’t generate headlines.

Crop Models

Crop models build on climate models. Once we know rainfall, drought and temperature we can work out how this impacts crops.

Will we starve to death? Or will there be plentiful food?

Past predictions of disaster haven’t been very accurate, although they are wildly popular for generating media headlines and book sales, as Paul Ehrlich found to his benefit. But that doesn’t mean future predictions of disaster are necessarily wrong.

There are a number of problems with trying to answer the question.

Even if climate models could predict the global temperature, when it comes to a region the size of, say, northern California their accuracy is much lower. Likewise for rainfall. Models which produce similar global temperature changes often have completely different regional precipitation changes. For example, from the IPCC Special Report on Extremes (SREX), p. 154:

At regional scales, there is little consensus in GCM projections regarding the sign of future change in monsoon characteristics, such as circulation and rainfall. For instance, while some models project an intense drying of the Sahel under a global warming scenario, others project an intensification of the rains, and some project more frequent extreme events..

In a warmer world with more CO2 (which helps some plants) and maybe more rainfall – or maybe less – what can we expect out of crop yields? It’s not clear. The IPCC AR5 wg II, ch 7, p 496:

For example, interactions among CO2 fertilization, temperature, soil nutrients, O3, pests, and weeds are not well understood (Soussana et al., 2010) and therefore most crop models do not include all of these effects.

Of course, as climate changes over the next 80 years agricultural scientists will grow different crops, and develop new ones. In 1900, almost half the US population worked in farming. Today the figure is 2-3%. Agriculture has changed unimaginably.

In the left half of this graph we can see global crop yield improvements over 50 years (the right side is projections to 2050):

From Ray et al 2013

Economic Models

What will the oil price be in 2020? Economic models give you the answer. Well, they give you an answer. And if you consult lots of models they give you lots of different answers. When the oil price changes a lot, which it does from time to time, all of the models turn out to be wrong. Predicting future prices of commodities is very hard, even when it is of paramount concern for major economies, and even when a company could make vast profits from accurate prediction.

AR5 of the IPCC report, wg 2, ch 7, p.512, had this to say about crop prices in 2050:

Changes in temperature and precipitation, without considering effects of CO2, will contribute to increased global food prices by 2050, with estimated increases ranging from 3 to 84% (medium confidence). Projections that include the effects of CO2 changes, but ignore O3 and pest and disease impacts, indicate that global price increases are about as likely as not, with a range of projected impacts from –30% to +45% by 2050..

..One lesson from recent model intercomparison experiments (Nelson et al., 2014) is that the choice of economic model matters at least as much as the climate or crop model for determining price response to climate change, indicating the critical role of economic uncertainties for projecting the magnitude of price impacts.

In 2001, the 3rd report (often called TAR) said, ch 5, p.238, perhaps a little more clearly:

..it should be noted however that hunger estimates are based on the assumptions that food prices will rise with climate change, which is highly uncertain

Economic models are not very good at predicting anything. As Herbert Stein said, summarizing a lifetime in economics:

  • Economists do not know very much
  • Other people, including the politicians who make economic policy, know even less about economics than economists do

Conclusion

Recently a group, Cook et al 2013, reviewed over 10,000 abstracts of climate papers and concluded that 97% believed in the proposition of AGW – the proposition that humans are contributing to global warming by burning fossil fuels. I’m sure if the question were posed the right way directly to thousands of climate scientists, the number would be over 99%.

It’s not in dispute.

AGW is a necessary theory for Catastrophic Anthropogenic Global Warming (CAGW). But not sufficient by itself.

Likewise we know for sure that gravity is real and the planets orbit the sun. But it doesn’t follow that we can get humans safely to Mars and back. Maybe we can. Understanding gravity and the heliocentric theory is a necessary condition for the mission, but a lot more needs to be demonstrated.

The uncertainties in CAGW are huge.

Economic models that have no predictive skill are built on limited crop models which are built on climate models which have a wide range of possible global temperatures and no consensus on regional rainfall.

Human ingenuity somehow solved the problem of going from 2.5bn people in the middle of the 20th century to more than 7bn people today, and yet the proportion of the global population in abject poverty (note 6) has dropped from over 40% to maybe 15%. This was probably unimaginable 70 years ago.

Perhaps reasonable people can question if climate change is definitely the greatest threat facing humanity?

Perhaps questioning the predictive power of economic models is not denying science?

Perhaps it is ok to be unsure about the predictive power of climate models that contain sub-grid parameterizations (giant fudge factors) and that collectively provide a wide range of forecasts?

Perhaps people who question the predictions aren’t denying basic (or advanced) science, and haven’t lost their reason or their moral compass?

—-

[Note to commenters, added minutes after this post was written – this article is not intended to restart debate over the “greenhouse” effect, please post your comments in one of the 10s (100s?) of articles that have covered that subject, for example – The “Greenhouse” Effect Explained in Simple Terms – Comments on the reality of the “greenhouse” effect posted here will be deleted. Thanks for understanding.]

References

Twentieth century climate model response and climate sensitivity, Jeffrey Kiehl (2007)

Tuning the climate of a global model, Mauritsen et al (2012)

Yield Trends Are Insufficient to Double Global Crop Production by 2050, Deepak K. Ray et al (2013)

Quantifying the consensus on anthropogenic global warming in the scientific literature, Cook et al, Environmental Research Letters (2013)

The Great Escape, Angus Deaton, Princeton University Press (2013)

The various IPCC reports cited are all available at their website

Notes

1. An analogy doesn’t prove anything. It is for illumination.

2. How much we have contributed to the last century’s warming is not clear. The 5th IPCC report (AR5) said it was 95% certain that more than 50% of recent warming was caused by human activity. Well, another chapter in the same report suggested that this was a bogus statistic and I agree, but that doesn’t mean I think that the percentage of warming caused by human activity is lower than 50%. I have no idea. It is difficult to assess, likely impossible. See Natural Variability and Chaos – Three – Attribution & Fingerprints for more.

3. Reports on future climate often come with the statement “under a conservative business as usual scenario” but refer to a speculative and hard to believe scenario called RCP8.5 – see Impacts – II – GHG Emissions Projections: SRES and RCP. I think RCP 6 is much closer to the world of 2100 if we do little about carbon emissions and the world continues on the kind of development pathways that we have seen over the last 60 years. RCP8.5 was a scenario created to match a possible amount of CO2 in the atmosphere and how we might get there. Calling it “a conservative business as usual case” is a value-judgement with no evidence.

4. More specifically the change in temperature gets the most attention. This is called the “temperature anomaly”. Many models that do “well” on temperature anomaly actually do quite badly on the actual surface temperature. See Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes – you can see that many “fit for purpose” models have current climate halfway to the last ice age even though they reproduce the last 100 years of temperature changes pretty well. That is, they model temperature changes quite well, but not temperature itself.

5. This is a reasonable approach used in modeling (not just climate modeling) – the necessary next step is to try to constrain the unknown parameters and giant fudge factors (sub-grid parameterizations). Climate scientists work very hard on this problem. Many confused people writing blogs think that climate modelers just pick the values they like, produce the model results and go have coffee. This is not the case, and can easily be seen by just reviewing lots of papers. The problem is well-understood among climate modelers. But the world is a massive place, detailed past measurements with sufficient accuracy are mostly lacking, and sub-grid parameterizations of non-linear processes are a very difficult challenge (this is one of the reasons why turbulent flow is a mostly unsolved problem).

6. This is a very imprecise term. I refer readers to the 2015 Nobel Prize winner Angus Deaton and his excellent book, The Great Escape (2013) for more.

In a few large companies I observed the same phenomenon – over here are corporate dreams and over there is reality. Team – your job is to move reality over to where corporate dreams are.

It wasn’t worded like that. Anyway, reality won each time. Reality is pretty stubborn. Of course, refusal to accept “reality” is what has created great inventions and companies. It’s not always clear what is reality and what is today’s lack of vision vs tomorrow’s idea that just needs lots of work to make a revolution. So ideas should be challenged to find “reality”. But reality itself is hard to change.

I started checking on Carbon Brief via my blog feed a few months back. It has some decent articles, although they are more “reporting press releases or executive summaries” than any critical analysis. But at least they lack hysterical headlines, and good vs evil doesn’t even appear in the subtext, which is refreshing. I’ve been too busy with other projects recently to devote any time to writing about climate science or impacts, but their article today – In-depth: How a smart flexible grid could save the UK £40bn – did inspire me to read one of the actual reports referenced. Part of the reason my interest was piqued was that I’ve seen many articles where “inflexible baseload” is compared with “smart decentralized grids” and “flexible systems”. All lovely words, which must mean they are better ways to create an electricity grid. A company I used to work for created a few products with “smart” in the name. All good marketing. But what about reality? Let’s have a look.

The report in question is An analysis of electricity system flexibility for Great Britain from November 2016 by Carbon Trust. The UK government has written into legislation a commitment to reduce carbon emissions to almost nothing by 2050, and so they need to get to work.

What is fascinating reading the report is that all of the points I made in previous articles in this series show up, but dressed up in a very positive way:

We’re choosing between all these great options on the best way to save money

For those who like a short story, I’ll rewrite that summary:

We’re choosing between all these expensive options trying to understand which one (or what mix) will be the least expensive. Unfortunately we don’t know but we need to start now because we’ve already committed to this huge carbon reduction by 2050. If we make a good pick then we’ll spend the least amount of money, but if we get it wrong we will be left with lots of negative outcomes and high costs for a long time

Well, when you pay for the report you should be allowed to get the window dressing that you like. That’s a minimum.

The imponderables are that wind power is intermittent (and there’s not much solar at high latitudes), so you have some difficult choices.

I’ll just again repeat something I’ve said a few times in this series. I’m not trying to knock renewable energy or decarbonizing energy. But solving a problem requires understanding the scale of the problem and especially the hardest challenges – before you start on the main project.

As a digression, there is a lovely irony about the use of the words “flexible” for renewable energy vs “inflexible” for conventional energy. Planning conventional energy grids is pretty easy – you can be very flexible because a) you have dispatchable power, and b) you can stick the next power station right next to the new demand as and when it appears. So the current system is incredibly flexible and you don’t need to be much of a crystal ball gazer. That said, it’s just my appreciation of irony and how I can’t help enjoying the excitement other people have in taking up inspirational words for ideas they like.. anyway, it has zero bearing on the difficult questions at hand.

As the article from Carbon Brief said, there’s £40bn of savings to be had. Here is the report:

The modelling for the analysis has shown that the deployment of flexibility technologies could save the UK energy system £17-40 billion cumulative to 2050 against a counterfactual where flexibility technologies are not available

Ok, so it’s not £40bn of savings. The modeling says getting it wrong will cost £40bn more than picking better options. Or if the technologies don’t appear then it will be more expensive..

What are these “flexible grid technologies”?

Demand Management

The first one is the effectively untested idea of demand management (see XVIII – Demand Management & Levelized Cost) which allows the grid operator to shift peoples’ demand to when supply is available. (Remember that the biggest current challenge of an electricity grid is that second by second and minute by minute the grid operators have to match supply with demand – this is a big challenge but has been conquered with dispatchable power and a variety of mechanisms for the different timescales). I say untested because only small-scale trials have been done with very mixed results, and some large-scale trials are needed. They will be expensive. As the report says:

Demand side response has a key role in providing flexibility but also has the greatest uncertainty in terms of cost and uptake

However, with a big enough stick you get the result you want. The question is how palatable that is to voters and what kind of stomach politicians have for voter unrest. For example, increase the cost of electricity to £100/kWhr when little is available. Once you hear that a few friends received a £10,000 bill that they can’t get out of and are being taken to court you will be running around the house turning everything off and paying close attention to the tariff changes. When the tariff soars, you are all sitting in your house in your winter coats (perhaps with a small bootleg butane heater) with the internet off, the TV off, the lights off and singing entertaining songs about your favorite politicians.

I present this not in parody, but just to demonstrate that it is completely possible to get demand management to work. Just need a strong group of principled politicians with the courage of their convictions and no fear of voters.. (yes, that last bit was parody, if you are a politician you have to be afraid of voters, it’s the job requirement).

So the challenge isn’t “the technology”, it’s the cost of rolling out the technology and how inflexible consumers are with their demand preferences. What is the elasticity of demand? What results will you get? And the timescale matters. If you need people to delay using energy by one hour, you get one result. If you need people to delay using energy by two days, you get a completely different result. There is no data on this.

Pick a few large cities, design the experiments, implement the technology and use it to test different time horizons in different weather over a two year period and see how well it works. This is an urgent task that a few countries should have seriously started years ago. Data is needed.

Storage

Table 26 in the appendices has some storage costs, which for bulk storage “Includes a basket of technologies such as pumped hydro and compressed air energy storage” and is costed in £/kW – with a range of about £700 – 1,700/kW ($900 – 2,200/kW). This is for a 12 hour duration – typical daily cycle. These increase somewhat over the time period in question (to 2050) as you might expect.

For distributed storage “Based on a basket of lithium ion battery technologies” ranges from £900 – 1,300/kW today falling to £400 – 900/kW by 2050. This is for a 2 hour duration (and a 5-year lifetime). Meaning that the cost per unit of energy stored is £450 – 650/kWhr today falling to £200 – 450/kWhr by 2050. So they don’t have super-optimistic cost reductions for storage.
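
As a quick check on that conversion (my own arithmetic, using the report’s figures quoted above):

```python
# Convert the report's £/kW storage costs into £/kWhr of energy stored,
# by dividing by the storage duration (2 hours for distributed batteries).
duration_hours = 2
for label, cost_per_kw in [("today", (900, 1300)), ("2050", (400, 900))]:
    low, high = (c / duration_hours for c in cost_per_kw)
    print(f"Distributed storage {label}: £{low:.0f} - {high:.0f}/kWhr")
# today: £450 - 650/kWhr; 2050: £200 - 450/kWhr - matching the text
```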

The storage calculations under various scenarios range from 10-20GW with a couple of outliers (5GW and 28GW).

My back of the envelope calculation says that if you can’t expand pumped hydro, don’t build your gas plants, and do need to rely on batteries, then for a 2-day wind hiatus and no demand management you would spend “quite a bit”. This is based on the expected energy use (below) of about 60GW = 2,880 GWhr for 48 hours. Converting to kWhr we get 2,880 x 10⁶ kWhr, and multiplying by the cost of £300/kWhr = £864bn every 5 years, or £170bn per year. UK GDP is about £2,000bn per year at the moment. This gives an idea of the cost of batteries when you want to back up power for a period of days.
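
Here is that calculation spelled out. The £300/kWhr is a mid-range figure from the distributed storage costs above; the 60GW average demand and 5-year battery lifetime are as stated in the text:

```python
average_demand_gw = 60   # rough 2050 average demand (see Expected Energy Use below)
hiatus_hours = 48        # a 2-day wind hiatus
cost_per_kwh = 300       # £/kWhr, mid-range distributed battery cost
battery_life_years = 5   # battery lifetime from the report

energy_gwh = average_demand_gw * hiatus_hours      # 2,880 GWhr
capital_cost = energy_gwh * 1e6 * cost_per_kwh     # GWhr -> kWhr, then £
print(f"Capital cost: £{capital_cost/1e9:,.0f}bn every {battery_life_years} years")
print(f"Per year:     £{capital_cost/battery_life_years/1e9:,.0f}bn (UK GDP ~£2,000bn)")
# ~£864bn every 5 years, ~£173bn per year
```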

Backup Plants

The backup gas plants show as around 20GW of CCGT and somewhere between 30-90GW of peaking plants added by 2050 (depending on the scenario). This makes sense. You need something less expensive than storage. It appears the constraint is the requirement to cut emissions so much that even running these plants as backup for low wind / no wind is a problem.

Expected Energy Use

The consumed electricity for 2020 is given (in the appendix) as 320-340 TWhr. Dividing by the number of hours in the year gives us the average output of 36-39 GW, which seems about right (recent figures from memory were about 30GW for the UK on average).

In 2050 the estimate is for 410-610 TWhr or an average of 47-70GW. This includes electric vehicles and heating – that is, all energy is coming from the grid – so on the surface it seems too low (current electricity usage is about 40% of total energy). Still, I’ve never tried to calculate it and they probably have some assumptions (not in this report) on improved energy efficiency.

Cost of Electricity in 2050 under These Various Scenarios

n/a

Conclusion

The key challenges for large-scale reductions in CO2 emissions haven’t changed. It is important to try and identify what future cost scenarios vs current plans will result in the most pain, but it’s clear that the important data to chart the right course is largely unknown. Luckily, report summaries can put some nice window-dressing on the problems.

As always with reports for public consumption the executive summary and the press release are best avoided. The chapters themselves and especially the appendices give some data that can be evaluated.

It’s clear that large-scale interconnectors across the country are needed to deliver power from places where high wind exists (e.g. west coast of Scotland) to demand locations (e.g. London). But it’s not clear that inter-connecting to Europe will solve many problems because most of northern and central Europe will be likewise looking for power when their wind output is low on a cold winter evening. Perhaps inter-connecting to further locations, as reviewed in XII – Windpower as Baseload and SuperGrids is an option, although this wasn’t reviewed in the paper.

It wasn’t clear to me from the report whether gas plants alone – without storage, demand management or importing large quantities of European electricity – would solve the problem, were it not for the very aggressive CO2 reduction targets. It sort of hinted that the constraint of CO2 emissions forced the gas plants into less and less backup use, even though their available capacity was still very high in 2050. Wind turbines plus interconnectors around the country plus gas plants are simple and relatively quantifiable (current gas plants aren’t really optimized for this kind of backup but it’s not peering into a crystal ball to make an intelligent estimate).

The cost of electricity in 2050 for these scenarios wasn’t given in this report.

Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases

Renewables VI – Report says.. 100% Renewables by 2030 or 2050

Renewables VII – Feasibility and Reality – Geothermal example

Renewables VIII – Transmission Costs And Outsourcing Renewable Generation

Renewables IX – Onshore Wind Costs

Renewables X – Nationalism vs Inter-Nationalism

Renewables XI – Cost of Gas Plants vs Wind Farms

Renewables XII – Windpower as Baseload and SuperGrids

Renewables XIII – One of Wind’s Hidden Costs

Renewables XIV – Minimized Cost of 99.9% Renewable Study

Renewables XV – Offshore Wind Costs

Renewables XVI – JP Morgan advises

Renewables XVII – Demand Management 1

Renewables XVIII – Demand Management & Levelized Cost

Renewables XIX – Behind the Executive Summary and Reality vs Dreams

A long time ago I wrote The Confirmation Bias – Or Why None of Us are Really Skeptics, with a small insight from Nassim Taleb. Right now I’m rereading The Righteous Mind: Why Good People are Divided by Politics and Religion by Jonathan Haidt.

This is truly a great book if you want to understand more about how we think and how we delude ourselves. Through experiments cognitive psychologists demonstrate that once our “moral machinery” has clicked in, which happens very easily, our reasoning is just an after-the-fact rationalization of what we already believe.

Haidt gives the analogy of a rider on an elephant. The elephant starts going one way rather than another, and the rider, unaware of why, starts coming up with invented reasons for the new direction. It’s like the rider is the PR guy for the elephant. In Haidt’s analogy, the rider is our reasoning, and the elephant is our moral machinery. The elephant is in charge. The rider thinks he is.

As an intuitionist, I’d say that the worship of reason is itself an illustration of one of the most long-lived delusions in Western history: the rationalist delusion..

..The French cognitive scientists Hugo Mercier and Dan Sperber recently reviewed the vast research literature on motivated reasoning (in social psychology) and on the biases and errors of reasoning (in cognitive psychology). They concluded that most of the bizarre and depressing research findings make perfect sense once you see reasoning as having evolved not to help us find truth but to help us engage in arguments, persuasion and manipulation in the context of discussions with other people.

As they put it, “skilled arguers ..are not after the truth but after arguments supporting their views.” This explains why the confirmation bias is so powerful and so ineradicable. How hard could it be to teach students to look on the other side, to look for evidence against their favored view? Yet it’s very hard, and nobody has yet found a way to do it. It’s hard because the confirmation bias is a built-in feature (of an argumentative mind), not a bug that can be removed (from a platonic mind)..

..In the same way, each individual reasoner is really good at one thing: finding evidence to support the position he or she already holds, usually for intuitive reasons..

..I have tried to make a reasoned case that our moral capacities are best described from an intuitionist perspective. I do not claim to have examined the question from all sides, nor to have offered irrefutable proof.

Because of the insurmountable power of the confirmation bias, counterarguments will have to be produced by those who disagree with me.

Haidt also highlights some research showing that more intelligence and education makes you better at generating more arguments for your side of the argument, but not for finding reasons on the other side. “Smart people make really good lawyers and press secretaries.. people invest their IQ in buttressing their own case rather than in exploring the entire issue more fully and evenhandedly.”

The whole book is very readable and full of studies and explanations.

If you fancy a bucket of ice cold water thrown over the rationalist delusion then this is a good way to get it.