
I was re-reading Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen and Bjorn Stevens from 2015 (because I referenced it in a recent comment) and then looked up other recent papers citing it. One interesting review paper is by Stevens et al from 2016. I recognized his name from many other papers and it looks like Bjorn Stevens has been publishing papers since the early 1990s, with almost 200 papers in peer-reviewed journals, mostly on this and related topics. Likewise, Sherwood and Bony (two of the coauthors) are very familiar names from this field.

Many regular readers (and I’m sure new readers of this blog) will understand much more than me about current controversies in climate sensitivity. The question in brief (of course there are many subtleties) – how much will the earth warm if we double CO2? It’s a very important question. As the authors explain at the start:

Nearly 40 years have passed since the U.S. National Academies issued the “Charney Report.” This landmark assessment popularized the concept of the “equilibrium climate sensitivity” (ECS), the increase of Earth’s globally and annually averaged near surface temperature that would follow a sustained doubling of atmospheric carbon dioxide relative to its preindustrial value. Through the application of physical reasoning applied to the analysis of output from a handful of relatively simple models of the climate system, Jule G. Charney and his co-authors estimated a range of 1.5–4.5 K for the ECS [Charney et al., 1979].

Charney is an eminent name you will know, along with Lorenz, if you read about the people who broke ground on numerical weather modeling. The authors explain a little about the definition of ECS:

ECS is an idealized but central measure of climate change, which gives specificity to the more general idea of Earth’s radiative response to warming. This specificity makes ECS something that is easy to grasp, if not to realize. For instance, the high heat capacity and vast carbon stores of the deep ocean mean that a new climate equilibrium would only be fully attained a few millennia after an applied forcing [Held et al., 2010; Winton et al., 2010; Li et al., 2012]; and uncertainties in the carbon cycle make it difficult to know what level of emissions is compatible with a doubling of the atmospheric CO2 concentration in the first place.

Concepts such as the “transient climate response” or the “transient climate response to cumulative carbon emissions” have been introduced to account for these effects and may be a better index of the warming that will occur within a century or two [Allen and Frame, 2007; Knutti and Hegerl, 2008; Collins et al., 2013; MacDougall, 2016].

But the ECS is strongly related and conceptually simpler, so it endures as the central measure of Earth’s susceptibility to forcing [Flato et al., 2013].

And about the implications of narrowing the range of ECS:

The socioeconomic value of better understanding the ECS is well documented. If the ECS were well below 1.5 K, climate change would be a less serious problem. The stakes are much higher for the upper bound. If the ECS were above 4.5 K, immediate and severe reductions of greenhouse gas emissions would be imperative to avoid dangerous climate changes within a few human generations.

From a mitigation point of view, the difference between an ECS of 1.5 K and 4.5 K corresponds to about a factor of two in the allowable CO2 emissions for a given temperature target [Stocker et al., 2013] and it explains why the value of learning more about the ECS has been appraised so highly [Cooke et al., 2013; Neubersch et al., 2014].

The ECS also gains importance because it conditions many other impacts of greenhouse gases, such as regional temperature and rainfall [Bony et al., 2013; Tebaldi and Arblaster, 2014], and even extremes [Seneviratne et al., 2016], knowledge of which is required for developing effective adaptation strategies. Being an important and simple measure of climate change, the ECS is something that climate science should and must be able to better understand and quantify more precisely.

One of the questions they raise gets to the heart of my own question: is climate sensitivity a constant that we can measure, a value with some durable meaning, or does it depend on the specifics of the climate state at the time? For example, there are attempts to measure it from the climate response during an El Nino – we see the climate warm and measure how the top-of-atmosphere radiation balance changes. Or we attempt to measure the difference in ocean temperature between the end of the last ice age and today and deduce climate sensitivity from that. Perhaps I have a mental picture of non-linear systems that is preventing me from seeing the obvious, but the picture in my head is that the dependence of the top-of-atmosphere radiation balance on temperature is not a constant.
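
To put my mental model in the standard linearized framing (the notation is mine, not taken from Stevens et al): the top-of-atmosphere imbalance is usually written as a forcing minus a feedback term, and the ECS follows from the feedback parameter. The “pattern effect” discussed below is then the statement that this feedback parameter is not a single constant.

```latex
% Standard linearized global energy budget (notation mine, not from the paper)
% N: top-of-atmosphere radiative imbalance (W m^-2)
% F: radiative forcing (W m^-2), \Delta T: global-mean surface warming (K)
% \lambda: net feedback parameter (W m^-2 K^-1)
N = F - \lambda\,\Delta T, \qquad \mathrm{ECS} = \frac{F_{2\times\mathrm{CO_2}}}{\lambda}
% A \lambda estimated from an El Nino, the instrumental record or the last ice age
% need not equal the \lambda that applies to a sustained CO2 doubling, because the
% geographic pattern of warming differs -- this is the "pattern effect".
```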

Here is their commentary. They use the term “pattern effect” for my mental model described above:

Hence, a generalization of the concept of climate sensitivity to different eras may need to account for differences that arise from the different base state of the climate system, increasingly so for large perturbations.

Even for small perturbations, there is mounting evidence that the outward radiation may be sensitive to the geographic pattern of surface temperature changes. Senior and Mitchell [2000] argued that if warming is greater over land, or at high latitudes, different feedbacks may occur than for the case where the same amount of warming is instead concentrated over tropical oceans.

These effects appear to be present in a range of models [Armour et al., 2013; Andrews et al., 2015]. Physically they can be understood because clouds—and their impact on radiation—are sensitive to changes in the atmospheric circulation, which responds to geographic differences in warming [Kang et al., 2013], or simply because an evolving pattern of surface warming weights local responses differently at different times [Armour et al., 2013].

Hence different patterns of warming, occurring on different timescales, may be associated with stronger or weaker radiative responses. This introduces an additional state dependence, one that is not encapsulated by the global mean temperature. We call this a “pattern effect.” Pattern effects are thought to be important for interpreting changes over the instrumental period [Gregory and Andrews, 2016], and may contribute to the state dependence of generalized measures of Earth’s climate sensitivity as inferred from the geological record.

Some of my thoughts are that the insoluble questions on this specific topic are also tied into the question about the climate being chaotic vs just weather being chaotic – see for example, Natural Variability and Chaos – Four – The Thirty Year Myth. In that article we look at the convention of defining climate as the average of 30 years of weather and why that “eliminates” chaos, or doesn’t. Non-linear systems have lots of intractable problems – more on that topic in the whole series Natural Variability and Chaos. It’s good to see it being mentioned in this paper.

Read the whole paper – it reviews the conditions necessary for very low climate sensitivity and for very high climate sensitivity, with the idea being that if one necessary condition can be ruled out then the very low and/or very high climate sensitivity can be ruled out. The paper also includes some excellent references for further insights.

From Stevens et al 2016

Click to enlarge

Happy Thanksgiving to our US readers.

References

Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen & Bjorn Stevens, Nature Geoscience (2015) – paywall paper

Prospects for narrowing bounds on Earth’s equilibrium climate sensitivity, Bjorn Stevens, Steven C Sherwood, Sandrine Bony & Mark J Webb, Earth’s Future (2016) – free paper

In Part Five – More on Tuning & the Magic Behind the Scenes and also in the earlier Part Four we looked at the challenge of selecting parameters in climate models. A recent 2017 paper on this topic by Frédéric Hourdin and colleagues is very illuminating. One of the co-authors is Thorsten Mauritsen, the principal author of the 2012 paper we reviewed in Part Four. Another co-author is Jean-Christophe Golaz, the principal author of the 2013 paper we reviewed in Part Five.

The topics are similar but there is some interesting additional detail and commentary. The paper is open and, as always, I recommend reading the whole paper.

One of the key points is that modeling groups need to be specific about their “target” – were they trying to get the model to match recent climatology? The top-of-atmosphere radiation balance? The last 100 years of temperature trends? If we know that a model was developed with an eye on a particular target then getting that target right doesn’t demonstrate model skill.

Because of the uncertainties in observations and in the model formulation, the possible parameter choices are numerous and will differ from one modeling group to another. These choices should be more often considered in model intercomparison studies. The diversity of tuning choices reflects the state of our current climate understanding, observation, and modeling. It is vital that this diversity be maintained. It is, however, important that groups better communicate their tuning strategy. In particular, when comparing models on a given metric, either for model assessment or for understanding of climate mechanisms, it is essential to know whether some models used this metric as tuning target.

They comment on the paper by Jeffrey Kiehl from 2007 (referenced in The Debate is Over – 99% of Scientists believe Gravity and the Heliocentric Solar System so therefore..) which showed how models with higher sensitivity to CO2 have higher counter-balancing negative forcing from aerosols.

And later in the paper:

The question of whether the twentieth-century warming should be considered a target of model development or an emergent property is polarizing the climate modeling community, with 35% of modelers stating that twentieth-century warming was rated very important to decisive, whereas 30% would not consider it at all during development.

Some view the temperature record as an independent evaluation dataset not to be used, while others view it as a valuable observational constraint on the model development. Likewise, opinions diverge as to which measures, either forcing or ECS, are legitimate means for improving the model match to observed warming.

The question of developing toward the twentieth-century warming therefore is an area of vigorous debate within the community..

..The fact that some models are explicitly, or implicitly, tuned to better match the twentieth-century warming, while others may not be, clearly complicates the interpretation of the results of combined model ensembles such as CMIP. The diversity of approaches is unavoidable as individual modeling centers pursue their model development to seek their specific scientific goals.

It is, however, essential that decisions affecting forcing or feedback made during model development be transparently documented.

And so, onto another recent paper, by Sumant Nigam and colleagues. They examine the temperature trends by season over the last 100 years and compare them against models. They look only at the northern hemisphere over land, due to the better temperature dataset available (compared with the southern hemisphere).

Here are the observations of the trends for each of the four seasons, I find it fascinating to see the difference between the seasonal trends:

From Nigam et al 2017

Figure 1 – Click to enlarge

Then they compare the observations to some of the models used in IPCC AR5 (from the model intercomparison project, CMIP5) – top line is observations, each line below is a different model. When we compare the geographical distribution of winter-summer trend (right column) we can see that the models don’t do very well:

From Nigam et al 2017

Figure 2 – Click to enlarge

From their conclusion:

The urgent need for shifting the evaluative and diagnostic focus away from the customary annual mean toward the seasonal cycle of secular warming is manifest in the inability of the leading climate models (whose simulations inform the IPCC’s Fifth Assessment Report) to generate realistic and robust (large signal-to noise ratio) twentieth-century winter and summer SAT trends over the northern continents. The large intra-ensemble SD of century-long SAT trends in some IPCC AR5 models (e.g., GFDL-CM3) moreover raises interesting questions: If this subset of climate models is realistic, especially in generation of ultra-low-frequency variability, is the century-long (1902–2014) linear trend in observed SAT—a one-member ensemble of the climate record—a reliable indicator of the secular warming signal?

I’ve commented a number of times in various articles – people who don’t read climate science papers often have some idea that climate scientists are monolithically opposed to questioning model results or questioning “the orthodoxy”. This is contrary to what you find if you read lots of papers. It might be that press releases that show up in The New York Times, CNN or the BBC (or pick another ideological bellwether) have some kind of monolithic sameness, but that just demonstrates that no one interested in finding out anything important (apart from the weather and celebrity news) should ever watch/read media outlets.

They continue:

The relative contribution of both mechanisms to the observed seasonality in century-long SAT trends needs further assessment because of uncertainties in the diagnosis of evapotranspiration and sea level pressure from the century-long observational records. Climate system models—ideal tools for investigation of mechanisms through controlled experimentation—are unfortunately not yet ready given their inability to simulate the seasonality of trends in historical simulations.

Subversive indeed.

Their investigation digs into evapotranspiration – the additional water made available by plants to be evaporated, which removes heat from the surface during the summer months.

Conclusion

“All models are wrong but some are useful” – a statement attributed to a modeler from a different profession (statistical process control) and sometimes also quoted by climate modelers.

This is always a good way to think about models. Perhaps the inability of climate models to reproduce seasonal trends is inconsequential – or perhaps it is important. Models fail on many levels. The question is why, and the answers lead to better models.

Climate science is a real science, contrary to the claims of many people who don’t read many climate science papers, because many published papers ask important and difficult questions and critique the current state of the science. That is, falsifiability is being addressed. These questions might not become media headlines, or even make it into the Summary for Policymakers in IPCC reports, but papers asking these questions are not outliers.

I found both of these papers very interesting. Hourdin et al because they ask valuable questions about how models are tuned, and Nigam et al because they point out that climate models do a poor job of reproducing an important climate trend (seasonal temperature) which provides an extra level of testing for climate models.

References

Striking Seasonality in the Secular Warming of the Northern Continents: Structure and Mechanisms, Sumant Nigam et al, Journal of Climate (2017)

The Art and Science of Climate Model Tuning, Frédéric Hourdin et al, American Meteorological Society (2017) – free paper

I’ve been digging through some statistics for my own benefit.

When you read or hear a statistic that country X is generating Y% of electricity via renewables it can sound wonderful, but the headline number can conceal or overstate useful progress. A few tips for readers new to the subject:

  • Energy is not electricity. So you need to know – were they quoting energy or electricity? For most developed nations, electricity accounts for around 40% of total energy.
  • “Renewables” includes two components that are important to separate out:
    • hydroelectric – this is “tapped out” in most developed countries. If the “share of renewables” is say 30%, but hydro is 20% (i.e. 2/3 of the total renewables) then the expandable renewables are only 10%. This can help you see recent progress and extrapolate to possible future progress (different story in developing countries, but there is often a large human cost to creating hydroelectric projects)
    • biomass – if you stop burning coal and you burn wood chip instead this tips the reporting scales from “the work of Satan” to “green and renewable”, even though burning wood chip generates more CO2 emissions per unit of electricity generated. Not all biomass is like this, but as a rule of thumb, put the biomass entry into the “more investigation needed” pile before declaring victory
  • Nameplate is not actual – if you have a gas plant (designed to run all the time) the actual output will be about 90% or more of the nameplate (the maximum output under normal conditions), but if you have a wind farm the actual output across a year will be about 20% of the nameplate in Germany, 30% in Ireland and over 40% in Oklahoma. So if you read that “10GW of wind power” was added to Germany’s generating capacity you need to mentally convert that to about 2GW. Similar story for solar – there is a conversion factor (see the sketch just below this list).
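
As a worked example of the nameplate point above – a minimal sketch, where the capacity factors are just the rough rules of thumb quoted in the list, not measured data:

```python
# Convert nameplate capacity to expected average output using a capacity factor.
# The capacity factors are the rough rules of thumb quoted above, not measurements.

def average_output_gw(nameplate_gw: float, capacity_factor: float) -> float:
    """Average output (GW) = nameplate (GW) x capacity factor (0..1)."""
    return nameplate_gw * capacity_factor

CAPACITY_FACTORS = {
    "gas plant (run all the time)": 0.90,
    "wind, Germany": 0.20,
    "wind, Ireland": 0.30,
    "wind, Oklahoma": 0.40,
}

if __name__ == "__main__":
    nameplate = 10.0  # "10 GW of wind power added"
    for source, cf in CAPACITY_FACTORS.items():
        avg = average_output_gw(nameplate, cf)
        energy_twh = avg * 8760 / 1000  # average GW x hours in a year -> TWh per year
        print(f"{source:28s}: {avg:4.1f} GW average, ~{energy_twh:5.1f} TWh/year")
```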

If you mentally take account of these points when you hear an update, you will be among the 1% of journalists who could pass the literacy test on the progress of renewables. It’s an elite club.

Once again I’ll state that I’m not trying to knock renewables; I’m trying to promote “literacy”. Instead of hapless cheerleaders, think informed citizens..

So, onto recent data.

I’m using two stalwarts of energy reporting: IEA and BP.

IEA produce data to 2015 and quote useful units like electricity consumed in TWh. This is a unit of energy – a TWh is a billion kWh. You find kWh on your electricity bill.

BP produce data to 2016 – which is better – and break down renewables much better, but quote units of Mtoe – millions of tonnes of oil equivalent. If you delve into energy industry reports, you often find mixed together in one report: kWh/TWh (energy), GJ (energy), GW (power), tcf (volume of gas), barrels of oil, mmBtu (energy in obscure British units)..

In the case of the BP report it’s not clear to me how to convert from Mtoe to GWh – they do provide a footnote but when I do the conversion I can’t reconcile the numbers using their footnote. No doubt one of our readers has gone down this rabbit hole and can illuminate us all (?). In the meantime, I took the BP numbers in Mtoe and looked up IEA % values for 2016 in TWh and worked out a conversion factor – multiply Mtoe by 0.0045. Then cross-checked with Fraunhofer ISE for Germany. This allows us to see the BP 2016 renewables breakdown in real electricity units rather than in mythical barrels of oil.
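
For anyone who wants to try the reconciliation themselves, here is a minimal sketch of the unit arithmetic. The 11.63 TWh per Mtoe figure is the standard thermal-equivalence factor; the ~38% conversion efficiency is my assumption about how BP converts non-fossil electricity into Mtoe, so check it against the report’s own footnote rather than trusting it:

```python
# Reconciling BP figures (Mtoe) with electricity units (TWh).
# 1 Mtoe = 11.63 TWh is the standard thermal-equivalence factor.
# ASSUMPTION: BP reports non-fossil electricity on an "input-equivalent" basis,
# i.e. the fuel a ~38% efficient thermal plant would have burned. Verify against
# the footnote in the BP report before relying on this.

MTOE_TO_TWH_THERMAL = 11.63   # heat content of one Mtoe, in TWh
ASSUMED_EFFICIENCY = 0.38     # notional thermal-plant efficiency (assumption)

def mtoe_to_twh_electric(mtoe: float, efficiency: float = ASSUMED_EFFICIENCY) -> float:
    """Back out TWh of electricity from an input-equivalent Mtoe figure."""
    return mtoe * MTOE_TO_TWH_THERMAL * efficiency

if __name__ == "__main__":
    # Hypothetical example: 20 Mtoe of reported wind generation
    print(f"20 Mtoe (input-equivalent) ~ {mtoe_to_twh_electric(20.0):.0f} TWh of electricity")
```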

Another note – I’m not trying to generate exact figures. Every source has different values. Reconciling them is a big undertaking and very uninteresting work. I’m simply trying to get some perspective on actual renewables progress.

I don’t quote nuclear energy statistics in this article. It’s very low carbon emission, but not exactly “renewable”. The real reason for not including the numbers is that most developed countries are not significantly expanding their nuclear generation, and in Germany’s case are shutting it down. China is a different story, with a big nuclear expansion ongoing.

Germany v US

You would think that Germany, one of the leading lights in renewable energy, would be greatly outperforming the US on CO2 emissions reduction.

  • 2005–2015 German CO2 reduction = 0.9% p.a.
  • 2005–2015 US CO2 reduction = 1.1% p.a.

Over that time period the German population has stayed the same, while the US population has grown by about 9%, so we can adjust the US reduction to about 2% p.a. on a per capita basis.
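
The per-capita adjustment is just compound arithmetic; roughly, using the figures above:

```latex
% Rough per-capita arithmetic for the US, 2005-2015 (figures from the text)
\text{total emissions: } (1 - 0.011)^{10} \approx 0.90 \quad (\text{about a } 10\% \text{ fall})
\text{population: } \times\, 1.09
\text{per-capita emissions: } 0.90 / 1.09 \approx 0.82 \;\Rightarrow\; \text{about } 2\%\ \text{p.a. decline}
```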

Now the US emissions peaked in 2005. You actually don’t need to read a report to find that out because when the US commitment to reducing CO2 emissions was announced in Paris in 2015 the commitment was a reduction “from 2005”. Being cynical about politicians never loses, and sure enough (when checking data in a report) the peak was 2005 – and the reduction from 2005 to 2015 was already about 12%.

Germany’s emissions peaked in 1990, so I believe their commitment is always referenced to 1990. The story I haven’t verified is that after the collapse of the Soviet Union and the re-unification of Germany, lots of dirty heavy industry shut down and this was a big help in emissions reductions.

The US reduction looks to be – in part – due to the embrace of natural gas due to its recent very low cost (gas produces about half the CO2 of coal for the same electricity production). This is a result of the current revolution in “unconventional gas”.

When we look at CO2 emissions per kWh in 2016 the story is also surprising:

  • Germany – 1.3 kg CO2/kWh
  • US – 1.2 kg CO2/kWh

So this tells us that the GHG efficiency of electricity generation is effectively the same in both countries, slightly better in the US.

When we look at total usage (across all electricity generation, including industry) the story is what we might expect:

  • Germany – 19 kWh per person per day
  • US – 35 kWh per person per day

This tells us that the US uses almost double the electricity per person.

Changes in Renewables

I looked up a few other countries – Denmark, the UK and Spain because they have a big push into renewables; and China to contrast a rapidly developing country. The last column in the table, Total Produced, is total electricity produced from all sources, including fossil fuels and nuclear.

From BP data

The IEA values (not shown) give lower total electricity for each country. The BP figures are electricity produced and IEA figures are electricity consumed. The solar + wind value for Germany in 2016 moves from 18% to 20% of total if I use the lower IEA total.

I also looked up electricity prices in the IEA report and while I have values for 2016, I don’t have comparable values for 2006. I couldn’t find the 2006 or 2007 version of the report. Based on a variety of websites all using different methods, quoting in different currencies and from unverified sources (so not reliable) the average consumer price in Germany has gone from about 19c/kWh to 33c/kWh from 2006-2016 (US$). The US looks almost flat, perhaps from 12 to 12.5c/kWh. UK from 14 to 21c/kWh. The IEA report didn’t give a figure for Denmark.

So Germany produces about 18% of its electricity from solar + wind. Its total renewables are 30% if we include biomass, and about 21% if we don’t. As I mentioned at the start, biomass sometimes includes burning “renewable” wood chip instead of fossil fuels. Biomass is a (big) subject for another day with numerous problems and I haven’t looked at the breakdown.

The Denmark figure for total electricity is probably quite misleading – see the huge reduction in electricity production from 46 TWh to 30 TWh over 10 years. On Wikipedia someone has provided a better breakdown, showing consumption as well; consumption has dropped by just 4% over that time. Also, 2006 appears to be a big outlier in electricity production. Denmark is a country connected to neighboring grids and generating lots of wind energy. So Denmark’s 2006 real figure for wind was about 20% of total consumed (not 14%) and has gone up to 43% over 10 years. On this basis Denmark could be at 80% of electricity generation by wind in 2035.

Confusion

When looking for electricity price changes, here was a random site I came across, Economists at Large:

By June 16 this year electricity generated from solar and wind power accounted for a record 61% of total electricity generated in Germany.

The actual figure for 2016 is about 18%.

If I went looking I’m sure I could find lots of sites, including “reputable” media outlets, with wide ranges of inflated figures. It’s very easy to generate confusion – quote a peak daytime value like this “Germany’s renewable output was …%… on May 28th at 1:15pm” and wait for the recyclers of mush (this includes “reputable” media outlets) to propagate it in a new way. Or quote growth figures – as in how much has been added this year. Or quote capacity added, and rely on the fact that no one understands that 10GW of wind farm only generates about 2GW of output on average in Germany. And so on.

I realize young people may expect media outlets to “fact check” but that is not their job. Their job is to generate headlines and have their stories quoted more widely.

Also, if you pay zero for your electricity because you have solar power you might think that you are generating all of your own electricity. Most of the time you would be wrong. Various governments have guaranteed feed-in tariffs for rooftop solar at well above market price, so the exports you are credited for in the middle of the day offset the grid electricity you still draw in the evening and overnight.

Basic energy literacy means understanding the difference between these items.

Conclusion

I was just trying to find the core statistics for my own understanding and was especially interested in Germany.

For Germany, we could look at the 3.5x increase in solar + wind in a decade and say “amazing”. Alternatively, we could note that going from 5% to 18% of total electricity generation in 10 years means that getting to 80% of electricity production will take another 40-50 years at the same rate, and say “disappointing”.
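
The “40-50 years” is just a linear extrapolation of the last decade’s rate:

```latex
% Linear extrapolation of Germany's solar + wind share (illustrative only)
\text{rate} \approx \frac{18\% - 5\%}{10\ \text{yr}} = 1.3\ \text{percentage points per year}
\qquad
\text{time from } 18\% \text{ to } 80\% \approx \frac{80 - 18}{1.3} \approx 48\ \text{years}
```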

Remember that electricity is only about 40% of energy use in most developed countries. Therefore, if you want to decarbonize the whole economy you also have to boost your electricity supply by 2.5x and switch over heating, transport, etc to electric supply.

There are currently issues with increasing “non-synchronous” generation beyond a certain point (see V – Grid Stability As Wind Power Penetration Increases). If you read spruiking websites you will find two common suggestions: first, “people said we couldn’t get past 10% and now we’re already at 20%”, and second, “look at Denmark”. If you like happy stories probably skip the rest of this section..

The most helpful textbook I found on the topic was Renewable Electricity and the Grid: The Challenge of Variability, written by people who are trying to do it. Long story short, integrating wind energy is very easy at the start, and up to about 20% of total supply on average it doesn’t seem to present a problem. Above 20% there are questions and uncertainties. These are electricity generation and grid experts contributing to the various chapters.

The key point is that grid stability can come from who you are connected to and how.

Denmark, while a country, is really just the size of a large city (population 6M) connected to the rest of Europe, and this connection provides its grid stability. Denmark produced 43% of its electricity from wind in 2016, but this is a much lower percentage of the grid to which it is connected. The question is not “can one small country connected to nearby large countries produce 80% of its electricity from wind?” but instead “can the interconnected grid produce 80% from wind?” The answer to the first question is of course yes. The other countries provide grid stability to Denmark. When all the surrounding countries are producing wind energy at 80% of the total interconnected grid it will be a different story.

However, this is not some fundamental physics problem, it’s an engineering problem that I’m sure can be solved. I haven’t dug in much beyond the references in Part V (referenced above) so I don’t know what issues and costs are involved.

Other Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases

Renewables VI – Report says.. 100% Renewables by 2030 or 2050

Renewables VII – Feasibility and Reality – Geothermal example

Renewables VIII – Transmission Costs And Outsourcing Renewable Generation

Renewables IX – Onshore Wind Costs

Renewables X – Nationalism vs Inter-Nationalism

Renewables XI – Cost of Gas Plants vs Wind Farms

Renewables XII – Windpower as Baseload and SuperGrids

Renewables XIII – One of Wind’s Hidden Costs

Renewables XIV – Minimized Cost of 99.9% Renewable Study

Renewables XV – Offshore Wind Costs

Renewables XVI – JP Morgan advises

Renewables XVII – Demand Management 1

Renewables XVIII – Demand Management & Levelized Cost

Renewables XIX – Behind the Executive Summary and Reality vs Dreams

References

BP Statistical Review of World Energy June 2017

BP Statistical Review of World Energy June 2017 – Renewables Appendices (this is a separate pdf)

IEA Key world energy statistics 2017

Renewable Electricity and the Grid: The Challenge of Variability, Godfrey Boyle, Earthscan (2007)

 

Over in another article, a commenter claims:

..Catastrophic predictions depend on accelerated forcings due to water vapour feedback. This water vapour feedback is simply written into climate models as parameters. It is not derived from any kind simulation of first principles in the General Circulation Model runs (GCMs)..

[Emphasis added]

I’ve seen this article of faith a lot. If you frequent fantasy climate blogs where people learn first principles and modeling basics from comments by other equally well-educated commenters this is the kind of contribution you will be able to make after years of study.

None of us knowed nothing, so we all sat around and teached each other.

Actually how the atmospheric section of climate models works is pretty simple in principle. The atmosphere is divided up into a set of blocks (a grid) with each block having dimensions something like 200 km x 200 km x 500 m high. The values vary a lot and depend on the resolution of the model; this is just to give you an idea.

Then each block has an E-W wind; a N-S wind; a vertical velocity; temperature; pressure; the concentrations of CO2, water vapor, methane; cloud fractions, and so on.

Then the model “steps forward in time” and uses equations to calculate the new values of each item.

The earth is spinning, and conservation of momentum, heat and mass is applied to each block. The principles of radiation through each block in each direction apply via parameterizations (note 1).

Specifically on water vapor – the change in mass of water vapor in each block is calculated from the amount of water evaporated, the amount of water vapor condensed, the amount of rainfall taking water out of the block, and the movement of air via the E-W, N-S and up/down winds. The final amount of water vapor in each time step affects the radiation emitted upwards and downwards.
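
To make the bookkeeping concrete, here is a toy sketch of that water vapor budget for a single grid block over one time step. This is my illustration of the principle only – it is not code from any actual GCM, which would use proper advection schemes, cloud microphysics and much else:

```python
# Toy water vapor budget for one grid block over one model time step.
# Illustration of the principle only -- NOT taken from any actual GCM.

def step_water_vapor(q: float,             # water vapor in the block (kg)
                     evaporation: float,   # water evaporated into the block this step (kg)
                     condensation: float,  # vapor condensed out (to cloud and rain) this step (kg)
                     advection_in: float,  # vapor carried in by the E-W/N-S/vertical winds (kg)
                     advection_out: float  # vapor carried out by the winds (kg)
                     ) -> float:
    """Conservation of mass for water vapor in one block over one time step."""
    return q + evaporation - condensation + advection_in - advection_out

if __name__ == "__main__":
    q = 1.0e9  # made-up starting vapor mass for the block (kg)
    q = step_water_vapor(q, evaporation=2.0e7, condensation=1.5e7,
                         advection_in=5.0e6, advection_out=4.0e6)
    # The updated q then feeds the radiation scheme: more vapor in this block means
    # more longwave absorption and emission, both upwards and downwards.
    print(f"Water vapor after one step: {q:.3e} kg")
```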

It’s more involved and you can read whole books on the subject.

I doubt that anyone who has troubled themselves to read even one paper on climate modeling basics could reach the conclusion so firmly believed in fantasy climate blogs and repeated above. If you never need to provide evidence for your claims..

For this blog we do like to see proof of claims, so please take a read of Description of the NCAR Community Atmosphere Model (CAM 4.0) and just show where this water vapor feedback is written in. Or pick another climate model used by a climate modeling group.

This is the kind of exciting stuff you find in the 200+ pages of an atmospheric model description:

From CAM4 Technical Note

You can also find details of the shortwave and longwave radiation parameterization schemes and how they apply to water vapor.

Here is a quote from The Global Circulation of the Atmosphere (ref below):

Essentially all GCMs yield water vapor feedback consistent with that which would result from holding relative humidity approximately fixed as climate changes. This is an emergent property of the simulated climate system; fixed relative humidity is not in any way built into the model physics, and the models offer ample means by which relative humidity could change.

From Water Vapor Feedback and Global Warming, a paper well worth reading for anyone who wants to understand this key question in climate:

Water vapor is the dominant greenhouse gas, the most important gaseous source of infrared opacity in the atmosphere. As the concentrations of other greenhouse gases, particularly carbon dioxide, increase because of human activity, it is centrally important to predict how the water vapor distribution will be affected. To the extent that water vapor concentrations increase in a warmer world, the climatic effects of the other greenhouse gases will be amplified. Models of the Earth’s climate indicate that this is an important positive feedback that increases the sensitivity of surface temperatures to carbon dioxide by nearly a factor of two when considered in isolation from other feedbacks, and possibly by as much as a factor of three or more when interactions with other feedbacks are considered. Critics of this consensus have attempted to provide reasons why modeling results are overestimating the strength of this feedback..

Remember, just a few years of study at fantasy climate blogs can save an hour or more of reading papers on atmospheric physics.

References

Description of the NCAR Community Atmosphere Model (CAM 4.0) – free paper

On the Relative Humidity of the Atmosphere, Chapter 6 of The Global Circulation of the Atmosphere, edited by Tapio Schneider & Adam Sobel, Princeton University Press (2007)

Water Vapor Feedback and Global Warming, Held & Soden, Annu. Rev. Energy Environ (2000) – free paper

Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006)

Notes

Note 1: The very accurate calculation of radiation transfer is done via line by line calculations but they are computationally very expensive and so a simpler approximation is used in GCMs. Of course there are many studies comparing parameterizations vs line by line calculations. One example is Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006).

Two Basic Foundations

This article is a placeholder to filter out a select group of people: the many who arrive and confidently explain that atmospheric physics is fatally flawed (without the benefit of having read a textbook). They don’t think they are confused; in their minds they are helpfully explaining why the standard theory is wrong. There have been a lot of such people.

Almost none of them ever provides an equation. If on rare occasions they do provide a random equation, they never explain what is wrong with the 65-year-old equation of radiative transfer (explained by Nobel prize winner Subrahmanyan Chandrasekhar, see note 1), which is derived from fundamental physics. Nor do they offer an explanation for why observation matches the standard theory. For example (and I have lots of others), here is a graph produced nearly 50 years ago (referenced almost 30 years ago) of the observed spectrum at the top of atmosphere vs the calculated spectrum from the standard theory.

Why is it so accurate?

From Atmospheric Radiation, Goody (1989)

If it were me, and I thought the theory was wrong, I would read a textbook and try to explain why the textbook was wrong. But I’m old school and generally expect physics textbooks to be correct, short of some major revolution. Conventionally, when you “prove” textbook theory wrong you are expected to explain why everyone got it wrong before.

There is a simple reason why our many confident visitors never do that. They don’t know anything about the basic theory. Entertaining as that is, and I’ll be the first to admit that it has been highly entertaining, it’s time to prune comments from overconfident and confused visitors.

I am not trying to push away people with questions. If you have questions please ask. This article is just intended to limit the tsunami of comments from visitors with their overconfident non-textbook understanding of physics – that have often dominated comment threads. 

So here are my two questions for the many visitors with huge confidence in their physics knowledge. Dodging isn’t an option. You can say “not correct” and explain your alternative formulation with evidence, but you can’t dodge.

Answer these two questions:

1. Is the equation of radiative transfer correct or not?

Iλ(0) = Iλ(τm)·e^(−τm) + ∫ Bλ(T)·e^(−τ) dτ, with the integral taken from τ = 0 to τm   [16]

The intensity at the top of atmosphere equals the surface radiation attenuated by the transmittance of the atmosphere, plus the sum of all the contributions of atmospheric radiation – each contribution attenuated by the transmittance from its location to the top of atmosphere.

Of course (and I’m sure I don’t even need to spell it out) we need to integrate across all wavelengths, λ, to get the flux value.

For the derivation see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. If you don’t agree it is correct then explain why.

[Note that other articles explain the basics. For example – The “Greenhouse” Effect Explained in Simple Terms, which has many links to other in depth articles].

If you don’t understand the equation you don’t understand the core of radiative atmospheric physics.
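
For anyone who wants to see that there is nothing mysterious in it, here is a minimal numerical sketch of the equation at a single wavelength: split the atmosphere into layers, attenuate the surface term by the total transmittance, and add each layer’s Planck emission attenuated by the transmittance above it. The temperature profile and the total optical depth are made-up values, purely for illustration:

```python
# Minimal numerical sketch of the (monochromatic) equation of radiative transfer:
#   I(0) = I(tau_m) * exp(-tau_m) + integral of B(T) * exp(-tau) dtau
# Layer temperatures and total optical depth below are made-up illustrative values.
import math

def planck(wavelength_m: float, temperature_k: float) -> float:
    """Planck spectral radiance B_lambda(T) in W / (m^2 sr m)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wavelength_m**5) / (
        math.exp(h * c / (wavelength_m * k * temperature_k)) - 1)

def toa_intensity(wavelength_m, t_surface, layer_temps, tau_total):
    """Nadir intensity at the top of atmosphere, no scattering, one wavelength."""
    d_tau = tau_total / len(layer_temps)
    # Surface emission attenuated by the transmittance of the whole atmosphere:
    intensity = planck(wavelength_m, t_surface) * math.exp(-tau_total)
    # Each layer's emission, attenuated by the optical depth between it and the top
    # (layer_temps[0] is the topmost layer; tau is measured downward from the top):
    for i, t_layer in enumerate(layer_temps):
        tau_above = d_tau * (i + 0.5)
        intensity += planck(wavelength_m, t_layer) * math.exp(-tau_above) * d_tau
    return intensity

if __name__ == "__main__":
    wavelength = 15e-6                         # 15 micron, in the CO2 band
    layers = [220 + 7 * i for i in range(10)]  # made-up temperatures, cold top to warm bottom
    result = toa_intensity(wavelength, 288.0, layers, tau_total=3.0)
    print(f"TOA intensity: {result:.3e} W/(m^2 sr m)")
```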

—-

2. Is this graphic with explanation from an undergraduate heat transfer textbook (Fundamentals of Heat and Mass Transfer, 6th edition, Incropera and DeWitt 2007) correct or not?

From "Fundamentals of Heat and Mass Transfer, 6th edition", Incropera and DeWitt (2007)

From “Fundamentals of Heat and Mass Transfer, 6th edition”, Incropera and DeWitt (2007)

You can see that radiation is emitted from a hot surface and absorbed by a cool surface. And that radiation is emitted from a cool surface and absorbed by a hot surface. More examples of this principle, including equations, in Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics – scanned pages from six undergraduate heat transfer textbooks (seven textbooks if we include the one added in comments after entertaining commenter Bryan suggested the first six were “cherry-picked” and offered his preferred textbook which had exactly the same equations).

—-

What I will be doing for the subset of new visitors with their amazing and confident insights is to send them to this article and ask for answers. In the past I have never been able to get a single member of this group to commit. The reason why is obvious.

But – if you don’t answer, your comments may never be published.

Once again, this is not designed to stop regular visitors asking questions. Most people interested in climate don’t understand equations, calculus, radiative physics or thermodynamics – and that is totally fine.

Call it censorship if it makes you sleep better at night.

Notes

Note 1 – I believe the theory is older than Chandrasekhar but I don’t have older references. It derives from basic emission (Planck), absorption (Beer-Lambert) and the first law of thermodynamics. Chandrasekhar published this in his 1952 book Radiative Transfer (the link is the 1960 reprint). This isn’t the “argument from authority”; I’m just pointing out that the theory has been long established. Punters are welcome to try to prove it wrong – it’s just that no one ever does.

In recent articles we have looked at rainfall and there is still more to discuss. This article changes tack to look at tropical cyclones, prompted by the recent US landfall of Harvey and Irma along with questions from readers about attribution and the future.

It might be surprising to find the following statement from leading climate scientists (Kevin Walsh and many co-authors in 2015):

At present, there is no climate theory that can predict the formation rate of tropical cyclones from the mean climate state.

The subject gets a little involved so let’s dig into a few papers. First from Gabriel Vecchi and some co-authors in 2008 in the journal Science. The paper is very brief and essentially raises one question – has the recent rise in total Atlantic cyclone intensity been a result of increases in absolute sea surface temperature (SST) or relative sea surface temperature:

From Vecchi et al 2008

Figure 1

The top graph (above) shows a correlation of 0.79 between SST and PDI (power dissipation index). The bottom graph shows a correlation of 0.79 between relative SST (local sea surface temperature minus the average tropical sea surface temperature) and PDI.

With more CO2 in the atmosphere from burning fossil fuels we expect a warmer SST in the tropical Atlantic in 2100 than today. But we don’t expect the tropical Atlantic to warm faster than the tropics in general.

If cyclone intensity depends on local (absolute) SST we expect more cyclones, or more powerful cyclones. If cyclone intensity depends on relative SST we expect no increase, because climate models predict warmer SSTs in the future but not a tropical Atlantic that warms faster than the tropics as a whole. The paper also shows a few high resolution models – green symbols – sitting close to the zero change line.

Now predicting tropical cyclones with GCMs has a fundamental issue – the grid scale of a modern high-resolution GCM is around 100 km, but cyclone simulation requires higher resolution because of the relatively small size of these storms.

Thomas Knutson and co-authors (including the great Isaac Held) produced a 2007 paper with an interesting method (of course, the idea is not at all new). They input actual meteorological data (i.e. real history from NCEP reanalysis) into a high resolution model which covered just the Atlantic region. Their aim was to see how well this model could reproduce tropical storms. There are some technicalities to the model – the output is constantly “nudged” back towards the actual climatology and out at the boundaries of the model we can’t expect good simulation results. The model resolution is 18km.
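
“Nudging” here means Newtonian relaxation: at each time step an extra term pulls the model state back toward the reanalysis on some chosen relaxation timescale. A toy sketch of the idea – my illustration, not the actual scheme used in the paper:

```python
# Toy Newtonian relaxation ("nudging"): at each step the model state is pulled
# back toward a reference (reanalysis) state on a relaxation timescale tau.
# My illustration of the idea only -- not the scheme actually used in Knutson et al 2007.

def nudged_step(x: float, x_reference: float, model_tendency: float,
                dt: float, tau: float) -> float:
    """One time step: the model's own tendency plus a relaxation term (x_ref - x)/tau."""
    return x + dt * (model_tendency + (x_reference - x) / tau)

if __name__ == "__main__":
    x, x_ref = 300.0, 295.0          # e.g. a temperature field the model runs too warm
    for _ in range(48):              # two days of hourly steps
        x = nudged_step(x, x_ref, model_tendency=0.0, dt=3600.0, tau=24 * 3600.0)
    print(f"After two days of nudging: {x:.2f} (reference value {x_ref})")
```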

The main question addressed here is the following: Assuming one has essentially perfect knowledge of large-scale atmospheric conditions in the Atlantic over time, how well can one then simulate past variations in Atlantic hurricane activity using a dynamical model?

They comment that the cause of the recent (at that time) upswing in hurricane activity “remains unresolved”. (Of course, fast forward to 2016, prior to the recent two large landfall hurricanes, and the overall activity is at a 1970 low. In early 2018, this may be revised again..).

Two interesting graphs emerge. First an excellent match between model and observations for overall frequency year on year:

From Knutson et al 2007

Figure 2

Second, an inability to predict the most intense hurricanes. The black dots are observations, the red dots are simulations from the model. The vertical axis, a little difficult to read, is SLP, or sea level pressure:

From Knutson et al 2007

Figure 3

These results are a common theme of many papers – inputting the historical climatological data into a model we can get some decent results on year to year variation in tropical cyclones. But models under-predict the most intense cyclones (hurricanes).

Here is Morris Bender and co-authors (including Thomas Knutson, Gabriel Vecchi – a frequent author or co-author in this genre, and of course Isaac Held) from 2010:

Some statistical analyses suggest a link between warmer Atlantic SSTs and increased hurricane activity, although other studies contend that the spatial structure of the SST change may be a more important control on tropical cyclone frequency and intensity. A few studies suggest that greenhouse warming has already produced a substantial rise in Atlantic tropical cyclone activity, but others question that conclusion.

This is a very typical introduction in papers on this topic. I note in passing this is a huge blow to the idea that climate scientists only ever introduce more certainty and alarm on the harm from future CO2 emissions. They don’t. However, it is also true that some climate scientists believe that recent events have been accentuated due to the last century of fossil fuel burning and these perspectives might be reported in the media. I try to ignore the media and that is my recommendation to readers on just about all subjects except essential ones like the weather and celebrity news.

This paper used a weather prediction model starting a few days before each storm to predict the outcome. If you understand the idea behind Knutson 2007 then this is just one step further – a few days prior to the emergence of an intense storm, input the actual climate data into a high resolution model and see how well the high res model predicts the observations. They also used projected future climates from CMIP3 models (note 1).

In the set of graphs below there are three points I want to highlight – and you probably need to click on the graph to enlarge it.

First, in graph B, “Zetac” is the model used by Knutson et al 2007, whereas GFDL is the weather prediction model getting better results in this paper – you can see that observations and the GFDL model are pretty close in the maximum wind speed distribution. Second, the climate change results in graph E show an overall reduction in the frequency of tropical storms, but an increase in the frequency of storms with the highest wind speeds – a common theme in papers from this genre. Third, in graph F, the results (from the weather prediction model) fed by different GCMs for future climate show quite different distributions. For example, the UKMO model produces a distribution of future wind speeds that is lower than current values.

From Bender et al 2010

Figure 4 – Click to enlarge

In this graph (S3 from the Supplementary data) we see graphs of the difference between future projected climatologies and current climatologies for three relevant parameters for each of the four different models shown in graph F in the figure above:

From Bender et al 2010

Figure 5 – Click to enlarge

This illustrates that different projected future climatologies, which all show increased SST in the Atlantic region, generate quite different hurricane intensities. The paper suggests that the reduction in wind shear in the UKMO model produces a lower frequency of higher intensity hurricanes.

Conclusion

This article illustrates that feeding higher resolution models with current data can generate realistic cyclone statistics in some respects, but less so in others. As we increase the model resolution we can get even better results – but this depends on inputting the correct climate data. As we look towards 2100 the questions are: How realistic is the future climate data? How does that affect projections of hurricane frequencies and intensities?

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

Impacts XI – Rainfall 1

Impacts – XII – Rainfall 2

Impacts – XIII – Rainfall 3

References

Hurricanes and climate: the US CLIVAR working group on hurricanes, American Meteorological Society, Kevin Walsh et al (2015) – free paper

Whither Hurricane Activity? Gabriel A Vecchi, Kyle L Swanson & Brian J. Soden, Science (2008) – free paper

Simulation of the Recent Multidecadal Increase of Atlantic Hurricane Activity Using an 18-km-Grid Regional Model, Thomas Knutson et al, American Meteorological Society, (2007) – free paper

Modeled Impact of Anthropogenic Warming on the Frequency of Intense Atlantic Hurricanes, Morris A Bender et al, Science (2010) – free paper

Notes

Note 1: The scenario is A1B, which is similar to RCP6 – that is, an approximate doubling of CO2 by the end of the century. The simulations came from the CMIP3 suite of model results.

In XII – Rainfall 2 we saw the results of many models on rainfall as GHGs increase. They project wetter tropics, drier subtropics and wetter higher latitude regions. We also saw an expectation that rainfall will increase globally, with something like 2-3% per ºC of warming.

Here is a (too small) graph from Allen & Ingram (2002) showing the model response of rainfall under temperature changes from GHG increases. The dashed line marked “C-C” is the famous (in climate physics) Clausius–Clapeyron relation which, at current temperatures, shows a 7% change in water vapor per ºC of warming. The red triangles are the precipitation changes from model simulations showing about half of that.

From Allen & Ingram (2002)

Figure 1

Here is another graph from the same paper showing global mean temperature change (top) and rainfall over land (bottom):

From Allen & Ingram (2002)

Figure 2

The temperature has increased over the last 50 years, and models and observations show that the precipitation has.. oh, it’s not changed. What is going on?

First, the authors explain some important background:

The distribution of moisture in the troposphere (the part of the atmosphere that is strongly coupled to the surface) is complex, but there is one clear and strong control: moisture condenses out of supersaturated air.

This constraint broadly accounts for the humidity of tropospheric air parcels above the boundary layer, because almost all such parcels will have reached saturation at some point in their recent history. Physically, therefore, it has long seemed plausible that the distribution of relative humidity would remain roughly constant under climate change, in which case the Clausius-Clapeyron relation implies that specific humidity would increase roughly exponentially with temperature.

This reasoning is strongest at higher latitudes where air is usually closer to saturation, and where relative humidity is indeed roughly constant through the substantial temperature changes of the seasonal cycle. For lower latitudes it has been argued that the real-world response might be different. But relative humidity seems to change little at low latitudes under a global warming scenario, even in models of very high vertical resolution, suggesting this may be a robust ’emergent constraint’ on which models have already converged.

They continue:

If tropospheric moisture loading is controlled by the constraints of (approximately) unchanged relative humidity and the Clausius-Clapeyron relation, should we expect a corresponding exponential increase in global precipitation and the overall intensity of the hydrological cycle as global temperatures rise?

This is certainly not what is observed in models.

To clarify, the point in the last sentence is that models do show an increase in precipitation, but not at the same rate as the expected increase in specific humidity (see note 1 for new readers).

They describe their figure 2 (our figure 1 above) and explain:

The explanation for these model results is that changes in the overall intensity of the hydrological cycle are controlled not by the availability of moisture, but by the availability of energy: specifically, the ability of the troposphere to radiate away latent heat released by precipitation.

At the simplest level, the energy budgets of the surface and troposphere can be summed up as a net radiative heating of the surface (from solar radiation, partly offset by radiative cooling) and a net radiative cooling of the troposphere to the surface and to space (R) being balanced by an upward latent heat flux (LP, where L is the latent heat of evaporation and P is global-mean precipitation): evaporation cools the surface and precipitation heats the troposphere.

[Emphasis added].

Basics Digression

Picture the atmosphere over a long period of time (like a decade), and for the whole globe. If it hasn’t heated up or cooled down we know that the energy in must equal the energy out (or if it has changed only marginally then energy in is almost equal to energy out). This is the first law of thermodynamics – energy is conserved.

What energy comes into the atmosphere?

  1. Solar radiation is partly absorbed by the atmosphere (most is transmitted through and heats the surface of the earth)
  2. Radiation emitted from the earth’s surface (we’ll call this terrestrial radiation) is mostly absorbed by the atmosphere (some is transmitted straight through to space)
  3. Warm air is convected up from the surface
  4. Heat stored in evaporated water vapor (latent heat) is convected up from the surface and the water vapor condenses out, releasing heat into the atmosphere when this happens

How does the atmosphere lose energy?

  1. It radiates downwards to the surface
  2. It radiates out to space

..end of digression

Changing Energy Budget

In a warmer world, if we have more evaporation we have more latent heat transfer from the surface into the troposphere. But the atmosphere has to be able to radiate this heat away. If it can’t, then the atmosphere becomes warmer, and this reduces convection. So with a warmer surface we may have a plentiful potential supply of latent heat (via water vapor) but the atmosphere needs a mechanism to radiate away this heat.

Allen & Ingram put forward a simple conceptual equation:

ΔRc + ΔRT = LΔP

where the change in radiative cooling, ΔR, is split into two components: ΔRc, which is independent of the change in atmospheric temperature, and ΔRT, which depends only on the temperature

L = latent heat of vaporization of water (a constant), ΔP = change in rainfall (= change in evaporation, since globally evaporation is balanced by rainfall)

LΔP is about 1W/m² per 1% increase in global precipitation.

Now, if we double CO2, then before any temperature changes we decrease the outgoing longwave radiation through the tropopause (the top of the troposphere) by about 3-4W/m² and we increase atmospheric radiation to the surface by about 1W/m².

So doubling CO2, ΔRc = -2 to -3W/m²; prior to a temperature change ΔRT = 0; and so ΔP reduces.

The authors comment that increasing CO2 before any temperature change takes place reduces the intensity of the hydrological cycle and this effect was seen in early modeling experiments using prescribed sea surface temperatures.

Now, of course, the idea of doubling CO2 without any temperature change is just a thought experiment. But it’s an important thought experiment because it lets us isolate different factors.

The authors then consider their factor ΔRT:

The enhanced radiative cooling due to tropospheric warming, ΔRT, is approximately proportional to ΔT: tropospheric temperatures scale with the surface temperature change and warmer air radiates more energy, so ΔRT = kΔT, with k=3W/(m²K)..

All this is saying is that as the surface warms, the atmosphere warms at about the same rate, and a warmer atmosphere emits more radiation. This is why the model results in our figure 2 above show no trend in rainfall over the last 50 years, and also match the observations – the constraint on rainfall is the changing radiative balance of the troposphere, not the supply of moisture.
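
Putting the round numbers above together gives roughly the 2% per ºC figure mentioned in the previous article, and also suggests why the 20th century trend is so muted (the arithmetic is mine, using the values quoted in the text):

```latex
% Combining the round numbers from the text (arithmetic is mine)
L\,\Delta P = \Delta R_c + \Delta R_T \approx \Delta R_c + k\,\Delta T
% with L\Delta P \approx 1\ \mathrm{W\,m^{-2}} per 1\% of precipitation,
% \Delta R_c \approx -2.5\ \mathrm{W\,m^{-2}} for doubled CO2, and k \approx 3\ \mathrm{W\,m^{-2}\,K^{-1}}:
\Delta P(\%) \approx 3\,\Delta T - 2.5
% e.g. \Delta T = 3\ \mathrm{K} at equilibrium gives \Delta P \approx 6.5\%, i.e. roughly 2\%/K,
% well below the ~7\%/K Clausius-Clapeyron increase in water vapor. And while CO2 is
% still rising, the negative \Delta R_c term offsets much of the k\,\Delta T term,
% which is one way to read the near-zero 20th century trend in figure 2.
```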

And so they point out:

Thus, although there is clearly usable information in fig. 3 [our figure 2], it would be physically unjustified to estimate ΔP/ΔT directly from 20th century observations and assume that the same quantity will apply in the future, when the balance between climate drivers will be very different.

There is a lot of other interesting commentary in their paper, although the paper itself is now quite dated (and unfortunately behind a paywall). In essence they discuss the difficulties of modeling precipitation changes, especially for a given region, and are looking for “emergent constraints” from more fundamental physics that might help constrain forecasts.

A forecasting system that rules out some currently conceivable futures as unlikely could be far more useful for long-range planning than a small number of ultra-high-resolution forecasts that simply rule in some (very detailed) futures as possibilities.

This is a very important point when considering impacts.

Conclusion

Increasing the surface temperature by 1ºC is expected to increase the humidity over the ocean by about 7%. This is simply the basic physics of saturation. However, climate models predict an increase in mean rainfall of maybe 2-3% per ºC. The fundamental reason is that the movement of latent heat from the surface to the atmosphere has to be radiated away by the atmosphere, and so the constraint is the ability of the atmosphere to do this. And so the limiting factor in increasing rainfall is not the humidity increase, it is the radiative cooling of the atmosphere.

We also see that despite 50 years of warming, mean rainfall hasn’t changed. Models also predict this. This is believed to be a transient state, for reasons explained in the article.

References

Constraints on future changes in climate and the hydrologic cycle, MR Allen & WJ Ingram, Nature (2002)  – freely available [thanks, Robert]

Notes

Note 1: Relative humidity is measured as a percentage. If the relative humidity = 100% it means the air is saturated with water vapor – it can’t hold any more water vapor. If the relative humidity = 0% it means the air is completely dry. As temperature increases, the ability of air to hold water vapor increases non-linearly.

For example, at 0ºC, 1kg of air can carry around 4g of water vapor, at 10ºC that has doubled to 8g, and at 20ºC it has doubled again to 15g (I’m using approximate values).

So now imagine saturated air over the ocean at 20ºC rising up and therefore cooling (it is cooler higher up in the atmosphere). By the time the air parcel has cooled down to 0ºC (this might be anything from 2km to 5km altitude) it is still saturated but is only carrying 4g of water vapor, having condensed out 11g into water droplets.
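
The numbers in this note follow from a standard approximation for the saturation vapor pressure (the Tetens/Bolton form below); a minimal sketch, where the 1000 hPa surface pressure is my assumed round number:

```python
# Saturation water vapor content vs temperature, using the Bolton (1980) form of the
# Tetens approximation. The 1000 hPa air pressure is just an assumed round number.
import math

def saturation_vapor_pressure_hpa(t_celsius: float) -> float:
    """Saturation vapor pressure over liquid water (hPa)."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

def saturation_mixing_ratio_g_per_kg(t_celsius: float, pressure_hpa: float = 1000.0) -> float:
    """Grams of water vapor per kilogram of dry air at saturation."""
    e_s = saturation_vapor_pressure_hpa(t_celsius)
    return 1000.0 * 0.622 * e_s / (pressure_hpa - e_s)

if __name__ == "__main__":
    for t in (0.0, 10.0, 20.0):
        print(f"{t:4.0f} C: ~{saturation_mixing_ratio_g_per_kg(t):4.1f} g/kg at saturation")
    # Fractional increase per degree near 15 C -- the Clausius-Clapeyron ~6-7%/K figure:
    rate = saturation_vapor_pressure_hpa(16.0) / saturation_vapor_pressure_hpa(15.0) - 1.0
    print(f"Increase per degree near 15 C: ~{100 * rate:.1f}%")
```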