Archive for the ‘Basic Science’ Category

In #1 and #2 we looked at trends in frequency and intensity of Tropical Cyclones (TCs) and found that the IPCC 6th Assessment Report (AR6) contained good news. Not in the executive summary, and not particularly clearly even in the body of the report. But still, it is good news.

This article looks at the speed of TCs – how fast they move overall, not how fast their winds swirl around the center. This matters because a TC that moves more slowly as it crosses the coast drops more rain in one place, and therefore causes more flooding.

This was going to be a short article, but as long time readers of this blog will know, brevity was never my strong point.

Here’s the plain English summary of the report:

Translation speed of TCs has reduced over the last 70 years, leading to more flooding as TCs hit the coast.

This is bad news. The actual text, from p. 1587, is in the Notes at the end of this article.

James Kossin’s 2018 paper forms the main basis of this section of the report. Two papers are noted as questioning his conclusion. Kossin replied in 2019, confirming his original conclusion. The report essentially agrees with Kossin.

James Kossin is also one of the lead authors of this chapter 11 on Extreme Weather.

The main focus of this series of articles is the conclusions of the IPCC 6th Assessment Report, but it seems the question is still open, so read on for more analysis.

I’m moving to Substack. It’s a great publishing platform. See the rest of this article (for free) at Science of Doom on Substack.

Read Full Post »


Recent reports have shown that California knew about the threat of climate change decades ago.

No one could have missed the testimony of James Hansen in 1988 and many excellent papers were published prior to that time (and, of course, subsequently). Californian policymakers cannot claim ignorance.

I’m not a resident of California, but I often visit this great state. Seeing this new petition, I’m hoping that everyone concerned about the climate of California will get on board to denounce the past and current state governments and, especially, to ensure that current residents get to sue these politicians.

They knew, and yet they kept burning fossil fuel. History will judge them harshly, but in the meantime, the people should ensure these politicians feel the pain.

[Update: small note for readers, see comment]


I was re-reading Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen and Bjorn Stevens from 2015 (because I referenced it in a recent comment) and then looked up other recent papers citing it. One interesting review paper is by Stevens et al from 2016. I recognized his name from many other papers and it looks like Bjorn Stevens has been publishing papers since the early 1990s, with almost 200 papers in peer-reviewed journals, mostly on this and related topics. Likewise, Sherwood and Bony (two of the coauthors) are very familiar names from this field.

Many regular readers (and I’m sure new readers of this blog) will understand much more than I do about current controversies in climate sensitivity. The question in brief (of course there are many subtleties) – how much will the earth warm if we double CO2? It’s a very important question. As the authors explain at the start:

Nearly 40 years have passed since the U.S. National Academies issued the “Charney Report.” This landmark assessment popularized the concept of the “equilibrium climate sensitivity” (ECS), the increase of Earth’s globally and annually averaged near surface temperature that would follow a sustained doubling of atmospheric carbon dioxide relative to its preindustrial value. Through the application of physical reasoning applied to the analysis of output from a handful of relatively simple models of the climate system, Jule G. Charney and his co-authors estimated a range of 1.5–4.5 K for the ECS [Charney et al., 1979].

Charney is an eminent name you will know, along with Lorenz, if you read about the people who broke ground in numerical weather modeling. The authors explain a little about the definition of ECS:

ECS is an idealized but central measure of climate change, which gives specificity to the more general idea of Earth’s radiative response to warming. This specificity makes ECS something that is easy to grasp, if not to realize. For instance, the high heat capacity and vast carbon stores of the deep ocean mean that a new climate equilibrium would only be fully attained a few millennia after an applied forcing [Held et al., 2010; Winton et al., 2010; Li et al., 2012]; and uncertainties in the carbon cycle make it difficult to know what level of emissions is compatible with a doubling of the atmospheric CO2 concentration in the first place.

Concepts such as the “transient climate response” or the “transient climate response to cumulative carbon emissions” have been introduced to account for these effects and may be a better index of the warming that will occur within a century or two [Allen and Frame, 2007; Knutti and Hegerl, 2008; Collins et al., 2013; MacDougall, 2016].

But the ECS is strongly related and conceptually simpler, so it endures as the central measure of Earth’s susceptibility to forcing [Flato et al., 2013].

And about the implications of narrowing the range of ECS:

The socioeconomic value of better understanding the ECS is well documented. If the ECS were well below 1.5 K, climate change would be a less serious problem. The stakes are much higher for the upper bound. If the ECS were above 4.5 K, immediate and severe reductions of greenhouse gas emissions would be imperative to avoid dangerous climate changes within a few human generations.

From a mitigation point of view, the difference between an ECS of 1.5 K and 4.5 K corresponds to about a factor of two in the allowable CO2 emissions for a given temperature target [Stocker et al., 2013] and it explains why the value of learning more about the ECS has been appraised so highly [Cooke et al., 2013; Neubersch et al., 2014].

The ECS also gains importance because it conditions many other impacts of greenhouse gases, such as regional temperature and rainfall [Bony et al., 2013; Tebaldi and Arblaster, 2014], and even extremes [Seneviratne et al., 2016], knowledge of which is required for developing effective adaptation strategies. Being an important and simple measure of climate change, the ECS is something that climate science should and must be able to better understand and quantify more precisely.
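The connection between the feedback parameter and the quoted 1.5–4.5 K range can be sketched with the simplest zero-dimensional energy balance. This is only an illustration, not how the Charney range was derived: the forcing value F_2x and the three feedback parameters below are standard textbook-style numbers chosen to reproduce the endpoints of the range.

```python
# Zero-dimensional energy-balance sketch with illustrative numbers: at
# equilibrium the forcing from doubled CO2 (F_2x) is balanced by the
# radiative response lambda * dT, so ECS = F_2x / lambda.
F_2x = 3.7  # W/m^2, commonly quoted forcing for doubled CO2

# Feedback parameters chosen to reproduce the Charney range (assumed values)
for lam in (2.47, 1.23, 0.82):  # W/m^2/K
    print(f"lambda = {lam:.2f} W/m^2/K  ->  ECS = {F_2x / lam:.1f} K")
```

The uncertainty in ECS is, in this picture, uncertainty in the net feedback parameter: a factor of three in lambda spans the whole 1.5–4.5 K range.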

One of the questions they raise is at the heart of my own question: is climate sensitivity a constant that we can measure – a value with some durable meaning – or does it depend on the specifics of the climate state at the time? For example, there are attempts to measure it via the climate response during an El Niño: we watch the climate warm and measure how the top of atmosphere radiation balance changes. Or we estimate the difference in ocean temperature between the end of the last ice age and today and deduce climate sensitivity from that. Perhaps I have a mental picture of non-linear systems that is preventing me from seeing the obvious. However, the picture I have in my head is that the dependence of the top of atmosphere radiation balance on temperature is not a constant.

Here is their commentary. They use the term “pattern effect” for my mental model described above:

Hence, a generalization of the concept of climate sensitivity to different eras may need to account for differences that arise from the different base state of the climate system, increasingly so for large perturbations.

Even for small perturbations, there is mounting evidence that the outward radiation may be sensitive to the geographic pattern of surface temperature changes. Senior and Mitchell [2000] argued that if warming is greater over land, or at high latitudes, different feedbacks may occur than for the case where the same amount of warming is instead concentrated over tropical oceans.

These effects appear to be present in a range of models [Armour et al., 2013; Andrews et al., 2015]. Physically they can be understood because clouds—and their impact on radiation—are sensitive to changes in the atmospheric circulation, which responds to geographic differences in warming [Kang et al., 2013], or simply because an evolving pattern of surface warming weights local responses differently at different times [Armour et al., 2013].

Hence different patterns of warming, occurring on different timescales, may be associated with stronger or weaker radiative responses. This introduces an additional state dependence, one that is not encapsulated by the global mean temperature. We call this a “pattern effect.” Pattern effects are thought to be important for interpreting changes over the instrumental period [Gregory and Andrews, 2016], and may contribute to the state dependence of generalized measures of Earth’s climate sensitivity as inferred from the geological record.
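The “pattern effect” idea in the quote above can be sketched with a toy calculation. Everything here is hypothetical – two regions, made-up local feedback parameters, made-up warming patterns – but it shows how the same globally averaged framework yields a different effective feedback depending on where the warming occurs.

```python
# Toy "pattern effect" sketch with hypothetical numbers: the global radiative
# response is a warming-pattern-weighted average of local feedback parameters.
# Two regions: tropical ocean (strong local feedback) and high latitudes
# (weak local feedback), with area fractions 0.6 and 0.4.
regions = [
    # (area fraction, local feedback lambda in W/m^2/K - assumed values)
    (0.6, 2.0),   # tropical ocean
    (0.4, 0.8),   # high latitudes
]

def effective_lambda(warming):
    """Global feedback implied by a given pattern of regional warming (K)."""
    mean_dT = sum(a * dT for (a, _), dT in zip(regions, warming))
    mean_dR = sum(a * lam * dT for (a, lam), dT in zip(regions, warming))
    return mean_dR / mean_dT

print(effective_lambda([1.0, 1.0]))  # uniform warming -> 1.52
print(effective_lambda([1.5, 0.5]))  # tropics-heavy warming -> ~1.78
print(effective_lambda([0.5, 1.5]))  # high-latitude-heavy warming -> ~1.20
```

The effective global feedback is not a single number – it moves with the geographic pattern of warming, which is the state dependence the authors describe.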

Some of my thoughts are that the insoluble questions on this specific topic are also tied into the question about the climate being chaotic vs just weather being chaotic – see for example, Natural Variability and Chaos – Four – The Thirty Year Myth. In that article we look at the convention of defining climate as the average of 30 years of weather and why that “eliminates” chaos, or doesn’t. Non-linear systems have lots of intractable problems – more on that topic in the whole series Natural Variability and Chaos. It’s good to see it being mentioned in this paper.

Read the whole paper – it reviews the conditions necessary for very low climate sensitivity and for very high climate sensitivity, with the idea being that if one necessary condition can be ruled out then the very low and/or very high climate sensitivity can be ruled out. The paper also includes some excellent references for further insights.

From Stevens et al 2016

Click to enlarge

Happy Thanksgiving to our US readers.


Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen & Bjorn Stevens, Nature Geoscience (2015) – paywall paper

Prospects for narrowing bounds on Earth’s equilibrium climate sensitivity, Bjorn Stevens, Steven C Sherwood, Sandrine Bony & Mark J Webb, Earth’s Future (2016) – free paper


Over in another article, a commenter claims:

..Catastrophic predictions depend on accelerated forcings due to water vapour feedback. This water vapour feedback is simply written into climate models as parameters. It is not derived from any kind simulation of first principles in the General Circulation Model runs (GCMs)..

[Emphasis added]

I’ve seen this article of faith a lot. If you frequent fantasy climate blogs, where people learn first principles and modeling basics from comments by other equally well-educated commenters, this is the kind of contribution you will be able to make after years of study.

None of us knowed nothing, so we all sat around and teached each other.

Actually, how the atmospheric section of climate models works is pretty simple in principle. The atmosphere is divided up into a set of blocks (a grid), with each block having dimensions something like 200 km x 200 km horizontally and 500 m high. The values vary a lot and depend on the resolution of the model; this is just to give you an idea.

Then each block has an E-W wind; a N-S wind; a vertical velocity; temperature; pressure; the concentrations of CO2, water vapor, methane; cloud fractions, and so on.

Then the model “steps forward in time” and uses equations to calculate the new values of each item.

The earth is spinning, and conservation of momentum, heat and mass is applied to each block. The principles of radiation through each block in each direction apply via parameterizations (note 1).

Specifically on water vapor – the change in mass of water vapor in each block is calculated from the amount of water evaporated, the amount of water vapor condensed, and the amount of rainfall taking water out of the block. And from the movement of air via E-W, N-S and up/down winds. The final amount of water vapor in each time step affects the radiation emitted upwards and downwards.
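The water vapor budget just described can be sketched in a few lines. This is a minimal illustration of the bookkeeping, with hypothetical names and numbers – it is not code from any actual GCM.

```python
# Minimal sketch of the per-block water vapor budget described above: the
# change in vapor mass over one time step is sources (evaporation, advection
# in) minus sinks (condensation, rainfall removal, advection out).
def step_water_vapor(q, evaporation, condensation, rainfall_removal,
                     advective_flux_in, advective_flux_out, dt):
    """Advance water vapor mass q (kg) in one grid block by dt seconds.

    All rate arguments are in kg/s; names and structure are illustrative,
    chosen to mirror the prose, not taken from any climate model.
    """
    dq_dt = (evaporation - condensation - rainfall_removal
             + advective_flux_in - advective_flux_out)
    return q + dq_dt * dt

# One 30-minute step for a single block with made-up numbers:
q_new = step_water_vapor(q=1.0e9, evaporation=500.0, condensation=300.0,
                         rainfall_removal=100.0, advective_flux_in=250.0,
                         advective_flux_out=200.0, dt=1800.0)
print(q_new)  # 1.0e9 + 150 kg/s * 1800 s = 1000270000.0 kg
```

The point is that water vapor is a prognostic variable computed from physical source and sink terms each time step – not a feedback “written in as a parameter”.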

It’s more involved and you can read whole books on the subject.

I doubt that anyone who has troubled themselves to read even one paper on climate modeling basics could reach the conclusion so firmly believed in fantasy climate blogs and repeated above. If you never need to provide evidence for your claims..

For this blog we do like to see proof of claims, so please take a read of Description of the NCAR Community Atmosphere Model (CAM 4.0) and just show where this water vapor feedback is written in. Or pick another climate model used by a climate modeling group.

This is the kind of exciting stuff you find in the 200+ pages of an atmospheric model description:

From CAM4 Technical Note

You can also find details of the shortwave and longwave radiation parameterization schemes and how they apply to water vapor.

Here is a quote from The Global Circulation of the Atmosphere (ref below):

Essentially all GCMs yield water vapor feedback consistent with that which would result from holding relative humidity approximately fixed as climate changes. This is an emergent property of the simulated climate system; fixed relative humidity is not in any way built into the model physics, and the models offer ample means by which relative humidity could change.

From Water Vapor Feedback and Global Warming, a paper well-worth anyone reading for who wants to understand this key question in climate:

Water vapor is the dominant greenhouse gas, the most important gaseous source of infrared opacity in the atmosphere. As the concentrations of other greenhouse gases, particularly carbon dioxide, increase because of human activity, it is centrally important to predict how the water vapor distribution will be affected. To the extent that water vapor concentrations increase in a warmer world, the climatic effects of the other greenhouse gases will be amplified. Models of the Earth’s climate indicate that this is an important positive feedback that increases the sensitivity of surface temperatures to carbon dioxide by nearly a factor of two when considered in isolation from other feedbacks, and possibly by as much as a factor of three or more when interactions with other feedbacks are considered. Critics of this consensus have attempted to provide reasons why modeling results are overestimating the strength of this feedback..

Remember, just a few years of study at fantasy climate blogs can save an hour or more of reading papers on atmospheric physics.


Description of the NCAR Community Atmosphere Model (CAM 4) – free paper

On the Relative Humidity of the Atmosphere, Chapter 6 of The Global Circulation of the Atmosphere, edited by Tapio Schneider & Adam Sobel, Princeton University Press (2007)

Water Vapor Feedback and Global Warming, Held & Soden, Annu. Rev. Energy Environ (2000) – free paper

Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006)


Note 1: The very accurate calculation of radiation transfer is done via line by line calculations but they are computationally very expensive and so a simpler approximation is used in GCMs. Of course there are many studies comparing parameterizations vs line by line calculations. One example is Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006).


This article will be a placeholder to filter out a select group of people: the many people who arrive and confidently explain that atmospheric physics is fatally flawed (without the benefit of having read a textbook). They don’t think they are confused; in their minds they are helpfully explaining why the standard theory is wrong. There have been a lot of such people.

Almost none of them ever provides an equation. If on rare occasions they do provide a random equation, they never explain what is wrong with the 65-year-old equation of radiative transfer (explained by Nobel prize winner Subrahmanyan Chandrasekhar, see note 1), which is derived from fundamental physics. Nor do they offer an explanation for why observation matches the standard theory. For example (and I have lots of others), here is a graph produced nearly 50 years ago (referenced almost 30 years ago) of the observed spectrum at the top of atmosphere vs the calculated spectrum from the standard theory.

Why is it so accurate?

From Atmospheric Radiation, Goody (1989)


If it was me, and I thought the theory was wrong, I would read a textbook and try and explain why the textbook was wrong. But I’m old school and generally expect physics textbooks to be correct, short of some major revolution. Conventionally, when you “prove” textbook theory wrong you are expected to explain why everyone got it wrong before.

There is a simple reason why our many confident visitors never do that. They don’t know anything about the basic theory. Entertaining as that is, and I’ll be the first to admit that it has been highly entertaining, it’s time to prune comments from overconfident and confused visitors.

I am not trying to push away people with questions. If you have questions please ask. This article is just intended to limit the tsunami of comments from visitors with their overconfident non-textbook understanding of physics – that have often dominated comment threads. 

So here are my two questions for the many visitors with huge confidence in their physics knowledge. Dodging isn’t an option. You can say “not correct” and explain your alternative formulation with evidence, but you can’t dodge.

Answer these two questions:

1. Is the equation of radiative transfer correct or not?

Iλ(0) = Iλ(τm) e^(−τm) + ∫₀^τm Bλ(T) e^(−τ) dτ     [16]

The intensity at the top of atmosphere equals: the surface radiation attenuated by the transmittance of the atmosphere, plus the sum of all the contributions of atmospheric radiation – each contribution attenuated by the transmittance from that location to the top of atmosphere.

Of course (and I’m sure I don’t even need to spell it out) we need to integrate across all wavelengths, λ, to get the flux value.

For the derivation see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. If you don’t agree it is correct then explain why.
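Equation [16] is easy to check numerically for the simplest special case: an isothermal atmosphere, where Bλ is constant with optical depth and the integral has a closed form. The numbers below are arbitrary – this is a sketch of the structure of the equation, not a real radiative transfer calculation.

```python
import math

# Numerical check of equation [16] for an isothermal atmosphere, where
# B_lambda is constant and the integral has the closed form
#   I(0) = I(tau_m) * exp(-tau_m) + B * (1 - exp(-tau_m)).
# All values are arbitrary illustrative numbers.
I_m = 100.0    # radiance at the surface, optical depth tau_m (arb. units)
B = 80.0       # Planck function of the (uniform) atmospheric temperature
tau_m = 2.0    # total optical depth of the atmosphere

# Midpoint-rule integration of  integral_0^tau_m  B * exp(-tau) dtau
n = 100_000
dtau = tau_m / n
integral = sum(B * math.exp(-(i + 0.5) * dtau) * dtau for i in range(n))

I_top_numeric = I_m * math.exp(-tau_m) + integral
I_top_analytic = I_m * math.exp(-tau_m) + B * (1.0 - math.exp(-tau_m))
print(I_top_numeric, I_top_analytic)  # the two agree closely
```

The two terms behave exactly as the plain-English description says: the surface term is attenuated by e^(−τm), and each slab of atmospheric emission is attenuated by the transmittance from its location to the top.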

[Note that other articles explain the basics. For example – The “Greenhouse” Effect Explained in Simple Terms, which has many links to other in depth articles].

If you don’t understand the equation you don’t understand the core of radiative atmospheric physics.


2. Is this graphic with explanation from an undergraduate heat transfer textbook (Fundamentals of Heat and Mass Transfer, 6th edition, Incropera and DeWitt 2007) correct or not?

From "Fundamentals of Heat and Mass Transfer, 6th edition", Incropera and DeWitt (2007)


You can see that radiation is emitted from a hot surface and absorbed by a cool surface – and that radiation is also emitted from a cool surface and absorbed by a hot surface. More examples of this principle, including equations, can be found in Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics, which has scanned pages from six undergraduate heat transfer textbooks (seven if we include the one added in comments after entertaining commenter Bryan suggested the first six were “cherry-picked” and offered his preferred textbook, which had exactly the same equations).
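The principle is easy to put into numbers for the idealized textbook case of two large parallel black surfaces. The temperatures below are arbitrary; the point is that both surfaces emit, both absorb, and the net flux is from hot to cold.

```python
# Two large parallel black surfaces: both emit, both absorb, and the NET
# flux is from hot to cold. Temperatures are arbitrary illustrative values.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

T_hot, T_cold = 320.0, 280.0  # K

emit_hot = SIGMA * T_hot**4    # radiation leaving the hot surface
emit_cold = SIGMA * T_cold**4  # radiation leaving the cold surface
net_hot_to_cold = emit_hot - emit_cold

print(f"hot surface emits  {emit_hot:.0f} W/m^2")
print(f"cold surface emits {emit_cold:.0f} W/m^2 (absorbed by hot surface)")
print(f"net flux           {net_hot_to_cold:.0f} W/m^2, hot -> cold")
```

The cold surface’s emission is absorbed by the hot surface, yet the second law is untroubled: the net energy flow is still from hot to cold.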


What I will be doing for the subset of new visitors with their amazing and confident insights is to send them to this article and ask for answers. In the past I have never been able to get a single member of this group to commit. The reason why is obvious.

But – if you don’t answer, your comments may never be published.

Once again, this is not designed to stop regular visitors asking questions. Most people interested in climate don’t understand equations, calculus, radiative physics or thermodynamics – and that is totally fine.

Call it censorship if it makes you sleep better at night.


Note 1 – I believe the theory is older than Chandrasekhar but I don’t have older references. It derives from basic emission (Planck), absorption (Beer Lambert) and the first law of thermodynamics. Chandrasekhar published this in his 1952 book Radiative Transfer (the link is the 1960 reprint). This isn’t the “argument from authority”, I’m just pointing out that the theory has been long established. Punters are welcome to try and prove it wrong, just no one ever does.


[I started writing this some time ago and got side-tracked, initially because aerosol interaction in clouds and rainfall is quite fascinating with lots of current research and then because there are many papers on higher resolution simulations of convection that also looked interesting.. so decided to post it less than complete because it will be some time before I can put together a more complete article..]

In Part Four of this series we looked at the paper by Mauritsen et al (2012). Isaac Held has a very interesting post on his blog (people interested in understanding climate science will benefit from reading it – he has been in the field writing papers for 40 years). He highlighted this paper: Cloud tuning in a coupled climate model: Impact on 20th century warming, Jean-Christophe Golaz, Larry W. Horowitz, and Hiram Levy II, GRL (2013).

Their paper has many similarities to Mauritsen et al (2012). Here are some of their comments:

Climate models incorporate a number of adjustable parameters in their cloud formulations. They arise from uncertainties in cloud processes. These parameters are tuned to achieve a desired radiation balance and to best reproduce the observed climate. A given radiation balance can be achieved by multiple combinations of parameters. We investigate the impact of cloud tuning in the CMIP5 GFDL CM3 coupled climate model by constructing two alternate configurations.

They achieve the desired radiation balance using different, but plausible, combinations of parameters. The present-day climate is nearly indistinguishable among all configurations.

However, the magnitude of the aerosol indirect effects differs by as much as 1.2 W/m², resulting in significantly different temperature evolution over the 20th century..


..Uncertainties that arise from interactions between aerosols and clouds have received considerable attention due to their potential to offset a portion of the warming from greenhouse gases. These interactions are usually categorized into first indirect effect (“cloud albedo effect”; Twomey [1974]) and second indirect effect (“cloud lifetime effect”; Albrecht [1989]).

Modeling studies have shown large spreads in the magnitudes of these effects [e.g., Quaas et al., 2009]. CM3 [Donner et al., 2011] is the first Geophysical Fluid Dynamics Laboratory (GFDL) coupled climate model to represent indirect effects.

As in other models, the representation in CM3 is fraught with uncertainties. In particular, adjustable cloud parameters used for the purpose of tuning the model radiation can also have a significant impact on aerosol effects [Golaz et al., 2011]. We extend this previous study by specifically investigating the impact that cloud tuning choices in CM3 have on the simulated 20th century warming.

What did they do?

They adjusted the “autoconversion threshold radius”, which controls when water droplets turn into rain.

Autoconversion converts cloud water to rain. The conversion occurs once the mean cloud droplet radius exceeds rthresh. Larger rthresh delays the formation of rain and increases cloudiness.

The default in CM3 was 8.2 μm. They selected alternate values from other GFDL models: 6.0 μm (CM3w) and 10.6 μm (CM3c). Of course, they then have to adjust other parameters to achieve radiation balance – the “erosion time” (a lateral mixing effect reducing water in clouds), which they note is poorly constrained (that is, we don’t have independent knowledge of the correct value for this parameter), and the “velocity variance”, which affects how quickly water vapor condenses onto aerosols.
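A minimal sketch of a threshold-style autoconversion rule of the kind described – the structure, timescale, and droplet radius below are hypothetical and are not CM3’s actual scheme; they just show why a larger threshold radius delays rain and increases cloudiness.

```python
# Illustrative threshold-based autoconversion rule (hypothetical numbers,
# not CM3's code): cloud water converts to rain only once the mean droplet
# radius exceeds r_thresh, so a larger r_thresh keeps water in clouds longer.
def autoconversion_rate(cloud_water, mean_droplet_radius_um, r_thresh_um,
                        timescale_s=1000.0):
    """Return the cloud-water-to-rain conversion rate (kg/kg/s)."""
    if mean_droplet_radius_um <= r_thresh_um:
        return 0.0
    return cloud_water / timescale_s

q_cloud = 5.0e-4  # kg/kg of cloud water (assumed)
for r_thresh in (6.0, 8.2, 10.6):  # the three values discussed, in um
    rate = autoconversion_rate(q_cloud, mean_droplet_radius_um=9.0,
                               r_thresh_um=r_thresh)
    print(f"r_thresh = {r_thresh:4.1f} um -> rain formation rate {rate:.1e}")
```

For the same 9 μm cloud, the 6.0 μm and 8.2 μm thresholds produce rain while the 10.6 μm threshold produces none – which is why the choice interacts so strongly with cloudiness and hence with the aerosol indirect effects.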

Here is the time evolution in the three models (and also observations):


From Golaz et al 2013


Figure 1 – Click to enlarge

In terms of present day climatology, the three variants are very close, but in terms of 20th century warming two variants are very different and only CM3w is close to observations.

Here is their conclusion, well worth studying. I reproduce it in full:

CM3w predicts the most realistic 20th century warming. However, this is achieved with a small and less desirable threshold radius of 6.0 μm for the onset of precipitation.

Conversely, CM3c uses a more desirable value of 10.6 μm but produces a very unrealistic 20th century temperature evolution. This might indicate the presence of compensating model errors. Recent advances in the use of satellite observations to evaluate warm rain processes [Suzuki et al., 2011; Wang et al., 2012] might help understand the nature of these compensating errors.

CM3 was not explicitly tuned to match the 20th century temperature record.

However, our findings indicate that uncertainties in cloud processes permit a large range of solutions for the predicted warming. We do not believe this to be a peculiarity of the CM3 model.

Indeed, numerous previous studies have documented a strong sensitivity of the radiative forcing from aerosol indirect effects to details of warm rain cloud processes [e.g., Rotstayn, 2000; Menon et al., 2002; Posselt and Lohmann, 2009; Wang et al., 2012].

Furthermore, in order to predict a realistic evolution of the 20th century, models must balance radiative forcing and climate sensitivity, resulting in a well-documented inverse correlation between forcing and sensitivity [Schwartz et al., 2007; Kiehl, 2007; Andrews et al., 2012].

This inverse correlation is consistent with an intercomparison-driven model selection process in which “climate models’ ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication” [Mauritsen et al., 2012].

Very interesting paper, and freely available. Kiehl’s paper, referenced in the conclusion, is also well worth reading. In his paper he shows that the models with the highest sensitivity to GHGs have the largest negative forcing from 20th century aerosols, while the models with the lowest sensitivity to GHGs have the smallest negative forcing from 20th century aerosols. Therefore, both ends of the range can reproduce 20th century temperature anomalies, while suggesting very different 21st century temperature evolution.
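The inverse correlation Kiehl documented can be illustrated with a crude equilibrium estimate, ΔT ≈ (F_ghg + F_aerosol)/λ, ignoring ocean heat uptake. All numbers below are made up for illustration – they are not taken from Kiehl’s paper or from any actual model – but they show how two very different models can match the same observed warming.

```python
# Crude sketch of the forcing/sensitivity trade-off: dT ~ (F_ghg + F_aer)/lam,
# ignoring ocean heat uptake. All numbers are illustrative assumptions.
F_ghg = 2.6    # W/m^2, assumed 20th century greenhouse gas forcing
models = [
    # (label, feedback lambda in W/m^2/K, aerosol forcing in W/m^2)
    ("high sensitivity, strong aerosol cooling", 1.1, -1.72),
    ("low sensitivity, weak aerosol cooling", 2.2, -0.84),
]

results = {}
for label, lam, F_aer in models:
    results[label] = (F_ghg + F_aer) / lam
    print(f"{label}: dT = {results[label]:.2f} K")

# Both configurations give ~0.8 K of 20th century warming, yet they imply
# very different warming for a greenhouse-gas-only 21st century.
```

This is the essence of the “show-stopper” problem quoted above: matching the 20th century does not, by itself, pin down either the forcing or the sensitivity.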

A paper on higher resolution models, Seifert et al 2015, did some model experiments using “large eddy simulations” (LES), which are much higher resolution than GCMs. The best resolution GCMs today typically have a grid size around 100 km x 100 km. Their LES model had a grid size of 25 m x 25 m, with 2048 x 2048 x 200 grid points, spanning a simulated volume of 51.2 km x 51.2 km x 5 km, and ran for a simulated 60-hour time span.

They had this to say about the aerosol indirect effect:

It has also been conjectured that changes in CCN might influence cloud macrostructure. Most prominently, Albrecht [1989] argued that processes which reduce the average size of cloud droplets would retard and reduce the rain formation in clouds, resulting in longer-lived clouds. Longer living clouds would increase cloud cover and reflect even more sunlight, further cooling the atmosphere and surface. This type of aerosol-cloud interaction is often called a lifetime effect. Like the Twomey effect, the idea that smaller particles will form rain less readily is based on sound physical principles.

Given this strong foundation, it is somewhat surprising that empirical evidence for aerosol impacts on cloud macrophysics is so thin.

Twenty-five years after Albrecht’s paper, the observational evidence for a lifetime effect in the marine cloud regimes for which it was postulated is limited and contradictory. Boucher et al. [2013] who assess the current level of our understanding, identify only one study, by Yuan et al. [2011], which provides observational evidence consistent with a lifetime effect. In that study a natural experiment, outgassing of SO2 by the Kilauea volcano is used to study the effect of a changing aerosol environment on cloud macrophysical processes.

But even in this case, the interpretation of the results are not without ambiguity, as precipitation affects both the outgassing aerosol precursors and their lifetime. Observational studies of ship tracks provide another inadvertent experiment within which one could hope to identify lifetime effects [Conover, 1969; Durkee et al., 2000; Hobbs et al., 2000], but in many cases the opposite response of clouds to aerosol perturbations is observed: some observations [Christensen and Stephens, 2011; Chen et al., 2012] are consistent with more efficient mixing of smaller cloud drops leading to more rapid cloud desiccation and shorter lifetimes.

Given the lack of observational evidence for a robust lifetime effect, it seems fair to question the viability of the underlying hypothesis.

In their paper they show a graphic of what their model produced, it’s not the main dish but interesting for the realism:

From Seifert et al 2015


Figure 2 – Click to expand

It is an involved paper, but here is one of the conclusions, relevant for the topic at hand:

Our simulations suggest that parameterizations of cloud-aerosol effects in large-scale models are almost certain to overstate the impact of aerosols on cloud albedo and rain rate. The process understanding on which the parameterizations are based is applicable to isolated clouds in constant environments, but necessarily neglects interactions between clouds, precipitation, and circulations that, as we have shown, tend to buffer much of the impact of aerosol change.

For people new to parameterizations, a couple of articles that might be useful:


Climate models necessarily have some massive oversimplifications, as we can see from the large eddy simulation with its 25 m x 25 m grid cells, while GCMs have 100 km x 100 km at best. Even LES models have simplifications – to get to direct numerical simulation (DNS) of the equations for turbulent flow we would need a scale closer to a few millimeters rather than meters.

The over-simplifications in GCMs require “parameters” which are not actually intrinsic material properties, but are more an average of some part of a climate process over a large scale. (Note that even if we had the resolution for the actual fundamental physics we wouldn’t necessarily know the material parameters needed, especially in the case of aerosols, which are heterogeneous in time and space.)

As the climate changes will these “parameters” remain constant, or change as the climate changes?


Cloud tuning in a coupled climate model: Impact on 20th century warming, Jean-Christophe Golaz, Larry W. Horowitz, and Hiram Levy II, GRL (2013) – free paper

Twentieth century climate model response and climate sensitivity, Jeffrey T. Kiehl, GRL (2007) – free paper

Large-eddy simulation of the transient and near-equilibrium behavior of precipitating shallow convection, Axel Seifert et al, Journal of Advances in Modeling Earth Systems (2015) – free paper


In Part VI we looked at past and projected sea level rise. There is significant uncertainty in future sea level rise, even assuming we know the future global temperature change. The uncertainty results from “how much ice will melt?”

We can be reasonably sure of sea level rise from thermal expansion (so long as we know the temperature). By contrast, we don’t have much confidence in the contribution from melting ice (on land). This is because ice sheet dynamics (glaciers, Greenland & Antarctic ice sheet) are non-linear and not well understood.

Here’s something surprising. Suppose you live in Virginia near the ocean. And suppose all of the Greenland ice sheet melted in a few years (not possible, but just suppose). How much would sea level change in Virginia? Hint: the entire Greenland ice sheet converted into global mean sea level is about 7m.

Zero change in Virginia.

Here are charts of relative sea level change across the globe for Greenland & West Antarctica, based on a 1mm/yr contribution from each location – click to expand:

From Tamisiea 2011

Figure 1 – Click to Expand

We see that the sea level actually drops close to Greenland, stays constant around mid-northern latitudes in the Atlantic and rises in other locations. The reason is simple – the Greenland ice sheet is a local gravitational attractor and is “pulling the ocean up” towards Greenland. Once it is removed, the local sea level drops.


If we knew for sure that the global mean temperature in 2100 would be +2ºC or +3ºC compared to today we would have a good idea in each case of the sea level rise from thermal expansion. But not much certainty on any rise from melting ice sheets.

Let’s consider someone thinking about the US for planning purposes. If the Greenland ice sheet contributes lots of melting ice, the sea level on the US Atlantic coast won’t be affected at all and the increase on the Pacific coast will be significantly less than the overall sea level rise. In this case, the big uncertainty in the magnitude of sea level rise is not much of a factor for most of the US.

If the West Antarctic ice sheet contributes lots of melting ice, the sea level on the east and west coasts of the US will be affected by more than the global mean sea level rise.

For example, imagine the sea level was expected to rise 0.3m from thermal expansion by 2100. But there is a fear that ice melting will cause 0 – 0.5m global rise. A US policymaker really needs to know which ice sheet will melt. The “we expect at most an additional 0.5m from melting ice” tells her that she might have – in total – a maximum sea level rise of 0.3m on the east coast and a little more than 0.3m on the west coast if Greenland melts; but she instead might have – in total – a maximum of almost 1m on each coast if West Antarctica melts.
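The arithmetic in this example can be sketched with hypothetical “fingerprint” scaling factors – round numbers chosen to match the example above, not values taken from Tamisiea & Mitrovica (2011):

```python
# Hypothetical "fingerprint" factors: how much of each ice sheet's global
# mean contribution shows up on each US coast. Round numbers for
# illustration only.
FINGERPRINT = {
    "greenland":       {"us_east": 0.0, "us_west": 0.3},
    "west_antarctica": {"us_east": 1.2, "us_west": 1.2},
}

def total_rise(thermal_m, ice_global_m, source, coast):
    """Regional sea level rise = thermal expansion (roughly uniform)
    plus the ice contribution scaled by the source's fingerprint."""
    return thermal_m + ice_global_m * FINGERPRINT[source][coast]

# 0.3m of thermal expansion plus a feared 0.5m (global mean) of melt:
for source in FINGERPRINT:
    for coast in ("us_east", "us_west"):
        print(f"{source:15s} {coast}: {total_rise(0.3, 0.5, source, coast):.2f} m")
```

With these factors, a Greenland melt leaves the east coast at 0.3m, while a West Antarctic melt pushes both coasts toward 0.9m – which is the policymaker’s problem in a nutshell.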

The source of the melting ice just magnifies the uncertainty for policy and economics.

If this 10th century legend was still with us maybe it would be different (we only have his tweets):

Donaeld the Unready

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1


The moving boundaries of sea level change: Understanding the origins of geographic variability, ME Tamisiea & JX Mitrovica, Oceanography (2011)

Read Full Post »

In Part II we looked at various scenarios for emissions. One important determinant is how the world population will change through this century, and I thought that topic worth digging into a little.

Here is Lutz, Sanderson & Scherbov, Nature (2001):

The median value of our projections reaches a peak around 2070 at 9.0 billion people and then slowly decreases. In 2100, the median value of our projections is 8.4 billion people with the 80 per cent prediction interval bounded by 5.6 and 12.1 billion.

From Lutz 2001

Figure 1 – Click to enlarge

This paper is behind a paywall but Lutz references the 1996 book he edited for assumptions, which is freely available (link below).

In it the authors comment, p. 22:

Some users clearly want population figures for the year 2100 and beyond. Should the demographer disappoint such expectations and leave it to others with less expertise to produce them? The answer given in this study is no. But as discussed below, we make a clear distinction between what we call projections up to 2030-2050 and everything beyond that time, which we term extensions for illustrative purposes.

[Emphasis added]

And then p.32:

Sanderson (1995) shows that it is impossible to produce “objective” confidence ranges for future population projections. Subjective confidence intervals are the best we can ever attain because assumptions are always involved.

Here are some more recent views.

Gerland et al 2014 – Gerland is from the Population Division of the UN:

The United Nations recently released population projections based on data until 2012 and a Bayesian probabilistic methodology. Analysis of these data reveals that, contrary to previous literature, world population is unlikely to stop growing this century. There is an 80% probability that world population, now 7.2 billion, will increase to between 9.6 and 12.3 billion in 2100. This uncertainty is much smaller than the range from the traditional UN high and low variants. Much of the increase is expected to happen in Africa, in part due to higher fertility and a recent slowdown in the pace of fertility decline..

..Among the most robust empirical findings in the literature on fertility transitions are that higher contraceptive use and higher female education are associated with faster fertility decline. These suggest that the projected rapid population growth could be moderated by greater investments in family planning programs to satisfy the unmet need for contraception, and in girls’ education. It should be noted, however, that the UN projections are based on an implicit assumption of a continuation of existing policies, but an intensification of current investments would be required for faster changes to occur

Wolfgang Lutz & Samir KC (2010). Lutz seems popular in this field:

The total size of the world population is likely to increase from its current 7 billion to 8–10 billion by 2050. This uncertainty is because of unknown future fertility and mortality trends in different parts of the world. But the young age structure of the population and the fact that in much of Africa and Western Asia, fertility is still very high makes an increase by at least one more billion almost certain. Virtually, all the increase will happen in the developing world. For the second half of the century, population stabilization and the onset of a decline are likely..

Although the paper doesn’t focus on 2100, only up to 2050, it does include a graph of probabilistic expectations to 2100 and has some interesting commentary on how different forecasting groups deal with uncertainty, how women’s education plays a huge role in reducing fertility, and many other stories, for example:

The Demographic and Health Survey for Ethiopia, for instance, shows that women without any formal education have on average six children, whereas those with secondary education have only two (see http://www.measuredhs.com). Significant differentials can be found in most populations of all cultural traditions. Only in a few modern societies does the strongly negative association give way to a U-shaped pattern in which the most educated women have a somewhat higher fertility than those with intermediate education. But globally, the education differentials are so pervasive that education may well be called the single most important observable source of population heterogeneity after age and sex (Lutz et al. 1999). There are good reasons to assume that during the course of a demographic transition the fact that higher education leads to lower fertility is a true causal mechanism, where education facilitates better access to and information about family planning and most importantly leads to a change in attitude in which ‘quantity’ of children is replaced by ‘quality’, i.e. couples want to have fewer children with better life chances..

Lee 2011, another very interesting paper, makes this comment:

The U.N. projections assume that fertility will slowly converge toward replacement level (2.1 births per woman) by 2100

Lutz’s book had a similar hint that many demographers assume that somehow societies en masse will converge towards a steady state. Lee also comments that probability treatments built from “low”, “medium” and “high” scenarios are not very realistic because the methods used implicitly assume a perfect correlation between different countries, which isn’t seen in practice. Lutz makes similar points. Here is Lee:

Special issues arise in constructing consistent probability intervals for individual countries, for regions, and for the world, because account must be taken of the positive or negative correlations among the country forecast errors within regions and across regions. Since error correlation is typically positive but less than 1.0, country errors tend to cancel under aggregation, and the proportional error bounds for the world population are far narrower than for individual countries. The NRC study (20) found that the average absolute country error was 21% while the average global error was only 3%. When the High, Medium and Low scenario approach is used, there is no cancellation of error under aggregation, so the probability coverage at different levels of aggregation cannot be handled consistently. An ongoing research collaboration between the U.N. Population Division and a team led by Raftery is developing new and very promising statistical methods for handling uncertainty in future forecasts.
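Lee’s point about error cancellation under aggregation can be illustrated with a small Monte Carlo sketch. The correlation, error size and country count here are made-up round numbers for illustration, not the NRC study’s figures:

```python
import random
import statistics

def simulate_errors(n_countries=50, rho=0.3, sigma=0.21, trials=10000, seed=42):
    """Country forecast errors share a common component (correlation rho,
    positive but less than 1). Returns (avg |country error|,
    avg |aggregated world error|), both as proportional errors."""
    random.seed(seed)
    country_abs, world_abs = [], []
    for _ in range(trials):
        common = random.gauss(0, 1)
        errs = [sigma * (rho ** 0.5 * common + (1 - rho) ** 0.5 * random.gauss(0, 1))
                for _ in range(n_countries)]
        country_abs.append(statistics.mean(abs(e) for e in errs))
        world_abs.append(abs(statistics.mean(errs)))  # equal-weight world total
    return statistics.mean(country_abs), statistics.mean(world_abs)

country_err, world_err = simulate_errors()
print(f"avg country error {country_err:.1%}, avg world error {world_err:.1%}")
```

Because the correlation is well below 1, the country errors partially cancel and the proportional error of the world total comes out much smaller than the typical country error, which is exactly the pattern Lee describes.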

And then on UN projections:

One might quibble with this or that assumption, but the UN projections have had an impressive record of success in the past, particularly at the global level, and I expect that to continue in the future. To a remarkable degree, the UN has sought out expert advice and experimented with cutting edge forecasting techniques, while maintaining consistency in projections. But in forecasting, errors are inevitable, and sound decision making requires that the likelihood of errors be taken into account. In this respect, there is much room for improvement in the UN projections and indeed in all projections by government statistical offices.

This comment looks like an oblique academic gentle slapping around (disguised as praise), but it’s hard to tell.


I don’t have a conclusion. I thought it would be interesting to find some demographic experts and show their views on future population trends. The future is always hard to predict – although in demography the next 20 years are usually easy to predict, short of global plagues and famines.

It does seem hard to have much idea about the population in 2100, but the difference between a population of 8bn and 11bn will have a large impact on CO2 emissions (without very significant CO2 mitigation policies).


The end of world population growth, Wolfgang Lutz, Warren Sanderson & Sergei Scherbov, Nature (2001) – paywall paper

The future population of the world – what can we assume?, edited Wolfgang Lutz, Earthscan Publications (1996) – freely available book

World Population Stabilization Unlikely This Century, Patrick Gerland et al, Science (2014) – free paper

Dimensions of global population projections: what do we know about future population trends and structures? Wolfgang Lutz & Samir KC, Phil. Trans. R. Soc. B (2010)

The Outlook for Population Growth, Ronald Lee, Science (2011) – free paper

Read Full Post »

In Planck, Stefan-Boltzmann, Kirchhoff and LTE one of our commenters asked a question about emissivity. The first part of that article is worth reading as a primer in the basics for this article. I don’t want to repeat all the basics, except to say that if a body is a “black body” it emits radiation according to a simple formula. This is the maximum that any body can emit. In practice, a body will emit less.

The ratio of the actual emission to the black body emission is the emissivity. It has a value between 0 and 1.

The question that this article tries to help readers understand is the origin and use of the emissivity term in the Stefan-Boltzmann equation:

E = ε’σT⁴

where E = total flux, ε’ = “effective emissivity” (a value between 0 and 1), σ = the Stefan-Boltzmann constant (5.67×10⁻⁸ W/m²K⁴) and T = temperature in Kelvin (i.e., absolute temperature).

The term ε’ in the Stefan-Boltzmann equation is not really a constant. But it is often treated as a constant in articles related to climate. Is this valid? Not valid? Why is it not a constant?

There is a material property called emissivity which is a function of wavelength but not of temperature. For example, if we found that the emissivity of a body at 10.15 μm was 0.55 then this would be the same regardless of whether the body was in Antarctica (around 233K = -40ºC), the tropics (around 303K = 30ºC) or at the temperature of the sun’s surface (5800K). How do we know this? From experimental work over more than a century.

Hopefully some graphs will illuminate the difference between emissivity the material property (that doesn’t change), and the “effective emissivity” (that does change) we find in the Stefan-Boltzmann equation. In each graph you can see:

  • (top) the blackbody curve
  • (middle) the emissivity of this fictional material as a function of wavelength
  • (bottom) the actual emitted radiation due to the emissivity – and a calculation of the “effective emissivity”.

The calculation of “effective emissivity” = total actual emitted radiation / total blackbody emitted radiation (note 1).
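For readers who want to reproduce this kind of calculation, here is a sketch that integrates the Planck curve against a made-up step emissivity (0.3 below 10 μm, 0.9 above – a stand-in for the fictional material, not the exact curves in the graphs below):

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann

def planck_exitance(lam, T):
    """Blackbody spectral exitance (W/m^2 per m of wavelength)."""
    x = H * C / (lam * KB * T)
    if x > 700:                 # avoid math.exp overflow far into the tail
        return 0.0
    return 2 * math.pi * H * C**2 / lam**5 / (math.exp(x) - 1)

def emissivity(lam):
    """Made-up step emissivity: wavelength-dependent, temperature-independent,
    like a real material property."""
    return 0.3 if lam < 10e-6 else 0.9

def effective_emissivity(T, lam_min=0.01e-6, lam_max=50e-6, n=5000):
    """Trapezoid integration of actual vs blackbody flux over 0.01-50 um.
    Returns (effective emissivity, blackbody flux in the band)."""
    dlam = (lam_max - lam_min) / n
    actual = blackbody = 0.0
    for i in range(n + 1):
        lam = lam_min + i * dlam
        w = 0.5 if i in (0, n) else 1.0
        m = planck_exitance(lam, T)
        blackbody += w * m * dlam
        actual += w * emissivity(lam) * m * dlam
    return actual / blackbody, blackbody

for T in (288, 400, 1000):
    eff, bb = effective_emissivity(T)
    print(f"T = {T:4d} K: blackbody flux {bb:.0f} W/m^2, effective emissivity {eff:.2f}")
```

As the temperature rises, the blackbody curve shifts toward shorter wavelengths where this material emits poorly, so the “effective emissivity” falls even though the material property itself never changed. (At 288K the band-limited blackbody flux also reproduces the ~376 W/m² mentioned in note 1.)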

At 288K – effective emissivity = 0.49:


At 300K – effective emissivity = 0.49:


At 400K – effective emissivity = 0.44:


At 500K – effective emissivity = 0.35:


At 5800K – the temperature of the solar surface – effective emissivity = 0.00 (note the scale on the bottom graph is completely different from the scale of the top graph):


Hopefully this helps people trying to understand what emissivity really relates to in the Stefan Boltzmann equation. It is not a constant except in rare cases. But you can see that treating it as a constant over a range of temperatures is a reasonable approximation (depending on the accuracy you want), but change the temperature “too much” and your “effective emissivity” can change massively.

As always with approximations and useful formulas, you need to understand the basis behind them to know when you can and can’t use them.

Any questions, just ask in the comments.

Note 1 – The flux was calculated for the wavelength range of 0.01 μm to 50 μm. If you use the Stefan-Boltzmann equation for 288K you will get E = 5.67×10⁻⁸ x 288⁴ = 390 W/m². The reason my graph has 376 W/m² is because I don’t include the wavelength range from 50 μm to infinity. It doesn’t change the practical results you see.

Read Full Post »

Long before the printing of money, golden eggs were the only currency.

In a deep cave, goose Day-Lewis, the last of the gold-laying geese, was still at work.

Day-Lewis lived in the country known affectionately as Utopia. Every day, Day-Lewis laid 10 perfect golden eggs, and was loved and revered for her service. Luckily, everyone had read Aesop’s fables, and no one tried to kill Day-Lewis to get all those extra eggs out. Still Utopia did pay a few armed guards to keep watch for the illiterates, just in case.

Utopia wasn’t into storing wealth because it wanted to run some important social programs to improve the education and health of the country. Thankfully they didn’t run a deficit and issue bonds so we don’t need to get into any political arguments about libertarianism.

This article is about golden eggs.

Utopia employed the service of bunny Fred to take the golden eggs to the nearby country of Capitalism in return for services of education and health. Every day, bunny Fred took 10 eggs out of the country. Every day, goose Day-Lewis produced 10 eggs. It was a perfect balance. The law of conservation of golden eggs was intact.

Thankfully, history does not record any comment on the value of the services received for these eggs, or on the benefit to society of those services, so we can focus on the eggs story.

Due to external circumstances outside of Utopia’s control, on January 1st, the year of Our Goose 150, a new international boundary was created between Utopia and Capitalism. History does not record the complex machinations behind the reasons for this new border control.

However, as always with government organizations, things never go according to plan. On the first day, January 1st, there were paperwork issues.

Bunny Fred showed up with 10 golden eggs, and, what he thought was the correct paperwork. Nothing got through. Luckily, unlike some government organizations with wafer-thin protections for citizens’ rights, they didn’t practice asset forfeiture for “possible criminal activity we might dream up and until you can prove you earned this honestly we are going to take it and never give it back”. Instead they told Fred to come back tomorrow.

On January 2nd, Bunny Fred had another run at the problem and brought another 10 eggs. The export paperwork for the supply of education and health only allowed for 10 golden eggs to be exported to Capitalism, so border control sent on the 10 eggs from Jan 1st and insisted Bunny Fred take 10 eggs back to Utopia.

On January 3rd, Bunny Fred, desperate to remedy the deficit of services in Utopia, took 20 eggs – 10 from Day-Lewis and 10 he had brought back from border control the day before.

Insistent on following their new ad hoc processes, border control could only send on 10 eggs to Capitalism. As they had no approved paperwork for “storing” extra eggs, they insisted that Fred take back the excess eggs.

Every day, the same result:

  • Day-Lewis produced 10 eggs, Bunny Fred took 20 eggs to border control
  • Border control sent 10 eggs to Capitalism, Bunny Fred brought 10 eggs back
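The daily ledger above can be checked with a few lines of bookkeeping – a sketch of the story’s arithmetic, nothing more. From day 3 onwards Fred carries 20 eggs a day while Day-Lewis still lays only 10, and no egg is ever created or destroyed:

```python
def egg_ledger(days=6):
    """Daily bookkeeping for the border-control story. Returns a list of
    (day, eggs Fred carried, eggs exported, eggs handed back to Fred)."""
    returned = 0   # eggs border control handed back yesterday evening
    held = 0       # eggs border control keeps overnight (day 1 only)
    rows = []
    for day in range(1, days + 1):
        carried = 10 + returned        # Day-Lewis's 10 plus yesterday's returns
        available = carried + held
        if day == 1:
            exported, held = 0, available   # paperwork failure: nothing through
        else:
            exported, held = 10, 0          # the permit covers exactly 10 eggs/day
        returned = available - exported - held
        rows.append((day, carried, exported, returned))
    return rows

for day, carried, exported, returned in egg_ledger():
    print(f"day {day}: carried {carried}, exported {exported}, returned {returned}")
```

Total eggs exported plus eggs still in circulation always equals total eggs laid – the law of conservation of golden eggs survives, even though 20 eggs a day cross Fred’s path.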

One day some people who thought they understood the law of conservation of golden eggs took a look at the current situation and declared:

Heretics! This is impossible. Day-Lewis, last of the gold-laying geese, only produces 10 eggs per day. How can Bunny Fred be taking 20 eggs to border control?

You can’t create golden eggs! The law of conservation of golden eggs has been violated.

You can’t get more than 100% efficiency. This is impossible.

And in other completely unrelated stories:

A Challenge for Bryan & 

Do Trenberth and Kiehl understand the First Law of Thermodynamics? & Part Two & Part Three – The Creation of Energy?

and recent comments in CO2 – An Insignificant Trace Gas? – Part One

Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics

The Three Body Problem

Read Full Post »
