In Ensemble Forecasting we had a look at the principles behind “ensembles of initial conditions” and “ensembles of parameters” in forecasting weather. Climate models are a little different from weather forecasting models but use the same physics and the same framework.

A lot of people, including me, have questions about “tuning” climate models. While looking for what the latest IPCC report (AR5) had to say about ensembles of climate models I found a reference to Tuning the climate of a global model by Mauritsen et al (2012). Unless you work in the field of climate modeling you don’t know the magic behind the scenes. This free paper (note 1) gives some important insights and is very readable:

The need to tune models became apparent in the early days of coupled climate modeling, when the top of the atmosphere (TOA) radiative imbalance was so large that models would quickly drift away from the observed state. Initially, a practice to input or extract heat and freshwater from the model, by applying flux-corrections, was invented to address this problem. As models gradually improved to a point when flux-corrections were no longer necessary, this practice is now less accepted in the climate modeling community.

Instead, the radiation balance is controlled primarily by tuning cloud-related parameters at most climate modeling centers while others adjust the ocean surface albedo or scale the natural aerosol climatology to achieve radiation balance. Tuning cloud parameters partly masks the deficiencies in the simulated climate, as there is considerable uncertainty in the representation of cloud processes. But just like adding flux-corrections, adjusting cloud parameters involves a process of error compensation, as it is well appreciated that climate models poorly represent clouds and convective processes. Tuning aims at balancing the Earth’s energy budget by adjusting a deficient representation of clouds, without necessarily aiming at improving the latter.

A basic requirement of a climate model is reproducing the temperature change from pre-industrial times (mid 1800s) until today. So the focus is on temperature change, or in common terminology, anomalies.

It was interesting to see that if we plot the “actual modeled temperatures” from 1850 to present the picture doesn’t look so good (the grey curves are models from the coupled model inter-comparison projects: CMIP3 and CMIP5):

From Mauritsen et al 2012


Figure 1

The authors state:

There is considerable coherence between the model realizations and the observations; models are generally able to reproduce the observed 20th century warming of about 0.7 K..

Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. Although the inter-model span is only one percent relative to absolute zero, that argument fails to be reassuring. Relative to the 20th century warming the span is a factor four larger, while it is about the same as our best estimate of the climate response to a doubling of CO2, and about half the difference between the last glacial maximum and present.

They point out that adjusting parameters might just be offsetting one error against another..

In addition to targeting a TOA radiation balance and a global mean temperature, model tuning might strive to address additional objectives, such as a good representation of the atmospheric circulation, tropical variability or sea-ice seasonality. But in all these cases it is usually to be expected that improved performance arises not because uncertain or non-observable parameters match their intrinsic value – although this would clearly be desirable – rather that compensation among model errors is occurring. This raises the question as to whether tuning a model influences model-behavior, and places the burden on the model developers to articulate their tuning goals, as including quantities in model evaluation that were targeted by tuning is of little value. Evaluating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models intrinsic qualities.

[Emphasis added]. And they give a bit more insight into the tuning process:

A few model properties can be tuned with a reasonable chain of understanding from model parameter to the impact on model representation, among them the global mean temperature. It is comprehendible that increasing the models low-level cloudiness, by for instance reducing the precipitation efficiency, will cause more reflection of the incoming sunlight, and thereby ultimately reduce the model’s surface temperature.

Likewise, we can slow down the Northern Hemisphere mid-latitude tropospheric jets by increasing orographic drag, and we can control the amount of sea ice by tinkering with the uncertain geometric factors of ice growth and melt. In a typical sequence, first we would try to correct Northern Hemisphere tropospheric wind and surface pressure biases by adjusting parameters related to the parameterized orographic gravity wave drag. Then, we tune the global mean temperature as described in Sections 2.1 and 2.3, and, after some time when the coupled model climate has come close to equilibrium, we will tune the Arctic sea ice volume (Section 2.4).

In many cases, however, we do not know how to tune a certain aspect of a model that we care about representing with fidelity, for example tropical variability, the Atlantic meridional overturning circulation strength, or sea surface temperature (SST) biases in specific regions. In these cases we would rather monitor these aspects and make decisions on the basis of a weak understanding of the relation between model formulation and model behavior.

Here we see how CMIP3 & 5 models “drift” – that is, over a long period of simulation time how the surface temperature varies with TOA flux imbalance (and also we see the cold bias of the models):

From Mauritsen et al 2012


Figure 2

If a model equilibrates at a positive radiation imbalance it indicates that it leaks energy, which appears to be the case in the majority of models, and if the equilibrium balance is negative it means that the model has artificial energy sources. We speculate that the fact that the bulk of models exhibit positive TOA radiation imbalances, and at the same time are cold-biased, is due to them having been tuned without account for energy leakage.

[Emphasis added].

From that graph they discuss the implied sensitivity to radiative forcing of each model (the slope of each model and how it compares with the blue and red “sensitivity” curves).

We get to see some of the parameters that are played around with (a-h in the figure):

From Mauritsen et al 2012


Figure 3

And how changing some of these parameters affects (over a short run) “headline” parameters like TOA imbalance and cloud cover:

From Mauritsen et al 2012


Figure 4 – Click to Enlarge

There’s also quite a bit in the paper about tuning the Arctic sea ice that will be of interest for Arctic sea ice enthusiasts.

In some of the final steps we get a great insight into how the whole machine goes through its final tune up:

..After these changes were introduced, the first parameter change was a reduction in two non-dimensional parameters controlling the strength of orographic wave drag from 0.7 to 0.5.

This greatly reduced the low zonal mean wind- and sea-level pressure biases in the Northern Hemisphere in atmosphere-only simulations, and further had a positive impact on the global to Arctic temperature gradient and made the distribution of Arctic sea-ice far more realistic when run in coupled mode.

In a second step the conversion rate of cloud water to rain in convective clouds was doubled from 1×10⁻⁴ s⁻¹ to 2×10⁻⁴ s⁻¹ in order to raise the OLR to be closer to the CERES satellite estimates.

At this point it was clear that the new coupled model was too warm compared to our target pre-industrial temperature. Different measures using the convection entrainment rates, convection overshooting fraction and the cloud homogeneity factors were tested to reduce the global mean temperature.

In the end, it was decided to use primarily an increased homogeneity factor for liquid clouds from 0.70 to 0.77 combined with a slight reduction of the convective overshooting fraction from 0.22 to 0.21, thereby making low-level clouds more reflective to reduce the surface temperature bias. Now the global mean temperature was sufficiently close to our target value and drift was very weak. At this point we decided to increase the Arctic sea ice volume from 18×10¹² m³ to 22×10¹² m³ by raising the cfreeze parameter from 1/2 to 2/3. ECHAM5/MPIOM had this parameter set to 4/5. These three final parameter settings were done while running the model in coupled mode.

Some of the paper’s results (not shown here) are some “parallel worlds” with different parameters. In essence, while working through the model development phase they took a lot of notes of what they did, what they changed, and at the end they went back and created some alternatives from some of their earlier choices. The parameter choices along with a set of resulting climate properties are shown in their table 10.

Some summary statements:

Parameter tuning is the last step in the climate model development cycle, and invariably involves making sequences of choices that influence the behavior of the model. Some of the behavioral changes are desirable, and even targeted, but others may be a side effect of the tuning. The choices we make naturally depend on our preconceptions, preferences and objectives. We choose to tune our model because the alternatives – to either drift away from the known climate state, or to introduce flux-corrections – are less attractive. Within the foreseeable future climate model tuning will continue to be necessary as the prospects of constraining the relevant unresolved processes with sufficient precision are not good.

Climate model tuning has developed well beyond just controlling global mean temperature drift. Today, we tune several aspects of the models, including the extratropical wind- and pressure fields, sea-ice volume and to some extent cloud-field properties. By doing so we clearly run the risk of building the models’ performance upon compensating errors, and the practice of tuning is partly masking these structural errors. As one continues to evaluate the models, sooner or later these compensating errors will become apparent, but the errors may prove tedious to rectify without jeopardizing other aspects of the model that have been adjusted to them.

Climate models’ ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure. Most other observational datasets sooner or later meet the same destiny, at least beyond the first time they are applied for model evaluation. That is not to say that climate models can be readily adapted to fit any dataset, but once aware of the data we will compare with model output and invariably make decisions in the model development on the basis of the results. Rather, our confidence in the results provided by climate models is gained through the development of a fundamental physical understanding of the basic processes that create climate change. More than a century ago it was first realized that increasing the atmospheric CO2 concentration leads to surface warming, and today the underlying physics and feedback mechanisms are reasonably understood (while quantitative uncertainty in climate sensitivity is still large). Coupled climate models are just one of the tools applied in gaining this understanding..

[Emphasis added].

..In this paper we have attempted to illustrate the tuning process, as it is being done currently at our institute. Our hope is to thereby help de-mystify the practice, and to demonstrate what can and cannot be achieved. The impacts of the alternative tunings presented were smaller than we thought they would be in advance of this study, which in many ways is reassuring. We must emphasize that our paper presents only a small glimpse at the actual development and evaluation involved in preparing a comprehensive coupled climate model – a process that continues to evolve as new datasets emerge, model parameterizations improve, additional computational resources become available, as our interests, perceptions and objectives shift, and as we learn more about our model and the climate system itself.

Note 1: The link to the paper gives the html version. From there you can click the “Get pdf” link and it seems to come up ok – no paywall. If not, try the link to the draft paper (but the formatting makes it not so readable).

I’ve had questions about the use of ensembles of climate models for a while. I was helped by working through a bunch of papers which explain the origin of ensemble forecasting. I still have my questions but maybe this post will help to create some perspective.

The Stochastic Sixties

Lorenz encapsulated the problem in the mid-1960s like this:

The proposed procedure chooses a finite ensemble of initial states, rather than the single observed initial state. Each state within the ensemble resembles the observed state closely enough so that the difference might be ascribed to errors or inadequacies in observation. A system of dynamic equations previously deemed to be suitable for forecasting is then applied to each member of the ensemble, leading to an ensemble of states at any future time. From an ensemble of future states, the probability of occurrence of any event, or such statistics as the ensemble mean and ensemble standard deviation of any quantity, may be evaluated.

Between the near future, when all states within an ensemble will look about alike, and the very distant future, when two states within an ensemble will show no more resemblance than two atmospheric states chosen at random, it is hoped that there will be an extended range when most of the states in an ensemble, while not constituting good pin-point forecasts, will possess certain important features in common. It is for this extended range that the procedure may prove useful.

[Emphasis added].

Epstein picked up some of these ideas in two papers in 1969. Here’s an extract from The Role of Initial Uncertainties in Prediction.

While it has long been recognized that the initial atmospheric conditions upon which meteorological forecasts are based are subject to considerable error, little if any explicit use of this fact has been made.

Operational forecasting consists of applying deterministic hydrodynamic equations to a single “best” initial condition and producing a single forecast value for each parameter..

..One of the questions which has been entirely ignored by the forecasters.. is whether or not one gets the “best” forecast by applying the deterministic equations to the “best” values of the initial conditions and relevant parameters..

..one cannot know a uniquely valid starting point for each forecast. There is instead an ensemble of possible starting points, but the identification of the one and only one of these which represents the “true” atmosphere is not possible.

In essence, the realization that small errors can grow in a non-linear system like weather and climate leads us to ask what the best method is of forecasting the future. In this paper Epstein takes a look at a few interesting simple problems to illustrate the ensemble approach.

Let’s take a look at one very simple example – the slowing of a body due to friction.

Rate of change of velocity (dv/dt) is proportional to the velocity, v. The constant of proportionality is k, which increases with more friction:

dv/dt = −kv, therefore v = v0·exp(−kt), where v0 = initial velocity

With a starting velocity of 10 m/s and k = 10⁻⁴ s⁻¹, how does velocity change with time?


Figure 1 – note the logarithm of time on the time axis, time runs from 10 secs – 100,000 secs

Probably no surprises there.

Now let’s consider in the real world that we don’t know the starting velocity precisely, and also we don’t know the coefficient of friction precisely. Instead, we might have some idea about the possible values, which could be expressed as a statistical spread. Epstein looks at the case for v0 with a normal distribution and k with a gamma distribution (for specific reasons not that important).

Mean of v0:   <v0> = 10 m/s

Standard deviation of v0:   σv = 1 m/s

Mean of k:   <k> = 10⁻⁴ s⁻¹

Standard deviation of k:   σk = 3×10⁻⁵ s⁻¹

The particular example he gave has equations that can be easily manipulated, allowing him to derive an analytical result. In 1969 that was necessary. Now we have computers and some lucky people have Matlab. My approach uses Matlab.

What I did was create a set of 1,000 random normally distributed values for v0, with the mean and standard deviation above. Likewise, a set of gamma distributed values for k.

Then we take each pair in turn and produce the velocity vs time curve. Then we look at the stats of the 1,000 curves.
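Epstein derived this analytically; the author used Matlab. A minimal NumPy sketch of the same Monte Carlo procedure (the parameter values are the ones given above; variable names are my own) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Sample initial velocities: normal, mean 10 m/s, std 1 m/s
v0 = rng.normal(10.0, 1.0, n)

# Sample friction coefficients: gamma, mean 1e-4 /s, std 3e-5 /s.
# For a gamma distribution mean = shape*scale, var = shape*scale^2.
mean_k, std_k = 1e-4, 3e-5
shape = (mean_k / std_k) ** 2
scale = std_k ** 2 / mean_k
k = rng.gamma(shape, scale, n)

t = np.logspace(1, 5, 500)          # 10 s to 100,000 s

# v(t) = v0 * exp(-k t) for each ensemble member: (n x len(t)) array
v = v0[:, None] * np.exp(-np.outer(k, t))

ensemble_mean = v.mean(axis=0)
ensemble_std = v.std(axis=0)
best_value = 10.0 * np.exp(-mean_k * t)   # one run with the mean parameters

# The spread grows to a peak at intermediate times before decaying to zero
t_peak = t[np.argmax(ensemble_std)]
```

Comparing ensemble_mean with best_value reproduces the key point below: the mean of the ensemble is not the same as the single run made with the mean parameters.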

Interestingly the standard deviation increases before fading away to zero. It’s easy to see why the standard deviation will end up at zero – because the final velocity is zero. So we could easily predict that. But it’s unlikely we would have predicted that the standard deviation of velocity would start to increase after 3,000 seconds and then peak at around 9,000 seconds.

Here is the graph of standard deviation of velocity vs time:


Figure 2

Now let’s look at the spread of results. The blue curves in the top graph (below) are each individual ensemble member and the green is the mean of the ensemble results. The red curve is the calculation of velocity against time using the mean of v0 and k:


Figure 3

The bottom curve zooms in on one portion (note the time axis is now linear), with the thin green lines being 1 standard deviation in each direction.

What is interesting is the significant difference between the mean of the ensemble members and the single value calculated using the mean parameters. This is quite usual with “non-linear” equations (aka the real world).

So, if you aren’t sure about your parameters or your initial conditions, taking the “best value” and running the simulation can well give you a completely different result from sampling the parameters and initial conditions and taking the mean of this “ensemble” of results..

Epstein concludes in his paper:

In general, the ensemble mean value of a variable will follow a different course than that of any single member of the ensemble. For this reason it is clearly not an optimum procedure to forecast the atmosphere by applying deterministic hydrodynamic equations to any single initial condition, no matter how well it fits the available, but nevertheless finite and fallible, set of observations.

Epstein’s other 1969 paper, Stochastic Dynamic Prediction, is more involved. He uses Lorenz’s “minimum set of atmospheric equations” and compares the results after 3 days from using the “best value” starting point vs an ensemble approach. The best value approach has significant problems compared with the ensemble approach:

Note that this does not mean the deterministic forecast is wrong, only that it is a poor forecast. It is possible that the deterministic solution would be verified in a given situation but the stochastic solutions would have better average verification scores.


One of the important points in the earlier work on numerical weather forecasting was the understanding that parameterizations also have uncertainty associated with them.

For readers who haven’t seen them, here’s an example of a parameterization, for latent heat flux, LH:

LH = L ρ C_DE U_r (q_s − q_a)

which says latent heat flux = latent heat of vaporization x density x “aerodynamic transfer coefficient” x wind speed at the reference level x (humidity at the surface – humidity in the air at the reference level)

The “aerodynamic transfer coefficient” C_DE is somewhere around 0.001 over ocean to 0.004 over moderately rough land.

The real formula for latent heat transfer is much simpler:

LH = the covariance of upwards moisture with vertical eddy velocity x density x latent heat of vaporization

These are values that vary even across very small areas and across many timescales. Across one “grid cell” of a numerical model we can’t use the “real formula” because we only get to put in one value for upwards eddy velocity and one value for upwards moisture flux and anyway we have no way of knowing the values “sub-grid”, i.e., at the scale we would need to know them to do an accurate calculation.

That’s why we need parameterizations. By the way, I don’t know whether this is a current formula in use in NWP, but it’s typical of what we find in standard textbooks.
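As a rough illustration of the bulk formula (the numbers below are typical textbook values, not taken from any particular NWP model):

```python
# Bulk aerodynamic formula for latent heat flux - an illustrative sketch;
# coefficient values are typical textbook numbers, not from any real model.
L_V = 2.5e6      # latent heat of vaporization, J/kg
RHO = 1.2        # air density near the surface, kg/m^3

def latent_heat_flux(c_de, u_r, q_s, q_a):
    """LH = L * rho * C_DE * U_r * (q_s - q_a), in W/m^2."""
    return L_V * RHO * c_de * u_r * (q_s - q_a)

# Over ocean: C_DE ~ 0.001, 5 m/s wind, surface-to-air specific humidity
# difference of 0.005 kg/kg
lh_ocean = latent_heat_flux(0.001, 5.0, 0.020, 0.015)

# Identical conditions with the "moderately rough land" coefficient give
# four times the flux - the parameter choice matters a lot
lh_land = latent_heat_flux(0.004, 5.0, 0.020, 0.015)
```

With these inputs the ocean case gives about 75 W/m², and simply switching the transfer coefficient quadruples it, which is the point: the sub-grid physics is carried almost entirely by an uncertain parameter.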

So right away it should be clear why we need to apply the same approach of ensembles to the parameters describing these sub-grid processes as well as to initial conditions. Are we sure that over Connecticut the parameter C_DE = 0.004, or should it be 0.0035? In fact, parameters like this are usually calculated from the average of a number of experiments. They conceal as much as they reveal. The correct value probably depends on other parameters. Insofar as it represents a real physical property it will vary depending on the time of day, seasons and other factors. It might even be, “on average”, wrong, because “on average” over the set of experiments was an imperfect sample. And “on average” over all climate conditions is a different sample.

The Numerical Naughties

The insights gained in the stochastic sixties weren’t so practically useful until some major computing power came along in the 90s and especially the 2000s.

Here is Palmer et al (2005):

Ensemble prediction provides a means of forecasting uncertainty in weather and climate prediction. The scientific basis for ensemble forecasting is that in a non-linear system, which the climate system surely is, the finite-time growth of initial error is dependent on the initial state. For example, Figure 2 shows the flow-dependent growth of initial uncertainty associated with three different initial states of the Lorenz (1963) model. Hence, in Figure 2a uncertainties grow significantly more slowly than average (where local Lyapunov exponents are negative), and in Figure 2c uncertainties grow significantly more rapidly than one would expect on average. Conversely, estimates of forecast uncertainty based on some sort of climatological-average error would be unduly pessimistic in Figure 2a and unduly optimistic in Figure 2c.


From Palmer et al 2005


Figure 4
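The flow-dependent error growth Palmer describes is easy to reproduce with the Lorenz (1963) equations themselves. A hedged sketch (simple forward-Euler integration; the initial state, perturbation size and ensemble size are my own choices, not Palmer's):

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz (1963) equations,
    # applied to a whole ensemble at once (state shape: [members, 3])
    x, y, z = state[..., 0], state[..., 1], state[..., 2]
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.stack([dx, dy, dz], axis=-1)

rng = np.random.default_rng(1)
base = np.array([1.0, 1.0, 20.0])      # the single "best estimate"

# Ensemble of 50 initial states: best estimate plus small
# observation-like perturbations
ens = base + rng.normal(0.0, 1e-3, size=(50, 3))

spread = []
for _ in range(1000):                  # integrate 10 model time units
    ens = lorenz63_step(ens)
    spread.append(ens.std(axis=0).mean())
```

The ensemble spread grows by orders of magnitude over the integration, and rerunning from different points on the attractor gives visibly different growth rates, which is exactly the flow dependence in Figure 4.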

The authors then provide an interesting example to demonstrate the practical use of ensemble forecasts. In the top left are the “deterministic predictions” using the “best estimate” of initial conditions. The rest of the charts 1-50 are the ensemble forecast members each calculated from different initial conditions. We can see that there was a low yet significant chance of a severe storm:


From Palmer et al 2005

Figure 5 – Click to enlarge

In fact a severe storm did occur so the probabilistic forecast was very useful, in that it provided information not available with the deterministic forecast.

This is a nice illustration of some benefits. It doesn’t tell us how well NWPs perform in general.

One measure is the forecast spread of certain variables as the forecast time increases. Generally single-model ensembles don’t do so well – they under-predict the spread of results at later time periods.

Here’s an example of the performance of a multi-model ensemble vs single-model ensembles on saying whether an event will occur or not. (Intuitively, the axes seem the wrong way round). The single model versions are over-confident – so when the forecast probability is 1.0 (certain) the reality is 0.7; when the forecast probability is 0.8, the reality is 0.6; and so on:

From Palmer et al 2005


Figure 6

We can see that, at least in this case, the multi-model did a pretty good job. However, similar work on forecasting precipitation events showed much less success.
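The over-confidence can be quantified with a reliability curve: bin the issued probabilities and compare each bin's mean forecast probability with the observed frequency of the event. A sketch with synthetic data (the "forecaster" below is deliberately constructed to be over-confident, mimicking the single-model behavior in Figure 6; nothing here comes from Palmer's data):

```python
import numpy as np

def reliability_curve(forecast_probs, outcomes, n_bins=10):
    """(mean forecast probability, observed frequency) per probability bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(forecast_probs, bins) - 1, 0, n_bins - 1)
    curve = []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            curve.append((forecast_probs[mask].mean(),
                          outcomes[mask].mean()))
    return curve

rng = np.random.default_rng(2)
p_forecast = rng.uniform(0.0, 1.0, 10000)
# Reality is "flatter" than the forecast: when the forecaster says 1.0
# the event only happens ~70% of the time
p_true = 0.2 + 0.5 * p_forecast
outcomes = rng.uniform(size=10000) < p_true

curve = reliability_curve(p_forecast, outcomes)
```

A perfectly reliable forecast would have observed frequency equal to forecast probability in every bin; here the top bins fall well below the diagonal, just like the single-model curves in the figure.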

In their paper, Palmer and his colleagues contrast multi-model vs multi-parameterization within one model. I am not clear what the difference is – whether it is a technicality or something fundamentally different in approach. The example above is multi-model. They do give some examples of multi-parameterizations (with a similar explanation to what I gave in the section above). Their paper is well worth a read, as is the paper by Lewis (see links below).


The concept of taking a “set of possible initial conditions” for weather forecasting makes a lot of sense. The concept of taking a “set of possible parameterizations” also makes sense although it might be less obvious at first sight.

In the first case we know that we don’t know the precise starting point because observations have errors and we lack a perfect observation system. In the second case we understand that a parameterization is some empirical formula which is clearly not “the truth”, but some approximation that is the best we have for the forecasting job at hand, and the “grid size” we are working to. So in both cases creating an ensemble around “the truth” has some clear theoretical basis.

Now what is also important for this theoretical basis is that we can test the results – at least with weather prediction (NWP). That’s because of the short time periods under consideration.

A statement from Palmer (1999) will resonate in the hearts of many readers:

A desirable if not necessary characteristic of any physical model is an ability to make falsifiable predictions

When we come to ensembles of climate models the theoretical case for multi-model ensembles is less clear (to me). There’s a discussion in IPCC AR5 that I have read. I will follow up the references and perhaps produce another article.


The Role of Initial Uncertainties in Prediction, Edward Epstein, Journal of Applied Meteorology (1969) – free paper

Stochastic Dynamic Prediction, Edward Epstein, Tellus (1969) – free paper

On the possible reasons for long-period fluctuations of the general circulation, EN Lorenz (1965), Proc. WMO-IUGG Symp. on Research and Development Aspects of Long-Range Forecasting, Boulder, CO, World Meteorological Organization, WMO Tech. – cited from Lewis (2005)

Roots of Ensemble Forecasting, John Lewis, Monthly Weather Review (2005) – free paper

Predicting uncertainty in forecasts of weather and climate, T.N. Palmer (1999), also published as ECMWF Technical Memorandum No. 294 – free paper

Representing Model Uncertainty in Weather and Climate Prediction, TN Palmer, GJ Shutts, R Hagedorn, FJ Doblas-Reyes, T Jung & M Leutbecher, Annual Review Earth Planetary Sciences (2005) – free paper

Over the last few years I’ve written lots of articles relating to the inappropriately-named “greenhouse” effect and covered some topics in great depth. I’ve also seen lots of comments and questions which has helped me understand common confusion and misunderstandings.

This article, with huge apologies to regular long-suffering readers, covers familiar ground in simple terms. It’s a reference article. I’ve referenced other articles and series as places to go to understand a particular topic in more detail.

One of the challenges of writing a short simple explanation is that it opens you up to the criticism of having omitted important technical details in order to keep it short. Remember this is the simple version..


First of all, the “greenhouse” effect is not AGW. In maths, physics, engineering and other hard sciences, one block is built upon another block. AGW is built upon the “greenhouse” effect. If AGW is wrong, it doesn’t invalidate the greenhouse effect. If the greenhouse effect is wrong, it does invalidate AGW.

The greenhouse effect is built on very basic physics, proven for 100 years or so, that is not in any dispute in scientific circles. Fantasy climate blogs of course do dispute it.

Second, common experience of linearity in everyday life causes many people to question how a tiny proportion of “radiatively-active” molecules can have such a profound effect. Common experience is not a useful guide. Non-linearity is the norm in real science. Since the Enlightenment at least, scientists have measured things rather than just assumed consequences based on everyday experience.

The Elements of the “Greenhouse” Effect

Atmospheric Absorption

1. The “radiatively-active” gases in the atmosphere:

  • water vapor
  • CO2
  • CH4
  • N2O
  • O3
  • and others

absorb radiation from the surface and transfer this energy via collision to the local atmosphere. Oxygen and nitrogen absorb such a tiny amount of terrestrial radiation that, even though they constitute an overwhelming proportion of the atmosphere, their radiative influence is insignificant (note 1).

How do we know all this? It’s basic spectroscopy, as detailed in exciting journals like the Journal of Quantitative Spectroscopy and Radiative Transfer over many decades. Shine radiation of a specific wavelength through a gas and measure the absorption. Simple stuff and irrefutable.

Atmospheric Emission

2. The “radiatively-active” gases in the atmosphere also emit radiation. Gases that absorb at a wavelength also emit at that wavelength. Gases that don’t absorb at that wavelength don’t emit at that wavelength. This is a consequence of Kirchhoff’s law.

The intensity of emission of radiation from a local portion of the atmosphere is set by the atmospheric emissivity and the temperature.


3. The transfer of heat within the troposphere is mostly by convection. The sun heats the surface of the earth through the (mostly) transparent atmosphere (note 2). The temperature profile, known as the “lapse rate”, is around 6 K/km in the tropics. The lapse rate is principally determined by non-radiative factors – as a parcel of air ascends it expands into the lower pressure and cools during that expansion (note 3).

The important point is that the atmosphere is cooler the higher you go (within the troposphere).

Energy Balance

4. The overall energy in the climate system is determined by the absorbed solar radiation and the emitted radiation from the climate system. The absorbed solar radiation – globally annually averaged – is approximately 240 W/m² (note 4). Unsurprisingly, the emitted radiation from the climate system is also (globally annually averaged) approximately 240 W/m². If these two fall out of balance, the climate system is warming or cooling.
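A quick sanity check (standard Stefan-Boltzmann physics, not from the article): 240 W/m² corresponds to an effective emission temperature of about 255 K, well below the roughly 288 K global mean surface temperature – consistent with most emission to space originating from the colder atmosphere rather than the surface.

```python
# Effective emission temperature implied by the ~240 W/m^2 balance,
# via the Stefan-Boltzmann law F = sigma * T^4
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4

def emission_temperature(flux):
    """Temperature (K) of a blackbody emitting the given flux (W/m^2)."""
    return (flux / SIGMA) ** 0.25

t_eff = emission_temperature(240.0)    # ~255 K, about -18 C
```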

Emission to Space

5. Most of the emission of radiation to space by the climate system is from the atmosphere, not from the surface of the earth. This is a key element of the “greenhouse” effect. The intensity of emission depends on the local atmosphere. So the temperature of the atmosphere from which the emission originates determines the amount of radiation.

If the place of emission of radiation – on average – moves upward for some reason then the intensity decreases. Why? Because it is cooler the higher up you go in the troposphere. Likewise, if the place of emission – on average – moves downward for some reason, then the intensity increases (note 5).

More GHGs

6. If we add more radiatively-active gases (like water vapor and CO2) then the atmosphere becomes more “opaque” to terrestrial radiation and the consequence is the emission to space from the atmosphere moves higher up (on average). Higher up is colder. See note 6.

So this reduces the intensity of emission of radiation, which reduces the outgoing radiation, which therefore adds energy into the climate system. And so the climate system warms (see note 7).
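To get a feel for the magnitude, here is a back-of-envelope sketch (using the ~6 K/km lapse rate quoted earlier and a 255 K emission temperature – not a radiative-transfer calculation): if the mean emission level rises by Δz, the emitting temperature falls by about 6 K/km × Δz, and the emitted flux falls accordingly.

```python
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W/m^2/K^4
LAPSE = 6.0e-3        # ~6 K/km from the text, expressed in K/m
T_EMIT = 255.0        # effective emission temperature, K

def flux_change(delta_z_m):
    """Change in emitted flux (W/m^2) if the mean emission level
    rises by delta_z_m metres, assuming a fixed lapse rate."""
    t_new = T_EMIT - LAPSE * delta_z_m
    return SIGMA * t_new**4 - SIGMA * T_EMIT**4

# Raising the mean emission level by ~150 m cuts outgoing radiation
# by a few W/m^2 - the same order as the forcing from doubling CO2
dF = flux_change(150.0)
```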

That’s it!

It’s as simple as that. The end.

A Few Common Questions

CO2 is Already Saturated

There are almost 315,000 individual absorption lines for CO2 recorded in the HITRAN database. Some absorption lines are stronger than others. At the strongest point of absorption – 14.98 μm (667.5 cm-1), 95% of radiation is absorbed in only 1m of the atmosphere (at standard temperature and pressure at the surface). That’s pretty impressive.

By contrast, from 570 – 600 cm-1 (16.7 – 17.5 μm) and 730 – 770 cm-1 (13.0 – 13.7 μm) the CO2 absorption through the atmosphere is nowhere near “saturated”. It’s more like 30% absorbed through a 1km path.
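These two observations pin down very different Beer-Lambert absorption coefficients – a back-of-envelope sketch of my own, using the path lengths quoted above:

```python
import math

# Beer-Lambert: transmitted fraction t = exp(-k * L), so k = -ln(t) / L.
# Strong line center (14.98 um): 95% absorbed over 1 m at surface conditions.
k_strong = -math.log(1 - 0.95) / 1.0       # per metre
# Weaker band wings: roughly 30% absorbed over a 1 km path.
k_weak = -math.log(1 - 0.30) / 1000.0      # per metre

print(round(k_strong, 2))                  # ~3.0 m^-1
print(f"{k_weak:.1e}")                     # ~3.6e-04 m^-1, four orders weaker
print(f"{math.exp(-k_strong * 10):.0e}")   # transmission over 10 m: effectively zero
```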

You can see the complexity of these results in many graphs in Atmospheric Radiation and the “Greenhouse” Effect – Part Nine – calculations of CO2 transmittance vs wavelength in the atmosphere using the 300,000 absorption lines from the HITRAN database. See also Part Eight – interesting actual absorption values of CO2 in the atmosphere from Grant Petty’s book.

The complete result combining absorption and emission is calculated in Visualizing Atmospheric Radiation – Part Seven – CO2 increases – changes to TOA in flux and spectrum as CO2 concentration is increased

CO2 Can’t Absorb Anything of Note Because it is Only .04% of the Atmosphere

See the point above. Many spectroscopy professionals have measured the absorptivity of CO2. It has a huge variability in absorption, but the most impressive is that 95% of 14.98 μm radiation is absorbed in just 1m. How can that happen? Are spectroscopy professionals charlatans? You need evidence, not incredulity. Science involves measuring things and this has definitely been done. See the HITRAN database.

Water Vapor Overwhelms CO2

This is an interesting point, although not correct when we consider energy balance for the climate. See Visualizing Atmospheric Radiation – Part Four – Water Vapor – results of surface (downward) radiation and upward radiation at TOA as water vapor is changed.

The key point behind all the detail is that the top of atmosphere radiation change (as CO2 changes) is the important one. The surface change (forcing) from increasing CO2 is much weaker and often insignificant. Surface radiation changes from CO2 will, in many cases, be overwhelmed by water vapor.

Water vapor does not overwhelm CO2 high up in the atmosphere because there is very little water vapor there – and the radiative effect of water vapor is dramatically impacted by its concentration, due to the “water vapor continuum”.

The Calculation of the “Greenhouse” Effect is based on “Average Surface Temperature” and there is No Such Thing

Simplified calculations of the “greenhouse” effect use some averages to make some points. They help to create a conceptual model.

Real calculations, using the equations of radiative transfer, don’t use an “average” surface temperature and don’t rely on a 33K “greenhouse” effect. Would the temperature decrease 33K if all of the GHGs were removed from the atmosphere? Almost certainly not. Because of feedbacks. We don’t know the effect of all of the feedbacks. But would the climate be colder? Definitely.

See The Rotational Effect – why the rotation of the earth has absolutely no effect on climate, or so a parody article explains..

The Second Law of Thermodynamics Prohibits the Greenhouse Effect, or so some Physicists Demonstrated..

See The Three Body Problem – a simple example with three bodies to demonstrate how a “with atmosphere” earth vs a “without atmosphere earth” will generate different equilibrium temperatures. Please review the entropy calculations and explain (you will be the first) where they are wrong, or perhaps explain why entropy doesn’t matter (and revolutionize the field).

See Gerlich & Tscheuschner for the bait and switch routine by this operatic duo.

And see Kramm & Dlugi On Dodging the “Greenhouse” Bullet – Kramm & Dlugi demonstrate that the “greenhouse” effect doesn’t exist by writing a few words in a conclusion but carefully dodging the actual main point throughout their entire paper. However, they do recover Kepler’s laws and point out a few errors in a few websites. And note that one of the authors kindly showed up to comment on this article but never answered the important question asked of him. Probably just too busy.. Kramm & Dlugi also helpfully (unintentionally) explain that G&T were wrong, see Kramm & Dlugi On Illuminating the Confusion of the Unclear – Kramm & Dlugi step up as skeptics of the “greenhouse” effect, fans of Gerlich & Tscheuschner and yet clarify that colder atmospheric radiation is absorbed by the warmer earth..

And for more on that exciting subject, see Confusion over the Basics under the sub-heading The Second Law of Thermodynamics.

Feedbacks overwhelm the Greenhouse Effect

This is a totally different question. The “greenhouse” effect is the “greenhouse” effect. If the effect of more CO2 is totally countered by some feedback then that will be wonderful. But that is actually nothing to do with the “greenhouse” effect. It would be a consequence of increasing temperature.

As noted in the preamble, it is important to separate out the different building blocks in understanding climate.

Miskolczi proved that the Greenhouse Effect has no Effect

Miskolczi claimed that the greenhouse effect was true. He also claimed that more CO2 was balanced out by a corresponding decrease in water vapor. See the Miskolczi series for a tedious refutation of his paper, which was based on imaginary laws of thermodynamics and questionable experimental evidence.

Once again, it is important to be able to separate out two ideas. Is the greenhouse effect false? Or is the greenhouse effect true but wiped out by a feedback?

If you don’t care, so long as you get the right result you will be in ‘good’ company (well, you will join an extremely large company of people). But this blog is about science. Not wishful thinking. Don’t mix the two up..

Convection “Short-Circuits” the Greenhouse Effect

Let’s assume that regardless of the amount of energy arriving at the earth’s surface, the lapse rate stays constant – so the more heat arriving, the more heat leaves. That is, the temperature profile stays constant. (It’s a questionable assumption that also impacts the AGW question).

It doesn’t change the fact that with more GHGs, the radiation to space will be from a higher altitude. A higher altitude will be colder. Less radiation to space and so the climate warms – even with this “short-circuit”.

In a climate without convection, the surface temperature will start off higher, and the GHG effect from doubling CO2 will be higher. See Radiative Atmospheres with no Convection.

In summary, this isn’t an argument against the greenhouse effect, this is possibly an argument about feedbacks. The issue about feedbacks is a critical question in AGW, not a critical question for the “greenhouse” effect. Who can say whether the lapse rate will be constant in a warmer world?


Note 1 – An important exception is O2 absorbing solar radiation high up above the troposphere (lower atmosphere). But O2 does not absorb significant amounts of terrestrial radiation.

Note 2 – 99% of solar radiation has a wavelength <4μm. In these wavelengths, actually about 1/3 of solar radiation is absorbed in the atmosphere. By contrast, most of the terrestrial radiation, with a wavelength >4μm, is absorbed in the atmosphere.

Note 3 – see:

Density, Stability and Motion in Fluids – some basics about instability
Potential Temperature – explaining “potential temperature” and why the “potential temperature” increases with altitude
Temperature Profile in the Atmosphere – The Lapse Rate – lots more about the temperature profile in the atmosphere

Note 4 – see Earth’s Energy Budget – a series on the basics of the energy budget

Note 5 – the “place of emission” is a useful conceptual tool but in reality the emission of radiation takes place from everywhere between the surface and the stratosphere. See Visualizing Atmospheric Radiation – Part Three – Average Height of Emission – the complex subject of where the TOA radiation originated from, what is the “Average Height of Emission” and other questions.

Also, take a look at the complete series: Visualizing Atmospheric Radiation.

Note 6 – the balance between emission and absorption is found in the equations of radiative transfer. These are derived from fundamental physics – see Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations – the equations of radiative transfer including the plane parallel assumption and it’s nothing to do with blackbodies. The fundamental physics is not just proven in the lab, spectral measurements at top of atmosphere and the surface match the calculated values using the radiative transfer equations – see Theory and Experiment – Atmospheric Radiation – real values of total flux and spectra compared with the theory.

Also, take a look at the complete series: Atmospheric Radiation and the “Greenhouse” Effect

Note 7 – this calculation is under the assumption of “all other things being equal”. Of course, in the real climate system, all other things are not equal. However, to understand an effect “pre-feedback” we need to separate it from the responses of the system.

If we open an introductory atmospheric physics textbook, we find that the temperature profile in the troposphere (lower atmosphere) is mostly explained by convection. (See for example, Things Climate Science has Totally Missed? – Convection)

We also find that the temperature profile in the stratosphere is mostly determined by radiation. And that the overall energy balance of the climate system is determined by radiation.

Many textbooks introduce the subject of convection in this way:

  • what would the temperature profile be like if there was no convection, only radiation for heat transfer
  • why is the temperature profile actually different
  • how does pressure reduce with height
  • what happens to air when it rises and expands in the lower pressure environment
  • derivation of the “adiabatic lapse rate”, which in layman’s terms is the temperature change when we have relatively rapid movements of air
  • how the real world temperature profile (lapse rate) compares with the calculated adiabatic lapse rate and why
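For the fifth bullet, the derivation boils down to the dry adiabatic lapse rate Γ_d = g/c_p. A quick check with standard values (my own sketch):

```python
# The dry adiabatic lapse rate from the first law applied to a rising parcel:
# gamma_d = g / cp, using standard values for dry air.
g = 9.81      # gravitational acceleration, m/s^2
cp = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)

dry_lapse_k_per_km = g / cp * 1000.0
print(round(dry_lapse_k_per_km, 1))  # ~9.8 K/km
```

Moist processes (latent heat release) pull the real-world profile down toward the ~6.5 K/km value used elsewhere in this article – that comparison is the last bullet.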

We looked at the last four points in some detail in a few articles:

Density, Stability and Motion in Fluids – some basics about instability
Potential Temperature – explaining “potential temperature” and why the “potential temperature” increases with altitude
Temperature Profile in the Atmosphere – The Lapse Rate – lots more about the temperature profile in the atmosphere

In this article we will look at the first point.

All of the atmospheric physics textbooks I have seen use a very simple model for explaining the temperature profile in a fictitious “radiation only” environment. The simple model is great for giving insight into how radiation travels.

Physics textbooks, good ones anyway, try and use the simplest models to explain a phenomenon.

The simple model, in brief, is the “semi-gray approximation”. This says the atmosphere is completely transparent to solar radiation, but opaque to terrestrial radiation. Its main simplification is having a constant absorption with wavelength. This makes the problem nice and simple analytically – which means we can rewrite the starting equations and plot a nice graph of the result.
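For readers who want to see why the problem becomes analytically friendly: the standard two-stream solution for a gray atmosphere in radiative equilibrium is σT⁴(τ) = (OLR/2)(1 + 3τ/2), where τ is optical depth measured down from the top. A minimal sketch (the sample optical depths are purely illustrative):

```python
# Two-stream gray-atmosphere radiative equilibrium:
# sigma*T^4 = (OLR/2) * (1 + 3*tau/2), with tau measured from the top down.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4
OLR = 240.0       # outgoing longwave radiation, W/m^2

def gray_temperature(tau):
    """Radiative-equilibrium temperature at optical depth tau."""
    return ((OLR / 2.0) * (1.0 + 1.5 * tau) / SIGMA) ** 0.25

for tau in (0.0, 1.0, 4.0):  # top of atmosphere down to optically deep layers
    print(tau, round(gray_temperature(tau), 1))
```

Note the "skin temperature" at τ = 0 of about 214 K, and the rapid warming with optical depth – exactly the kind of nice graph these textbooks plot.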

However, atmospheric absorption is the total opposite of constant. Here is an example of the absorption vs wavelength of a minor “greenhouse” gas:

From Vardavas & Taylor (2007)

Figure 1

So from time to time I’ve wondered what the “no convection” atmosphere would look like with real GHG absorption lines. I also thought it would be especially interesting to see the effect of doubling CO2 in this fictitious environment.

This article is for curiosity value only, and for helping people understand radiative transfer a little better.

We will use the Matlab program seen in the series Visualizing Atmospheric Radiation. This does a line by line calculation of radiative transfer for all of the GHGs, pulling the absorption data out of the HITRAN database.

I updated the program in a few subtle ways. Mainly, the different treatment of the stratosphere – the place where convection stops – was removed, because in this fictitious world there is no convection in the lower atmosphere either.

Here is a simulation based on 380 ppm CO2, 1775 ppb CH4, 319 ppb N2O and 50% relative humidity all through the atmosphere. Top of atmosphere was 100 mbar and the atmosphere was divided into 40 layers of equal pressure. Absorbed solar radiation was set to 240 W/m² with no solar absorption in the atmosphere. That is (unlike in the real world), the atmosphere has been made totally transparent to solar radiation.

The starting point was a surface temperature of 288K (15ºC) and a lapse rate of 6.5K/km – with no special treatment of the stratosphere. The final surface temperature was 326K (53ºC), an increase of 38ºC:


Figure 2

The ocean depth was only 5m. This just helps get to a new equilibrium faster. If we change the heat capacity of a system like this the end result is the same, the only difference is the time taken.
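This “same equilibrium, different timescale” behaviour is easy to verify with a toy zero-dimensional energy balance – my own sketch, not the Matlab program (the 4.18 MJ/m³/K heat capacity of water is a standard value):

```python
# Zero-dimensional energy balance: C * dT/dt = absorbed - sigma*T^4.
# The equilibrium is set by the fluxes; the heat capacity only sets the
# time taken to get there.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrate(absorbed=240.0, depth_m=5.0, T0=288.0, dt=86400.0, steps=20000):
    """March a water-column climate forward in daily steps until equilibrium."""
    C = 4.18e6 * depth_m  # heat capacity of the water column, J/m^2/K
    T = T0
    for _ in range(steps):
        T += (absorbed - SIGMA * T**4) * dt / C
    return T

# Same final temperature whatever the ocean depth:
print(round(equilibrate(depth_m=5.0), 1), round(equilibrate(depth_m=50.0), 1))
```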

Water vapor was set at a relative humidity of 50%. For these first results I didn’t get the simulation to update the absolute humidity as the temperature changed. So the starting temperature was used to calculate absolute humidity and that mixing ratio was kept constant:


Figure 3

The lapse rate, or temperature drop per km of altitude:


Figure 4

The flux down and flux up vs altitude:


Figure 5

The top of atmosphere upward flux is 240 W/m² (actually at the 500 day point it was 239.5 W/m²) – the same as the absorbed solar radiation (note 1). The simulation doesn’t “force” the TOA flux to be this value. Instead, any imbalance in flux in each layer causes a temperature change, moving the surface and each part of the atmosphere into a new equilibrium.

A bit more technically for interested readers.. For a given layer we sum:

  • upward flux at the bottom of a layer minus upward flux at the top of a layer
  • downward flux at the top of a layer minus downward flux at the bottom of a layer

This sum equates to the “heating rate” of the layer. We then use the heat capacity and time to work out the temperature change. Then the next iteration of the simulation redoes the calculation.

And even more technically:

  • the upwards flux at the top of a layer = the upwards flux at the bottom of the layer x transmissivity of the layer plus the emission of that layer
  • the downwards flux at the bottom of a layer = the downwards flux at the top of the layer x transmissivity of the layer plus the emission of that layer
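In code form, the bookkeeping in these bullet points might look like the following sketch (the function names and the example numbers are mine, not taken from the Matlab program):

```python
def flux_across_layer(flux_in, transmissivity, emission):
    """Flux leaving a layer = flux entering x transmissivity + the layer's own
    emission (the same rule applies upwards and downwards)."""
    return flux_in * transmissivity + emission

def layer_heating(up_bottom, up_top, down_top, down_bottom):
    """Net flux gained by a layer, W/m^2, per the earlier bullet points."""
    return (up_bottom - up_top) + (down_top - down_bottom)

def step_temperature(T, net_flux, heat_capacity, dt):
    """One iteration: temperature change = net flux x time / heat capacity."""
    return T + net_flux * dt / heat_capacity

# A hypothetical layer gaining 2 W/m^2 net, with the heat capacity of a layer
# holding ~250 hPa of air (cp x mass ~ 1004 x 2550 J/m^2/K), stepped one day:
cp, mass = 1004.0, 2550.0
net = layer_heating(240.0, 239.0, 30.0, 29.0)
T_new = step_temperature(250.0, net, cp * mass, 86400.0)
print(round(T_new, 3))  # a small warming, as expected for a 2 W/m^2 imbalance
```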

End of “more technically”..

Anyway, the main result is the surface is much hotter and the temperature drop per km of altitude is much greater than the real atmosphere. This is because it is “harder” for heat to travel through the atmosphere when radiation is the only mechanism. As the atmosphere thins out, which means less GHGs, radiation becomes progressively more effective at transferring heat. This is why the lapse rate is lower higher up in the atmosphere.

Now let’s have a look at what happens when we double CO2 from its current value (380ppm -> 760 ppm):


Figure 6 – with CO2 doubled instantaneously from 380ppm at 500 days

The final surface temperature is 329.4 K, increased from 326.2 K. This is an increase (no feedback) of 3.2 K.

The “pseudo-radiative forcing” = 18.9 W/m² (which doesn’t include any change to solar absorption). This radiative forcing is the immediate change in the TOA flux. (It isn’t directly comparable to the IPCC standard definition, which is at the tropopause and after the stratosphere has come back into equilibrium – none of these have much meaning in a world without convection).

Let’s also look at the “standard case” of an increase from pre-industrial CO2 of 280 ppm to a doubling of 560 ppm. I ran this one for longer – 1,000 days before doubling CO2 and 2,000 days in total – because the starting point was less in balance. At the start, the TOA flux (outgoing longwave radiation) = 248 W/m². This means the climate was cooling quite a bit with the starting point we gave it.

At 280 ppm CO2, 1775 ppb CH4, 319 ppb N2O and 50% relative humidity (set at the starting point of 288K and 6.5K/km lapse rate), the surface temperature after 1,000 days = 323.9 K. At this point the TOA flux was 240.0 W/m². So overall the climate has cooled from its initial starting point but the surface is hotter.

This might seem surprising at first sight – the climate cools but the surface heats up? It’s simply that the “radiation-only” atmosphere has made it much harder for heat to get out. So the temperature drop per km of height is now much greater than it is in a convection atmosphere. Remember that we started with a temperature profile of 6.5K/km – a typical convection atmosphere.

After CO2 doubles to 560 ppm (and all other factors stay the same, including absolute humidity), the immediate effect is the TOA flux drops to 221 W/m² (once again a radiative forcing of about 19 W/m²). This is because the atmosphere is now even more “resistant” to the escape of heat by radiation. The atmosphere is more opaque and so the average emission of radiation to space moves to a higher and colder part of the atmosphere. Colder parts of the atmosphere emit less radiation than warmer parts of the atmosphere.

After the climate moves back into balance – a TOA flux of 240 W/m² – the surface temperature = 327.0 K – an increase (pre-feedback) of 3.1 K.

Compare this with the standard IPCC “with convection” no-feedback forcing of 3.7 W/m² and a “no feedback” temperature rise of about 1.2 K.
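Where does a figure like 1.2 K come from? A common back-of-envelope version divides the forcing by the linearized blackbody response at the effective emission temperature – this crude version gives roughly 1 K (the usual 1.2 K comes from more detailed radiative calculations):

```python
# No-feedback temperature rise: forcing divided by the linearized
# Stefan-Boltzmann response dF/dT = 4*sigma*T^3 at the emission temperature.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
T_EFF = 255.0      # effective emission temperature, K
FORCING = 3.7      # W/m^2, standard doubled-CO2 forcing quoted above

response = 4.0 * SIGMA * T_EFF ** 3   # W/m^2 per K
dT_no_feedback = FORCING / response
print(round(dT_no_feedback, 2))  # roughly 1 K
```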


Figure 7 – with CO2 doubled instantaneously from 280ppm at 1000 days

Then I introduced a more realistic model with solar absorption by water vapor in the atmosphere (changed parameter ‘solaratm’ in the Matlab program from ‘false’ to ‘true’). Unfortunately this part of the program is not done by radiative transfer, only by a very crude parameterization, just to get roughly the right amount of heating by solar radiation in roughly the right parts of the atmosphere.

The equilibrium surface temperature at 280 ppm CO2 was now “only” 302.7 K (almost 30ºC). Doubling CO2 to 560 ppm created a radiative forcing of 11 W/m², and a final surface temperature of 305.5K – that is, an increase of 2.8K.

Why is the surface temperature lower? Because in the “no solar absorption in the atmosphere” model, all of the solar radiation is absorbed by the ground and has to “fight its way out” from the surface up. Once you absorb solar radiation higher up than the surface, it’s easier for this heat to get out.


One of the common themes of fantasy climate blogs is that the results of radiative physics are invalidated by convection, which “short-circuits” radiation in the troposphere. No one in climate science is confused about the fact that convection dominates heat transfer in the lower atmosphere.

We can see in this set of calculations that when we have a radiation-only atmosphere the surface temperature is a lot higher than any current climate – at least when we consider a “one-dimensional” climate.

Of course, the whole world would be different and there are many questions about the amount of water vapor and the effect of circulation (or lack of it) on moving heat around the surface of the planet via the atmosphere and the ocean.

When we double CO2 from its pre-industrial value the radiative forcing is much greater in a “radiation-only atmosphere” than in a “radiative-convective atmosphere”, with the pre-feedback temperature rise 3ºC vs 1ºC.

So it is definitely true that convection short-circuits radiation in the troposphere. But the whole climate system can only gain and lose energy by radiation and this radiation balance still has to be calculated. That’s what current climate models do.

It’s often stated as a kind of major simplification (a “teaching model”) that with increases in GHGs the “average height of emission” moves up, and therefore the emission is from a colder part of the atmosphere. This idea is explained in more detail and with fewer simplifications in Visualizing Atmospheric Radiation – Part Three – Average Height of Emission – the complex subject of where the TOA radiation originated from, what is the “Average Height of Emission” and other questions.

A legitimate criticism of current atmospheric physics is that convection is poorly understood in contrast to subjects like radiation. This is true. And everyone knows it. But it’s not true to say that convection is ignored. And it’s not true to say that because “convection short-circuits radiation” in the troposphere that somehow more GHGs will have no effect.

On the other hand I don’t want to suggest that because more GHGs in the atmosphere mean that there is a “pre-feedback” temperature rise of about 1K, that somehow the problem is all nicely solved. On the contrary, climate is very complicated. Radiation is very simple by comparison.

All the standard radiative-convective calculation says is: “all other things being equal, a doubling of CO2 from pre-industrial levels would lead to a 1K increase in surface temperature”

All other things are not equal. But the complication is not that somehow atmospheric physics has just missed out convection. Hilarious. Of course, I realize most people learn their criticisms of climate science from people who have never read a textbook on the subject. Surprisingly, this doesn’t lead to quality criticism..

On more complexity – I was also interested to see what happens if we readjust absolute humidity due to the significant temperature changes, i.e. we keep relative humidity constant. This led to some surprising results, so I will post them in a follow-up article.


Note 1 – The boundary conditions are important if you want to understand radiative heat transfer in the atmosphere.

First of all, the downward longwave radiation at TOA (top of atmosphere) = 0. Why? Because there is no “longwave”, i.e., terrestrial radiation, from outside the climate system. So at the top of the atmosphere the downward flux = 0. As we move down through the atmosphere the flux gradually increases. This is because the atmosphere emits radiation. We can divide up the atmosphere into fictitious “layers”. This is how all numerical (finite element analysis) programs actually work. Each layer emits and each layer also absorbs. The balance depends on the temperature of the source radiation vs the temperature of the layer of the atmosphere we are considering.

At the bottom of the atmosphere, i.e., at the surface, the upwards longwave radiation is the surface emission. This emission is given by the Stefan-Boltzmann equation with an emissivity of 1.0 if we consider the surface as a blackbody, which is a reasonable approximation for most surface types – for more on this, see Visualizing Atmospheric Radiation – Part Thirteen – Surface Emissivity – what happens when the earth’s surface is not a black body – useful to understand seeing as it isn’t..

At TOA, the upwards emission needs to equal the absorbed solar radiation, otherwise the climate system has an imbalance – either cooling or warming.

As a friend of mine in Florida says:

You can’t kill stupid, but you can dull it with a 4×2

Some ideas are so comically stupid that I thought there was no point writing about them. And yet, one after another, people who can type are putting forward these ideas on this blog.. At first I wondered if I was the object of a practical joke. Some kind of parody. Perhaps the joke is on me. But, just in case I was wrong about the practical joke..


If you pick up a textbook on heat transfer that includes a treatment of radiative heat transfer you find no mention of Arrhenius.

If you pick up a textbook on atmospheric physics none of the equations come from Arrhenius.

Yet there is a steady stream of entertaining “papers” which describe “where Arrhenius went wrong”, “Arrhenius and his debates with Fourier”. Who cares?

Likewise, if you study equations of motion in a rotating frame there is no discussion of where Newton went wrong, or where he got it right, or the debates he had with contemporaries. Who knows? Who cares?

History is fascinating. But if you want to study physics you can study it pretty well without reading about obscure debates between people who were in the formulation stages of the field.

Here are the building blocks of atmospheric radiation:

  • The emission of radiation – described by Nobel prize winner Max Planck’s equation and modified by the material property called emissivity (this is wavelength dependent)
  • The absorption of radiation by a surface – described by the material property called absorptivity (this is wavelength dependent and equal at the same wavelength and direction to emissivity)
  • The Beer-Lambert law of absorption of radiation by a gas
  • The spectral absorption characteristics of gases – currently contained in the HITRAN database – and based on work carried out over many decades and written up in journals like Journal of Quantitative Spectroscopy and Radiative Transfer
  • The theory of radiative transfer – the Schwarzschild equation – which was well documented by Nobel prize winner Subrahmanyan Chandrasekhar in his 1952 book Radiative Transfer (and by many physicists since)
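The first of those building blocks is easy to evaluate numerically. Here is Planck's function at the 15 μm CO2 band, comparing a warm surface with a cold upper troposphere – a sketch of my own, with rounded physical constants:

```python
import math

# Planck spectral radiance B(lambda, T) in W / (m^2 sr m).
H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_m, temperature_k):
    """Planck's law for spectral radiance at a given wavelength and temperature."""
    return (2.0 * H * C**2 / wavelength_m**5) / math.expm1(
        H * C / (wavelength_m * KB * temperature_k))

# Emission at 15 um from a 288 K surface vs a ~220 K upper troposphere:
b_warm = planck(15e-6, 288.0)
b_cold = planck(15e-6, 220.0)
print(round(b_warm / b_cold, 1))  # the colder layer emits roughly 3x less
```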

The steady stream of stupidity will undoubtedly continue, but if you are interested in learning about science then you can rule out blogs that promote papers which earnestly explain “where Arrhenius went wrong”.

Hit them with a 4 by 2.

Or, ask the writer where Subrahmanyan Chandrasekhar went wrong in his 1952 work Radiative Transfer. Ask the writer where Richard M. Goody went wrong. He wrote the seminal Atmospheric Radiation: Theoretical Basis in 1964.

They won’t even know these books exist and will have never read them. These books contain equations that are thoroughly proven over the last 100 years. There is no debate about them in the world of physics. In the world of fantasy blogs, maybe.

There is also a steady stream of people who believe an idea yet more amazing. Somehow basic atmospheric physics is proven wrong because of the last 15 years of temperature history.

The idea seems to be:

More CO2 is believed to have some radiative effect in the climate because of the last 100 years of temperature history; climate scientists saw some link and tried to explain it using CO2, but now that there has been no significant temperature increase for the last x years, this obviously demonstrates the original idea was false..

If you think this, please go and find a piece of 4×2 and ask a friend to hit you across the forehead with it. Repeat. I can’t account for this level of stupidity but I have seen that it exists.

An alternative idea, that I will put forward, one that has evidence, is that scientists discovered that they can reliably predict:

  • emission of radiation from a surface
  • emission of radiation from a gas
  • absorption of radiation by a surface
  • absorption of radiation by a gas
  • how to add up, subtract, divide and multiply, raise numbers to the power of, and other ninja mathematics

The question I have for the people with these comical ideas:

Do you think that decades of spectroscopy professionals have just failed to measure absorption? Their experiments were some kind of farce? No one noticed they made up all the results?

Do you think Max Planck was wrong?

Is it possible that climate is slightly complicated and temperature history relies upon more than one variable?

Did someone teach you that the absorption and emission of radiation was only “developed” by someone analyzing temperature vs CO2 since 1970 and not a single scientist thought to do any other measurements? Why did you believe them?

Bring out the 4×2.

Note – this article is a placeholder so I don’t have to bother typing out a subset of these points for the next entertaining commenter..

Update July 10th with the story of Fred the Charlatan

Let’s take the analogy of a small boat crossing the Atlantic.

Analogies don’t prove anything, they are for illustration. For proof, please review Theory and Experiment – Atmospheric Radiation.

We’ve done a few crossings and it’s taken 45 days, 42 days and 46 days (I have no idea what the right time is, I’m not a nautical person).

We measure the engine output – the torque of the propellers. We want to get across quicker. So Fred the engine guy makes a few adjustments and we remeasure the torque at 5% higher. We also do Fred’s standardized test, which is to zip across a local sheltered bay with no currents, no waves and no wind – the time taken for Fred’s standardized test is 4% faster. Nice.

So we all set out on our journey across the Atlantic. Winds, rain, waves, ocean currents. We have our books to read, Belgian beer and red wine and the time flies. Oh no, when we get to our final destination, it’s actually taken 47 days.

Clearly Fred is some kind of charlatan! No need to check his measurements or review the time across the bay. We didn’t make it across the Atlantic in less time and clearly the ONLY variable involved in that expedition was the output of the propeller.

Well, there’s no point trying to use more powerful engines to get across the Atlantic (or any ocean) faster. Torque has no relationship to speed. Case closed.

Analogy over.


In Part Seven – GCM I  through Part Ten – GCM IV we looked at GCM simulations of ice ages.

These were mostly attempts at “glacial inception”, that is, starting an ice age. But we also saw a simulation of the last 120 kyrs which attempted to model a complete ice age cycle including the last termination. As we saw, there were lots of limitations..

One condition for glacial inception, “perennial snow cover at high latitudes”, could be produced with a high-resolution coupled atmosphere-ocean GCM (AOGCM), but that model did suffer from the problem of having a cold bias at high latitudes.

The (reasonably accurate) simulation of a whole cycle including inception and termination came by virtue of having the internal feedbacks (ice sheet size & height and CO2 concentration) prescribed.

Just to be clear to new readers, these comments shouldn’t indicate that I’ve uncovered some secret that climate scientists are trying to hide, these points are all out in the open and usually highlighted by the authors of the papers.

In Part Nine – GCM III, one commenter highlighted a 2013 paper by Ayako Abe-Ouchi and co-workers, where the journal in question, Nature, had quite a marketing pitch on the paper. I made a brief comment on it in a later article in response to another question, noting that I had emailed the lead author asking a question about the modeling work (how was a 120 kyr cycle actually simulated?).

Most recently, in Eighteen – “Probably Nonlinearity” of Unknown Origin, another commenter highlighted it, which rekindled my enthusiasm, and I went back and read the paper again. It turns out that my understanding of the paper had been wrong. It wasn’t really a GCM paper at all. It was an ice sheet paper.

There is a whole field of papers on ice sheet models deserving attention.

GCM review

Let’s review GCMs first of all to help us understand where ice sheet models fit in the hierarchy of climate simulations.

GCMs consist of a number of different modules coupled together. The first GCMs were mostly “atmospheric GCMs” = AGCMs, and either they had a “swamp ocean” = a mixed layer of fixed depth, or had prescribed ocean boundary conditions set from an ocean model or from an ocean reconstruction.

Less commonly, unless you worked just with oceans, there were ocean GCMs with prescribed atmospheric boundary conditions (prescribed heat and momentum flux from the atmosphere).

Then coupled atmosphere-ocean GCMs came along = AOGCMs. It was a while before these two parts matched up to the point where there was no “flux drift”, that is, no disappearing heat flux from one part of the model.

Why was it so difficult to get these two models working together? One important reason comes down to the time-scales involved, which result from the difference in heat capacity and momentum of the two parts of the climate system. The heat capacity and momentum of the ocean are much, much higher than those of the atmosphere.

And when we add ice sheet models – ISMs – we have yet another time scale to consider.

  • the atmosphere changes in days, weeks and months
  • the ocean changes in years, decades and centuries
  • the ice sheets change in centuries, millennia and tens of millennia

This creates a problem for climate scientists who want to apply the fundamental equations of heat, mass & momentum conservation along with parameterizations for “stuff not well understood” and “stuff quite-well-understood but whose parameters are sub-grid”. To run a high resolution AOGCM for a 1,000-year simulation might consume 1 year of supercomputer time, and the ice sheet has barely moved during that period.
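The scale of the problem can be sketched with the rough figure above of 1,000 simulated years per year of supercomputer time:

```python
# Rough scaling, using the figure above: a high-resolution AOGCM
# simulates ~1,000 years per year of supercomputer time.
SIM_YEARS_PER_WALL_YEAR = 1_000
GLACIAL_CYCLE_YEARS = 120_000

# Wall-clock years needed to simulate one full glacial cycle
wall_clock_years = GLACIAL_CYCLE_YEARS / SIM_YEARS_PER_WALL_YEAR  # 120.0
```

A 120-year computing job is not a research project, which is why snapshot GCM runs plus interpolation get used instead.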

Ice Sheet Models

Scientists who study ice sheets have a whole bunch of different questions. They want to understand how the ice sheets developed.

What makes them grow, shrink, move, slide, melt.. What parameters are important? What parameters are well understood? What research questions are most deserving of attention? And:

Does our understanding of ice sheet dynamics allow us to model the last glacial cycle?

To answer that question we need a model for ice sheet dynamics, and to that model we need to apply boundary conditions from other “less interesting” models, like GCMs. As a result, there are a few approaches to setting the boundary conditions so we can do our interesting work of modeling ice sheets.

Before we look at that, let’s look at the dynamics of ice sheets themselves.

Ice Sheet Dynamics

First, in the theme of the last paper, Eighteen – “Probably Nonlinearity” of Unknown Origin, here is Marshall & Clark 2002:

The origin of the dominant 100-kyr ice-volume cycle in the absence of substantive radiation forcing remains one of the most vexing questions in climate dynamics

We can add that to the 34 papers reviewed in that previous article. This paper by Marshall & Clark is definitely a good quick read for people who want to understand ice sheets a little more.

Ice doesn’t conduct a lot of heat – it is a very good insulator. So the important things with ice sheets happen at the top and the bottom.

At the top, ice melts, and the water refreezes, runs off or evaporates. In combination, the loss is called ablation. Then we have precipitation that adds to the ice sheet. So the net effect determines what happens at the top of the ice sheet.

At the bottom, when the ice sheet is very thin, heat can be conducted through from the atmosphere to the base and make it melt – if the atmosphere is warm enough. As the ice sheet gets thicker, very little heat is conducted through. However, there are two important sources of basal heating which can result in “basal sliding”. One source is geothermal energy. This is around 0.1 W/m², which is very small unless we are dealing with an insulating material (like ice) and lots of time (like ice sheets). The other source is the shear stress in the ice sheet, which can create a lot of heat via the mechanics of deformation.
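To see why 0.1 W/m² matters only over long timescales, here is a back-of-envelope calculation using standard values for the density of ice and the latent heat of fusion (an idealization that sends all the geothermal heat into melting):

```python
LATENT_HEAT_FUSION = 3.34e5   # J/kg to melt ice
ICE_DENSITY = 917.0           # kg/m^3
GEOTHERMAL_FLUX = 0.1         # W/m^2, the figure quoted above
SECONDS_PER_YEAR = 3.156e7

# Energy to melt a 1 m thick column of ice, per m^2 of bed
energy_per_metre = LATENT_HEAT_FUSION * ICE_DENSITY

# Years for the geothermal flux alone to supply that energy
years_per_metre = energy_per_metre / (GEOTHERMAL_FLUX * SECONDS_PER_YEAR)
# roughly a century per metre of basal melt
```

Negligible on weather or ocean timescales, but over the millennia an ice sheet sits on its bed it is enough to bring the base to the melting point.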

Once the ice sheet is able to start sliding, the dynamics create a completely different result compared to an ice sheet “cold-pinned” to the rock underneath.

Some comments from Marshall and Clark:

Ice sheet deglaciation involves an amount of energy larger than that provided directly from high-latitude radiation forcing associated with orbital variations. Internal glaciologic, isostatic, and climatic feedbacks are thus essential to explain the deglaciation.

..Moreover, our results suggest that thermal enabling of basal flow does not occur in response to surface warming, which may explain why the timing of the Termination II occurred earlier than predicted by orbital forcing [Gallup et al., 2002].

Results suggest that basal temperature evolution plays an important role in setting the stage for glacial termination. To confirm this hypothesis, model studies need improved basal process physics to incorporate the glaciological mechanisms associated with ice sheet instability (surging, streaming flow).

..Our simulations suggest that a substantial fraction (60% to 80%) of the ice sheet was frozen to the bed for the first 75 kyr of the glacial cycle, thus strongly limiting basal flow. Subsequent doubling of the area of warm-based ice in response to ice sheet thickening and expansion and to the reduction in downward advection of cold ice may have enabled broad increases in geologically- and hydrologically-mediated fast ice flow during the last deglaciation.

Increased dynamical activity of the ice sheet would lead to net thinning of the ice sheet interior and the transport of large amounts of ice into regions of intense ablation both south of the ice sheet and at the marine margins (via calving). This has the potential to provide a strong positive feedback on deglaciation.

The timescale of basal temperature evolution is of the same order as the 100-kyr glacial cycle, suggesting that the establishment of warm-based ice over a large enough area of the ice sheet bed may have influenced the timing of deglaciation. Our results thus reinforce the notion that at a mature point in their life cycle, 100-kyr ice sheets become independent of orbital forcing and affect their own demise through internal feedbacks.

[Emphasis added]

In this article we will focus on a 2007 paper by Ayako Abe-Ouchi, T Segawa & Fuyuki Saito. This paper is essentially the same modeling approach used in Abe-Ouchi’s 2013 Nature paper.

The Ice Model

The ice sheet model has a time step of 2 years, on a 1° latitude × 1° longitude grid from 30°N to the north pole, with 20 vertical levels.

Equations for the ice sheet include sliding velocity, ice sheet deformation, the heat transfer through the lithosphere, the bedrock elevation and the accumulation rate on the ice sheet.

Note, there is a reference that some of the model is based on work described in Sensitivity of Greenland ice sheet simulation to the numerical procedure employed for ice sheet dynamics, F Saito & A Abe-Ouchi, Ann. Glaciol., (2005) – but I don’t have access to this journal. (If anyone does, please email the paper to me at scienceofdoom – you know what goes here – gmail.com).

How did they calculate the accumulation on the ice sheet? There is an equation:


Ts is the surface temperature, dP is a measure of aridity and Aref is a reference value for accumulation. This is a highly parameterized method of calculating how much thicker or thinner the ice sheet is growing. The authors reference Marshall et al 2002 for this equation, and that paper is very instructive in how poorly understood ice sheet dynamics actually are.

Here is one part of the relevant section in Marshall et al 2002:

..For completeness here, note that we have also experimented with spatial precipitation patterns that are based on present-day distributions.

Under this treatment, local precipitation rates diminish exponentially with local atmospheric cooling, reflecting the increased aridity that can be expected under glacial conditions (Tarasov and Peltier, 1999).

Paleo-precipitation under this parameterization has the form:

P(λ,θ,t) = Pobs(λ,θ) (1+dP)^ΔT(λ,θ,t) × exp[βP·max(hs(λ,θ,t) − ht, 0)]       (18)

The parameter dP in this equation represents the percentage of drying per °C; Tarasov and Peltier (1999) choose a value of 3% per °C, i.e. dP = 0.03.

[Emphasis added, color added to highlight the relevant part of the equation]

So dp is a parameter that attempts to account for increasing aridity in colder glacial conditions, and in their 2002 paper Marshall et al describe it as 1 of 4 “free parameters” that are investigated to see what effect they have on ice sheet development around the LGM.
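Equation (18) is straightforward to implement. A minimal sketch – the default βp and ht below are placeholders, since those values are experiment-specific, not taken from the paper:

```python
import math

def paleo_precip(p_obs, delta_t, hs, dp=0.03, beta_p=0.0, ht=0.0):
    """Paleo-precipitation, eq. (18) of Marshall et al 2002.

    p_obs   -- present-day precipitation at this grid point
    delta_t -- local temperature change vs present (negative = cooling)
    hs, ht  -- surface elevation and a threshold elevation
    dp      -- fractional drying per degC (Tarasov & Peltier 1999: 0.03)
    beta_p  -- elevation-desertification coefficient (placeholder default)
    """
    return p_obs * (1.0 + dp) ** delta_t * math.exp(beta_p * max(hs - ht, 0.0))

# 10 degC of local cooling dries precipitation to ~74% of its modern value
factor = paleo_precip(1.0, -10.0, 0.0)
```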

Abe-Ouchi and co-authors took a slightly different approach that certainly seems like an improvement over Marshall et al 2002:


So their value of aridity is just a linear function of ice sheet area – from zero to a fixed value, rather than a fixed value no matter the ice sheet size.
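The excerpt doesn’t give the exact functional form, but “a linear function of ice sheet area, from zero to a fixed value” can be sketched as follows (the variable names and the capping behaviour are my assumptions):

```python
def aridity_parameter(d_max, ice_area, ref_area):
    """Aridity rising linearly with ice sheet area, capped at d_max.

    d_max    -- the fixed maximum aridity value
    ice_area -- current ice sheet area
    ref_area -- area at which maximum aridity is reached (assumed)
    """
    return d_max * min(ice_area / ref_area, 1.0)
```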

How is Ts calculated? That comes, in a way, from the atmospheric GCM, but probably not in a way that readers might expect. So let’s have a look at the GCM then come back to this calculation of Ts.

Atmospheric GCM Simulations

There were three groups of atmospheric GCM simulations, with parameters selected to try and tease out which factors have the most impact.

Group One: high resolution GCM – 1.1º latitude and longitude and 20 atmospheric vertical levels with fixed sea surface temperatures. So there is no ocean model; the ocean temperatures are prescribed. Within this group, four experiments:

  • A control experiment – modern day values
  • LGM (last glacial maximum) conditions for CO2 (note 1) and orbital parameters with
    • no ice
    • LGM ice extent but zero thickness
    • LGM ice extent and LGM thickness

So the idea is to compare results with and without the actual ice sheet to see how much impact orbital and CO2 values have vs the effect of the ice sheet itself – and then, for the ice sheet, to see whether the albedo or the elevation has the most impact. Why the elevation? Well, if an ice sheet is 1 km thick then the surface temperature will be something like 6ºC colder. (Exactly how much colder is an interesting question, because we don’t know what the lapse rate actually was). There will also be an effect on atmospheric circulation – you’ve stuck a “mountain range” in the path of the wind, so this changes the circulation.

Each of the four simulations was run for 11 or 13 years and the last 10 years’ results used:

From Abe-Ouchi et al 2007


Figure 1

It’s clear from this simulation that the full result (left graphic) is mostly caused by the ice sheet (right graphic) rather than CO2, orbital parameters and the SSTs (middle graphic). And the next figure in the paper shows the breakdown between the albedo effect and the height of the ice sheet:

From Abe-Ouchi et al 2007


Figure 2 – same color legend as figure 1

Now a lapse rate of 5 K/km was used. What would happen if a lapse rate of 9 K/km were used instead? There were no simulations done with different lapse rates. The authors comment:

..Other lapse rates could be used which vary depending on the altitude or location, while a lapse rate larger than 7 K/km or smaller than 4 K/km is inconsistent with the overall feature. This is consistent with the finding of Krinner and Genthon (1999), who suggest a lapse rate of 5.5 K/km, but is in contrast with other studies which have conventionally used lapse rates of 8 K/km or 6.5 K/km to drive the ice sheet models..
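It is easy to see numerically why the choice of lapse rate matters so much. A sketch, with an illustrative ice sheet height:

```python
def surface_temp(t_sea_level, height_km, lapse_rate_k_per_km):
    # temperature at the ice sheet surface, assuming a constant lapse rate
    return t_sea_level - lapse_rate_k_per_km * height_km

# For a 3 km thick ice sheet, the difference between assuming 5 K/km
# (as in the paper) and 9 K/km is:
spread = surface_temp(0.0, 3.0, 5.0) - surface_temp(0.0, 3.0, 9.0)  # 12.0 K
```

A 12 K spread in surface temperature feeds directly into the parameterized accumulation and ablation, so the simulated ice volume is very sensitive to this one assumed constant.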

Group Two – medium resolution GCM 2.8º latitude and longitude and 11 atmospheric vertical levels, with a “slab ocean” – this means the ocean is treated as one temperature through the depth of some fixed layer, like 50m. So it is allowing the ocean to be there as a heat sink/source responding to climate, but no heat transfer through to a deeper ocean.

There were five simulations in this group, one control (modern day everything) and four with CO2 & orbital parameters at the LGM:

  • no ice sheet
  • LGM ice extent, but flat
  • 12 kyrs ago ice extent, but flat
  • 12 kyrs ago ice extent and height

So this group takes a slightly more detailed look at ice sheet impact. Not surprisingly the simulation results give intermediate values for the ice sheet extent at 12 kyrs ago.

Group Three – medium resolution GCM as in group two, with ice sheets either at present day or LGM extent, and nine simulations covering different orbital values and different CO2 values – present day, 280 ppm or 200 ppm.

There was also some discussion of the impact of different climate models. I found this fascinating because the difference between CCSM and the other models appears to be as great as the difference in figure 2 (above) which identifies the albedo effect as more significant than the lapse rate effect:

From Abe-Ouchi et al 2007


Figure 3

And this naturally has me wondering about how much significance to put on the GCM simulation results shown in the paper. The authors also comment:

Based on these GCM results we conclude there remains considerable uncertainty over the actual size of the albedo effect.

Given there is also uncertainty over the lapse rate that actually occurred, it seems there is considerable uncertainty over everything.

Now let’s return to the ice sheet model, because so far we haven’t seen any output from the ice sheet model.

GCM Inputs into the Ice Sheet Model

The equation which calculates the change in accumulation on the ice sheet used a fairly arbitrary parameter dp, with (1+dp) raised to the power of Ts.

The ice sheet model has a 2-year time step. The GCM results don’t provide Ts across the surface grid every 2 years; they are snapshots for certain conditions. The ice sheet model uses this calculation for Ts:

Ts = Tref + ΔTice + ΔTCO2 + ΔTinsol + ΔTnonlinear

Tref is the reference temperature, which is present day climatology. The other ΔT (change in temperature) values are basically a linear interpolation between two values from the GCM simulations. Here is the ΔTCO2 value:



So think of it like this – we have found Ts at one value of CO2 higher and one value of CO2 lower from some snapshot GCM simulations. We plot a graph with CO2 on the x-axis and Ts on the y-axis, with just two points on the graph from these two experiments, and we draw a straight line between the two points.

To calculate Ts at, say, 50 kyrs ago, we look up the CO2 value at 50 kyrs from ice core data, and read the value of ΔTCO2 from the straight line on the graph.
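The look-up described above is ordinary linear interpolation between two snapshot experiments; the endpoint numbers below are illustrative, not values from the paper:

```python
def interpolate(x, x_lo, x_hi, y_lo, y_hi):
    # straight line through two GCM snapshot results, evaluated at x
    return y_lo + (x - x_lo) * (y_hi - y_lo) / (x_hi - x_lo)

# e.g. snapshots at 200 ppm and 280 ppm gave temperature offsets of
# -4.0 K and 0.0 K (illustrative numbers); look up CO2 = 220 ppm:
delta_t_co2 = interpolate(220.0, 200.0, 280.0, -4.0, 0.0)  # -3.0 K
```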

Likewise for the other parameters. Here is ΔTinsol:



So the method is extremely basic. Of course the model needs something..

Now, given that we have inputs for accumulation on the ice sheet, the ice sheet model can run. Here are the results. The third graph (3) is the sea level from proxy results so is our best estimate of reality, with (4) providing model outputs for different parameters of d0 (“desertification” or aridity) and lapse rate, and (5) providing outputs for different parameters of albedo and lapse rate:

From Abe-Ouchi et al 2007


Figure 4

There are three main points of interest.

Firstly, small changes in the parameters cause huge changes in the final results. The idea of aridity over ice sheets as just a linear function of ice sheet size is very questionable in itself. The idea of a constant lapse rate is extremely questionable. Together, using values that appear realistic, we can model much less ice sheet growth (sea level drop) or many times greater ice sheet growth than actually occurred.

Secondly, notice that the time of maximum ice sheet (lowest sea level) in the realistic results has sea level starting to rise around 12 kyrs ago, rather than the actual 18 kyrs ago. This might be due to the impact of orbital factors, which were at quite a low level (i.e., high latitude summer insolation was at quite a low level) when the last ice age finished, but have quite an impact in the model. Of course, we have covered this “problem” in a few previous articles in this series. In the context of this model it might be that the impact of the southern hemisphere leading the globe out of the last ice age is completely missing.

Thirdly – while this might be clear to some people, for many new to this kind of model it won’t be obvious – the inputs for the model are snapshots of the actual history. The model doesn’t simulate the actual start and end of the last ice age “by itself”. We feed into the GCM a few CO2 values. We feed into the GCM a few ice sheet extents and heights that (as best as can be reconstructed) actually occurred. The GCM gives us some temperature values for these snapshot conditions.

In the case of this ice sheet model, every 2 years (each time step of the ice sheet model) we “look up” the actual value of ice sheet extent and atmospheric CO2 and we linearly interpolate the GCM output temperatures for the current year. And then we crudely parameterize these values into some accumulation rate on the ice sheet.


This is our first foray into ice sheet models. It should be clear that the results are interesting but we are at a very early stage in modeling ice sheets.

The problems are:

  • the computational load required to run a GCM coupled with an ice sheet model over 120 kyrs is much too high, so it can’t be done
  • the resulting tradeoff uses a few GCM snapshot values to feed linearly interpolated temperatures into a parameterized accumulation equation
  • the effect of lapse rate on the results is extremely large and the actual value for lapse rate over ice sheets is very unlikely to be a constant and is also not known
  • our understanding of the fundamental equations of ice sheets is still at an early stage, as readers can see by reviewing the first two papers below, especially the second one

Articles in this Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes


References

Basal temperature evolution of North American ice sheets and implications for the 100-kyr cycle, SJ Marshall & PU Clark, GRL (2002) – free paper

North American Ice Sheet reconstructions at the Last Glacial Maximum, SJ Marshall, TS James, GKC Clarke, Quaternary Science Reviews (2002) – free paper

Climatic Conditions for modelling the Northern Hemisphere ice sheets throughout the ice age cycle, A Abe-Ouchi, T Segawa, and F Saito, Climate of the Past (2007) – free paper

Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume, Ayako Abe-Ouchi, Fuyuki Saito, Kenji Kawamura, Maureen E. Raymo, Jun’ichi Okuno, Kunio Takahashi & Heinz Blatter, Nature (2013) – paywall paper


Note 1 – the value of CO2 used in these simulations was 200 ppm, while CO2 at the LGM was actually 180 ppm. Apparently this value of 200 ppm was used in a major inter-comparison project (the PMIP), but I don’t know the reason why. PMIP = Paleoclimate Modelling Intercomparison Project, Joussaume and Taylor, 1995.

A while ago, in Part Three – Hays, Imbrie & Shackleton we looked at a seminal paper from 1976.

In that paper, the data now stretched back far enough in time for the authors to demonstrate something of great importance. They showed that changes in ice volume recorded by isotopes in deep ocean cores (see Seventeen – Proxies under Water I) had significant signals at the frequencies of obliquity, precession and one of the frequencies of eccentricity.

Obliquity is the change in the tilt of the earth’s axis, on a period of around 40 kyrs. Precession is the change in the time of year of the closest approach to the sun (right now the closest approach is in NH winter), on a period of around 20 kyrs (see Four – Understanding Orbits, Seasons and Stuff).

Both of these involve significant redistributions of solar energy. Obliquity changes the amount of solar insolation received by the poles versus the tropics. Precession changes the amount of solar insolation at high latitudes in summer versus winter. (Neither changes total solar insolation). This was nicely in line with Milankovitch’s theory – for a recap see Part Three.

I’m going to call this part Theory A, and paraphrase it like this:

The waxing and waning of the ice sheets has 40 kyr and 20 kyr periods, which are caused by the changing distribution of solar insolation due to obliquity and precession.

The largest signal in ocean cores over the last 800 kyrs has a component of about 100 kyrs (with some variability). That is, the ice ages start and end with a period of about 100 kyrs. Eccentricity varies on time periods of 100 kyrs and 400 kyrs, but with a very small change in total insolation (see Part Four).

Hays et al produced a completely separate theory, which I’m going to call Theory B, and paraphrase it like this:

The start and end of the ice ages has a 100 kyr period, which is caused by the changing eccentricity of the earth’s orbit.

Theory A and Theory B are both in the same paper and are both theories that “link ice ages to orbital changes”. In their paper they demonstrated Theory A but did not prove or demonstrate Theory B. Unfortunately, Theory B is the much more important one.

Here is what they said:

The dominant 100,000 year climatic component has an average period close to, and is in phase with, orbital eccentricity. Unlike the correlations between climate and the higher frequency orbital variations (which can be explained on the assumption that the climate system responds linearly to orbital forcing) an explanation of the correlations between climate and eccentricity probably requires an assumption of non-linearity.

[Emphasis added].

The only quibble I have with the above paragraph is the word “probably”. This word should have been removed. There is no doubt. An assumption of non-linearity is required as a minimum.

Now why does it “probably” or “definitely” require an assumption of non-linearity? And what does that mean?

A linearity assumption is one where the output is proportional to the input. For example: double the force applied to a vehicle and its acceleration doubles. Most things in the real world, and most things in climate, are non-linear. So, for example, double the (absolute) temperature of a body and the radiation it emits goes up by a factor of 16.
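The radiation example is the Stefan–Boltzmann law: emission goes as the fourth power of absolute temperature, so doubling the temperature multiplies the emitted radiation by 2⁴ = 16:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_radiation(t_kelvin):
    # black-body emission: non-linear (fourth power) in temperature
    return SIGMA * t_kelvin ** 4

# doubling the absolute temperature from 300 K to 600 K
ratio = emitted_radiation(600.0) / emitted_radiation(300.0)  # 16.0
```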

However, there isn’t a principle, an energy balance equation or even a climate model that can take this tiny change in incoming solar insolation over a 100 kyr period and cause the end of an ice age.

In fact, their statement wasn’t so much “an assumption of non-linearity” but “some non-linearity relationship that we are not currently able to model or demonstrate, some non-linearity relationship we have yet to discover”.

There is nothing wrong with their original statement as such (apart from “probably”), but an alternative way of writing from the available evidence could be:

The dominant 100,000 year climatic component has an average period close to, and is in phase with, orbital eccentricity. Unlike the correlations between climate and the higher frequency orbital variations.. an explanation of the correlations between climate and eccentricity is as yet unknown, remains to be demonstrated and there may in fact be no relationship at all.

Unfortunately, because Theory A and Theory B were in the same paper, because Theory A is well demonstrated, and because there is no accepted alternative on the cause of the start and end of ice ages (there are alternative hypotheses around natural resonance), Theory B has become “well accepted”.

And because everyone familiar with climate science knows that Theory A is almost certainly true, when you point out that Theory B doesn’t have any evidence, many people are confused and wonder why you are rejecting well-proven theories.

In the series so far, except in occasional comments, I haven’t properly explained the separation between the two theories and this article is an attempt to clear that up.

Now I will produce a sufficient quantity of papers and quote their “summary of the situation so far” to demonstrate that there isn’t any support for Theory B. The only support is the fact that one component frequency of eccentricity is “similar” to the frequency of the ice age terminations/inceptions, plus the safety in numbers support of everyone else believing it.

One other comment on paleoclimate papers that attempt to explain the 100 kyr period: it is the norm for published papers to introduce a new hypothesis. That doesn’t make the new hypothesis correct.

So if I produce a paper, and quote the author’s summary of “the state of work up to now” and that paper then introduces their new hypothesis which claims to perhaps solve the mystery, I haven’t quoted the author’s summary out of context.

Let’s take it as read that lots of climate scientists think they have come up with something new. What we are interested in is their review of the current state of the field and the evidence they cite in support of Theory B.

Before producing the papers I also want to explain why I think the idea behind Theory B is so obviously flawed, and not just because 38 years after Hays, Imbrie & Shackleton the mechanism is still a mystery.

Why Theory B is Unsupportable

If a non-linear relationship between a 0.1% change in insolation over a long period and deglaciation is to be established, it must also explain why significant temperature fluctuations in high latitude regions during glacials do not cause a termination.

Here are two high resolution examples from a Greenland ice core (NGRIP) during the last glaciation:

From Wolff et al 2010


The “non-linearity” hypothesis has more than one hill to climb. This second challenge is even more difficult than the first.

A tiny change in total insolation causes, via a yet to be determined non-linear effect, the end of each ice age, but this same effect does not amplify frequent large temperature changes of long duration to end an ice age (note 1).

Food for thought.

Theory C Family

Many papers which propose orbital reasons for ice age terminations do not propose eccentricity variations as the cause. Instead, they attribute terminations to specific insolation changes at specific latitudes, or various combinations of orbital factors completely unrelated to eccentricity variations. See Part Six – “Hypotheses Abound”.

Of course, one of these might be right. For now I will call them the Theory C family, so we remember that Theory C is not one theory, but a whole range of mostly incompatible theories.

But remember where the orbital hypothesis for ice age termination came from – the 100,000 year period of eccentricity variation “matching” (kind of matching) the 100,000 year period of the ice ages.

The Theory C Family does not have that starting point.


So let’s move onto papers. I started by picking off papers from the right category in my mind map that might have something to say, then I opened up every one of about 300 papers in my ice ages folder (alphabetical by author) and checked to see whether they had something to say on the cause of ice ages in the abstract or introduction. Most papers don’t have a comment because they are about details like d18O proxies, or the CO2 concentration in the Vostok ice core, etc. That’s why there aren’t 300 citations here.

And bold text within a citation is added by me for emphasis.

I looked for their citations (evidence) to back up any claim that orbital variations caused ice age terminations. In some cases I pull up what the citations said.


Last Interglacial Climates, Kukla et al (2002), by a cast of many including the famous Wallace S. Broecker, John Imbrie and Nicholas J. Shackleton:

At the end of the last interglacial period, over 100,000 yr ago, the Earth’s environments, similar to those of today, switched into a profoundly colder glacial mode. Glaciers grew, sea level dropped, and deserts expanded. The same transition occurred many times earlier, linked to periodic shifts of the Earth’s orbit around the Sun. The mechanism of this change, the most important puzzle of climatology, remains unsolved.

Note that “linked to periodic shifts of the Earth’s orbit” is followed by an “unknown mechanism”. Two of the authors were the coauthors of the classic 1976 paper that is most commonly cited as evidence for Theory B.


Millennial-scale variability during the last glacial: The ice core record, Wolff, Chappellaz, Blunier, Rasmussen & Svensson (2010)

The most significant climate variability in the Quaternary record is the alternation between glacial and interglacial, occurring at approximately 100 ka periodicity in the most recent 800 ka. This signal is of global scale, and observed in all climate records, including the long Antarctic ice cores (Jouzel et al., 2007a) and marine sediments (Lisiecki and Raymo, 2005). There is a strong consensus that the underlying cause of these changes is orbital (i.e. due to external forcing from changes in the seasonal and latitudinal pattern of insolation), but amplified by a whole range of internal factors (such as changes in greenhouse gas concentration and in ice extent).

Note the lack of citation for the underlying causes being orbital. However, as we will see, there is “strong consensus”. In this specific paper from the words used I believe the authors are supporting the Theory C Family, not Theory B.


The last glacial cycle: transient simulations with an AOGCM, Robin Smith & Jonathan Gregory (2012)

It is generally accepted that the timing of glacials is linked to variations in solar insolation that result from the Earth’s orbit around the sun (Hays et al. 1976; Huybers and Wunsch 2005). These solar radiative anomalies must have been amplified by feedback processes within the climate system, including changes in atmospheric greenhouse gas (GHG) concentrations (Archer et al. 2000) and ice-sheet growth (Clark et al. 1999), and whilst hypotheses abound as to the details of these feedbacks, none is without its detractors and we cannot yet claim to know how the Earth system produced the climate we see recorded in numerous proxy records.

I think I will classify this one as “Still a mystery”.

Note that the support for “linkage to variations in solar insolation” consists of Hays et al 1976 – Theory B – and Huybers and Wunsch 2005, who propose a contradictory theory (obliquity) – the Theory C Family. In this case the authors absolve themselves by pointing out that all the theories have flaws.


The timing of major climate terminations, ME Raymo (1997)

For the past 20 years, the Milankovitch hypothesis, which holds that the Earth’s climate is controlled by variations in incoming solar radiation tied to subtle yet predictable changes in the Earth’s orbit around the Sun [Hays et al., 1976], has been widely accepted by the scientific community. However, the degree to which and the mechanisms by which insolation variations control regional and global climate are poorly understood. In particular, the “100-kyr” climate cycle, the dominant feature of nearly all climate records of the last 900,000 years, has always posed a problem to the Milankovitch hypothesis..

..time interval between terminations is not constant; it varies from 84 kyr between Terminations IV and V to 120 kyr between Terminations III and II.

“Still a mystery”. (Maureen Raymo has written many papers on ice ages, is a coauthor of the LR04 ocean core database and cannot be considered an outlier.) Her paper then claims to solve the problem:

In conclusion, it is proposed that the interaction between obliquity and the eccentricity-modulation of precession as it controls northern hemisphere summer radiation is responsible for the pattern of ice volume growth and decay observed in the late Quaternary.

The solution was unknown, but the newly proposed solution is from the Theory C Family.


Glacial termination: sensitivity to orbital and CO2 forcing in a coupled climate system model, Yoshimori, Weaver, Marshall & Clarke (2001)

Glaciation (deglaciation) is one of the most extreme and fundamental climatic events in Earth’s history.. As a result, fluctuations in orbital forcing (e.g. Berger 1978; Berger and Loutre 1991) have been widely recognised as the primary triggers responsible for the glacial-interglacial cycles (Berger 1988; Bradley 1999; Broecker and Denton 1990; Crowley and North 1991; Imbrie and Imbrie 1979). At the same time, these studies revealed the complexity of the climate system, and produced several paradoxes which cannot be explained by a simple linear response of the climate system to orbital forcing.

At this point I was interested to find out how well the references cited (Berger 1988; Bradley 1999; Broecker and Denton 1990; Crowley and North 1991; Imbrie and Imbrie 1979) backed up the claim that orbital forcing is the primary trigger for glacial cycles.

Broecker & Denton (1990) is in Scientific American, which I don’t think counts as a peer-reviewed journal (even though a long time ago I subscribed to it and thought it was a great magazine). I was able to find only the abstract, which coincides with their peer-reviewed paper published the same year in Quaternary Science Reviews, The Role of Ocean-Atmosphere Reorganization in Glacial Cycles, so I’ll assume they are media hounds promoting their peer-reviewed paper for a wider audience, and look at the peer-reviewed paper. After commenting on the problems:

Such a linkage cannot explain synchronous climate changes of similar severity in both polar hemispheres. Also, it cannot account for the rapidity of the transition from full glacial toward full interglacial conditions. If glacial climates are driven by changes in seasonality, then another linkage must exist.

they state:

We propose that Quaternary glacial cycles were dominated by abrupt reorganizations of the ocean-atmosphere system driven by orbitally induced changes in fresh water transports which impact salt structure in the sea. These reorganizations mark switches between stable modes of operation of the ocean-atmosphere system. Although we think that glacial cycles were driven by orbital change, we see no basis for rejecting the possibility that the mode changes are part of a self-sustained internal oscillation that would operate even in the absence of changes in the Earth’s orbital parameters. If so, as pointed out by Saltzman et al. (1984), orbital cycles can merely modulate and pace a self-oscillating climate system.

So is this paper evidence for Theory B or for the Theory C Family? “..we think that..” “..we see no basis for rejecting the possibility.. self-sustained internal oscillation”. Is this evidence for the astronomical theory?

I can’t access Milankovitch theory and climate, Berger 1988 (thanks, Reviews of Geophysics!). If someone has it, please email it to me at scienceofdoom – you know what goes here – gmail.com. The other references are books, so I can’t access them: Crowley & North 1991 is Paleoclimatology, Vol. 16 of Oxford Monographs on Geology and Geophysics, OUP, and Imbrie & Imbrie 1979 is Ice Ages: Solving the Mystery.


Glacial terminations as southern warmings without northern control, E. W. Wolff, H. Fischer and R. Röthlisberger (2009)

However, the reason for the spacing and timing of interglacials, and the sequence of events at major warmings, remains obscure.

“Still a mystery”. This is a little different from Wolff’s comment in the paper above. Elsewhere (see his comments cited in Eleven – End of the Last Ice age) he has stated that ice age terminations are not understood:

Between about 19,000 and 10,000 years ago, Earth emerged from the last glacial period. The whole globe warmed, ice sheets retreated from Northern Hemisphere continents and atmospheric composition changed significantly. Many theories try to explain what triggered and sustained this transformation (known as the glacial termination), but crucial evidence to validate them is lacking.


The Last Glacial Termination, Denton, Anderson, Toggweiler, Edwards, Schaefer & Putnam (2009)

A major puzzle of paleoclimatology is why, after a long interval of cooling climate, each late Quaternary ice age ended with a relatively short warming leg called a termination. We here offer a comprehensive hypothesis of how Earth emerged from the last global ice age..

“Still a mystery”


Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation, Shakun, Clark, He, Marcott, Mix, Zhengyu Liu, Otto-Bliesner, Schmittner & Bard (2012)

Understanding the causes of the Pleistocene ice ages has been a significant question in climate dynamics since they were discovered in the mid-nineteenth century. The identification of orbital frequencies in the marine 18O/16O record, a proxy for global ice volume, in the 1970s demonstrated that glacial cycles are ultimately paced by astronomical forcing.

The citation is Hays, Imbrie & Shackleton 1976. Theory B with no support.


Northern Hemisphere forcing of Southern Hemisphere climate during the last deglaciation, He, Shakun, Clark, Carlson, Liu, Otto-Bliesner & Kutzbach (2013)

According to the Milankovitch theory, changes in summer insolation in the high-latitude Northern Hemisphere caused glacial cycles through their impact on ice-sheet mass balance. Statistical analyses of long climate records supported this theory, but they also posed a substantial challenge by showing that changes in Southern Hemisphere climate were in phase with or led those in the north.

The citation is Hays, Imbrie & Shackleton 1976. (Many of the same authors in this and the paper above).


Eight glacial cycles from an Antarctic ice core, EPICA Community Members (2004)

The climate of the last 500,000 years (500 kyr) was characterized by extremely strong 100-kyr cyclicity, as seen particularly in ice-core and marine-sediment records. During the earlier part of the Quaternary (before 1 million years ago; 1 Myr BP), cycles of 41 kyr dominated. The period in between shows intermediate behaviour, with marine records showing both frequencies and a lower amplitude of the climate signal. However, the reasons for the dominance of the 100-kyr (eccentricity) over the 41-kyr (obliquity) band in the later part of the record, and the amplifiers that allow small changes in radiation to cause large changes in global climate, are not well understood.

Is this accepting Theory B or not?


Now onto the alphabetical order..

Climatic Conditions for modelling the Northern Hemisphere ice sheets throughout the ice age cycle, Abe-Ouchi, Segawa & Saito (2007)

To explain why the ice sheets in the Northern Hemisphere grew to the size and extent that has been observed, and why they retreated quickly at the termination of each 100 kyr cycle is still a challenge (Tarasov and Peltier, 1997a; Berger et al., 1998; Paillard, 1998; Paillard and Parrenin, 2004). Although it is now broadly accepted that the orbital variations of the Earth influence climate changes (Milankovitch, 1930; Hays et al., 1976; Berger, 1978), the large amplitude of the ice volume changes and the geographical extent need to be reproduced by comprehensive models which include nonlinear mechanisms of ice sheet dynamics (Raymo, 1997; Tarasov and Peltier, 1997b; Paillard, 2001; Raymo et al., 2006).

The papers cited for this broad acceptance are, once again, Hays et al 1976, along with Berger 1978, who says:

It is not the aim of this paper to draw definitive conclusions about the astronomical theory of paleoclimates but simply to provide geologists with accurate theoretical values of the earth’s orbital elements and insolation..

Berger does go on to comment on eccentricity:

[Two figure excerpts from Berger 1978]

And this is simply again noting that the period for eccentricity is “similar” to the period for the ice age terminations.

Theory B with no support.


Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume, Abe-Ouchi, Saito, Kawamura, Raymo, Okuno, Takahashi & Blatter (2013)

Milankovitch theory proposes that summer insolation at high northern latitudes drives the glacial cycles, and statistical tests have demonstrated that the glacial cycles are indeed linked to eccentricity, obliquity and precession cycles. Yet insolation alone cannot explain the strong 100,000-year cycle, suggesting that internal climatic feedbacks may also be at work. Earlier conceptual models, for example, showed that glacial terminations are associated with the build-up of Northern Hemisphere ‘excess ice’, but the physical mechanisms underpinning the 100,000-year cycle remain unclear.

The citations for the statistical tests are Lisiecki 2010 and Huybers 2011.

Huybers 2011 claims that obliquity and precession (not eccentricity) are linked to deglaciations. This is a development of his earlier, very interesting 2007 hypothesis (Glacial variability over the last two million years: an extended depth-derived agemodel, continuous obliquity pacing, and the Pleistocene progression – to which we will return) that obliquity is the prime factor (though not necessarily the cause) in deglaciations.

Here is what Huybers says in his 2011 paper, Combined obliquity and precession pacing of late Pleistocene deglaciations:

The cause of these massive shifts in climate remains unclear not for lack of models, of which there are now over thirty, but for want of means to choose among them. Previous statistical tests have demonstrated that obliquity paces the 100-kyr glacial cycles [citations are his 2005 paper with Carl Wunsch and his 2007 paper], helping narrow the list of viable mechanisms, but have been inconclusive with respect to precession (that is, P > 0.05) because of small sample sizes and uncertain timing..
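Pacing tests of this kind reduce to a simple circular statistic: if terminations cluster at a preferred phase of the forcing cycle, the mean resultant length of their phases is large. Here is a minimal sketch of that statistic using purely synthetic event times; the 41-kyr period, the event times and the noise level are illustrative assumptions, not Huybers’ actual data or test procedure:

```python
import numpy as np

# Sketch of the circular statistic behind "pacing" tests: compute the
# phase of a 41-kyr obliquity cycle at each event time, then the mean
# resultant length R of those phases. R near 1 means the events cluster
# at a preferred phase; R near 0 means no pacing. Events are synthetic.
rng = np.random.default_rng(0)
period = 41.0  # kyr, obliquity

def rayleigh_R(times, period):
    """Mean resultant length of event phases relative to one cycle."""
    phases = 2 * np.pi * np.asarray(times) / period
    return np.abs(np.mean(np.exp(1j * phases)))

# events at every second obliquity cycle, with a little timing noise
paced = np.arange(0, 820, 2 * period) + rng.normal(0, 2, size=10)
# events scattered uniformly over the same 820 kyr interval
unpaced = rng.uniform(0, 820, size=10)

print(rayleigh_R(paced, period), rayleigh_R(unpaced, period))
```

For clustered events R comes out close to 1; for uniformly scattered events it is typically around 1/√n. The published tests are far more careful (age-model uncertainty, explicit null models, significance levels), but this is the core quantity being tested.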

In Links between eccentricity forcing and the 100,000-year glacial cycle, (2010), Lisiecki says:

Variations in the eccentricity (100,000 yr), obliquity (41,000 yr) and precession (23,000 yr) of Earth’s orbit have been linked to glacial–interglacial climate cycles. It is generally thought that the 100,000-yr glacial cycles of the past 800,000 yr are a result of orbital eccentricity [1–4]. However, the eccentricity cycle produces negligible 100-kyr power in seasonal or mean annual insolation, although it does modulate the amplitude of the precession cycle.

Alternatively, it has been suggested that the recent glacial cycles are driven purely by the obliquity cycle [5–7]. Here I use statistical analyses of insolation and the climate of the past five million years to characterize the link between eccentricity and the 100,000-yr glacial cycles. Using cross-wavelet phase analysis, I show that the relative phase of eccentricity and glacial cycles has been stable since 1.2 Myr ago, supporting the hypothesis that 100,000-yr glacial cycles are paced [8–10] by eccentricity [4,11]. However, I find that the time-dependent 100,000-yr power of eccentricity has been anticorrelated with that of climate since 5 Myr ago, with strong eccentricity forcing associated with weaker power in the 100,000-yr glacial cycle.

I propose that the anticorrelation arises from the strong precession forcing associated with strong eccentricity forcing, which disrupts the internal climate feedbacks that drive the 100,000-yr glacial cycle. This supports the hypothesis that internally driven climate feedbacks are the source of the 100,000-yr climate variations.
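The point that eccentricity modulates precession without contributing direct 100-kyr insolation power can be illustrated with a toy spectrum: amplitude-modulating a 23-kyr carrier with a 100-kyr envelope creates sidebands around the precession frequency, not a spectral line at 1/100 kyr⁻¹. A minimal sketch, with illustrative periods and modulation depth rather than real insolation:

```python
import numpy as np

# Toy spectrum: a 23-kyr "precession" carrier amplitude-modulated by a
# 100-kyr "eccentricity" envelope. The modulation shows up as sidebands
# at 1/23 +/- 1/100 cycles/kyr, with essentially no power at 1/100.
dt = 1.0                       # sample spacing, kyr
t = np.arange(0, 8192, dt)
signal = (1 + 0.5 * np.cos(2 * np.pi * t / 100.0)) * np.cos(2 * np.pi * t / 23.0)

freqs = np.fft.rfftfreq(len(t), d=dt)   # cycles/kyr
power = np.abs(np.fft.rfft(signal)) ** 2

def band_power(f0, half_width=5e-4):
    """Total power within +/- half_width of frequency f0."""
    band = (freqs > f0 - half_width) & (freqs < f0 + half_width)
    return power[band].sum()

p_carrier = band_power(1 / 23.0)               # precession line
p_sideband = band_power(1 / 23.0 + 1 / 100.0)  # modulation sideband
p_direct = band_power(1 / 100.0)               # direct "eccentricity" line

print(p_direct < 0.01 * p_sideband)   # prints True
```

This is the essence of the “100 kyr problem”: the forcing carries the 100-kyr period only as an envelope, so a linear response to insolation cannot produce a 100-kyr spectral line.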

So she acknowledges that Theory B is generally accepted, notes that some Theory C Family advocates are out there, and provides a new hybrid solution of her own.

References for the orbital eccentricity hypothesis [1–4] include Hays et al 1976 and Raymo 1997, cited above. However, Raymo didn’t think it had been demonstrated prior to her 1997 paper, in which she introduces the hypothesis that the control is primarily ice sheet size, obliquity and precession, modulated by eccentricity.

References for the obliquity hypothesis [5-7] include the Huybers & Wunsch 2005 and Huybers 2007 covered just before this reference.

So in summary – going back to how we dragged up these references – Abe-Ouchi and co-authors provide two citations in support of the statistical link between orbital variations and deglaciation. One citation claims primarily obliquity with maybe a place for precession – no link to eccentricity. Another citation claims a new theory for eccentricity as a phase-locking mechanism to an internal climate process.

These are two mutually exclusive ideas. But at least both papers attempted to prove their (exclusive) ideas.


Equatorial insolation: from precession harmonics to eccentricity frequencies, Berger, Loutre, & Mélice (2006):

Since the paper by Hays et al. (1976), spectral analyses of climate proxy records provide substantial evidence that a fraction of the climatic variance is driven by insolation changes in the frequency ranges of obliquity and precession variations. However, it is the variance components centered near 100 kyr which dominate most Upper Pleistocene climatic records, although the amount of insolation perturbation at the eccentricity driven periods close to 100-kyr (mainly the 95 kyr- and 123 kyr-periods) is much too small to cause directly a climate change of ice-age amplitude. Many attempts to find an explanation to this 100-kyr cycle in climatic records have been made over the last decades.

“Still a mystery”.


Multistability and hysteresis in the climate-cryosphere system under orbital forcing, Calov & Ganopolski (2005)

In spite of considerable progress in studies of past climate changes, the nature of vigorous climate variations observed during the past several million years remains elusive. A variety of different astronomical theories, among which the Milankovitch theory [Milankovitch, 1941] is the best known, suggest changes in Earth’s orbital parameters as a driver or, at least, a pacemaker of glacial-interglacial climate transitions. However, the mechanisms which translate seasonal and strongly latitude-dependent variations in the insolation into the global-scale climate shifts between glacial and interglacial climate states are the subject of debate.

“Still a mystery”


Ice Age Terminations, Cheng, Edwards, Broecker, Denton, Kong, Wang, Zhang, Wang (2009)

The ice-age cycles have been linked to changes in Earth’s orbital geometry (the Milankovitch or Astronomical theory) through spectral analysis of marine oxygen-isotope records (3), which demonstrate power in the ice-age record at the same three spectral periods as orbitally driven changes in insolation. However, explaining the 100 thousand-year (ky) recurrence period of ice ages has proved to be problematic because although the 100-ky cycle dominates the ice-volume power spectrum, it is small in the insolation spectrum. In order to understand what factors control ice age cycles, we must know the extent to which terminations are systematically linked to insolation and how any such linkage can produce a nonlinear response by the climate system at the end of ice ages.

“Still a mystery”. This paper claims (their new work) that terminations are all about high latitude NH insolation. They state, for the hypothesis of the paper:

In all four cases, observations are consistent with a classic Northern Hemisphere summer insolation intensity trigger for an initial retreat of northern ice sheets.

This is similar to Northern Hemisphere forcing of climatic cycles in Antarctica over the past 360,000 years, Kawamura et al (2007) – not cited here because they didn’t make a statement about “the problem so far”.


Orbital forcing and role of the latitudinal insolation/temperature gradient, Basil Davis & Simon Brewer (2009)

Orbital forcing of the climate system is clearly shown in the Earth’s record of glacial–interglacial cycles, but the mechanism underlying this forcing is poorly understood.

I’m not sure whether to classify this one as “Still a mystery”, Theory B or Theory C Family.


Evidence for Obliquity Forcing of Glacial Termination II, Drysdale, Hellstrom, Zanchetta, Fallick, Sánchez Goñi, Couchoud, McDonald, Maas, Lohmann & Isola (2009)

During the Late Pleistocene, the period of glacial-to-interglacial transitions (or terminations) has increased relative to the Early Pleistocene [~100 thousand years (ky) versus 40 ky]. A coherent explanation for this shift still eludes paleoclimatologists (3). Although many different models have been proposed (4), the most widely accepted one invokes changes in the intensity of high-latitude Northern Hemisphere summer insolation (NHSI). These changes are driven largely by the precession of the equinoxes (5), which produces relatively large seasonal and hemispheric insolation intensity anomalies as the month of perihelion shifts through its ~23-ky cycle.

Their “widely accepted” theory is from the Theory C Family. This is a different theory from the “widely accepted” theory B. Perhaps both are “widely accepted”, hopefully by different groups of scientists.


The role of orbital forcing, carbon dioxide and regolith in 100 kyr glacial cycles, Ganopolski & Calov (2011)

The origin of the 100 kyr cyclicity, which dominates ice volume variations and other climate records over the past million years, remains debatable..

..One of the major challenges to the classical Milankovitch theory is the presence of 100 kyr cycles that dominate global ice volume and climate variability over the past million years (Hays et al., 1976; Imbrie et al., 1993; Paillard, 2001).

This periodicity is practically absent in the principal “Milankovitch forcing” – variations of summer insolation at high latitudes of the Northern Hemisphere (NH).

The eccentricity of Earth’s orbit does contain periodicities close to 100 kyr and the robust phase relationship between glacial cycles and 100-kyr eccentricity cycles has been found in the paleoclimate records (Hays et al., 1976; Berger et al., 2005; Lisiecki, 2010). However, the direct effect of the eccentricity on Earth’s global energy balance is very small.

Moreover, eccentricity variations are dominated by a 400 kyr cycle which is also seen in some older geological records (e.g. Zachos et al., 1997), but is practically absent in the frequency spectrum of the ice volume variations for the last million years.

In view of this long-standing problem, it was proposed that the 100 kyr cycles do not originate directly from the orbital forcing but rather represent internal oscillations in the climate-cryosphere (Gildor and Tziperman, 2001) or climate-cryosphere-carbonosphere system (e.g. Saltzman and Maasch, 1988; Paillard and Parrenin, 2004), which can be synchronized (phase locked) to the orbital forcing (Tziperman et al., 2006).

Alternatively, it was proposed that the 100 kyr cycles result from the terminations of ice sheet buildup by each second or third obliquity cycle (Huybers and Wunsch, 2005) or each fourth or fifth precessional cycle (Ridgwell et al., 1999) or they originate directly from a strong, nonlinear, climate-cryosphere system response to a combination of precessional and obliquity components of the orbital forcing (Paillard, 1998).

“Still a mystery”.


Modeling the Climatic Response to Orbital Variations, Imbrie & Imbrie (1980)

This is not to say that all important questions have been answered. In fact, one purpose of this article is to contribute to the solution of one of the remaining major problems: the origin and history of the 100,000-year climatic cycle.

At least over the past 600,000 years, almost all climatic records are dominated by variance components in a narrow frequency band centered near a 100,000-year cycle (5-8, 12, 21, 38). Yet a climatic response at these frequencies is not predicted by the Milankovitch version of the astronomical theory – or any other version that involves a linear response (5, 6).

This paper is worth citing because the first author is a coauthor of Hays et al 1976. For interest, let’s look at what they attempt to demonstrate in their paper. They take the approach of producing different (simple) models with orbital forcing, to try to reproduce the geological record:

The goal of our modeling effort has been to simulate the climatic response to orbital variations over the past 500 kyrs. The resulting model fails to simulate four important aspects of this record. It fails to produce sufficient 100k power; it produces too much 23k and 19k power; it produces too much 413k power and it loses its match with the record around the time of the last 413k eccentricity minimum..

All of these failures are related to a fundamental shortcoming in the generation of 100k power.. Indeed it is possible that no function will yield a good simulation of the entire 500 kyr record under consideration here, because nonorbitally forced high-frequency fluctuations may have caused the system to flip or flop in an unpredictable fashion. This would be an example of Lorenz’s concept of an almost intransitive system..

..Progress in this direction will indicate what long-term variations need to be explained within the framework of a stochastic model and provide a basis for estimating the degree of unpredictability in climate.


On the structure and origin of major glaciation cycles, Imbrie, Boyle, Clemens, Duffy, Howard, Kukla, Kutzbach, Martinson, McIntyre, Mix, Molfino, Morley, Peterson, Pisias, Prell, Raymo, Shackleton & Toggweiler (1992)

It is now widely believed that these astronomical influences, through their control of the seasonal and latitudinal distribution of incident solar radiation, either drive the major climate cycles externally or set the phase of oscillations that are driven internally..

..In this paper we concentrate on the 23-kyr and 41-kyr cycles of glaciation. These prove to be so strongly correlated with large changes in seasonal radiation that we regard them as continuous, essentially linear responses to the Milankovitch forcing. In a subsequent paper we will remove these linearly forced components from each time series and examine the residual response. The residual response is dominated by a 100-kyr cycle, which has twice the amplitude of the 23- and 41-kyr cycles combined. In the band of periods near 100 kyr, variations in radiation correlated with climate are so small, compared with variations correlated with the two shorter climatic cycles, that the strength of the 100-kyr climate cycle must result from the channeling of energy into this band by mechanisms operating within the climate system itself.

In Part 2 (Imbrie et al 1993, same authors) they highlight in more detail the problem of explaining the 100 kyr period:

1. One difficulty in finding a simple Milankovitch explanation is that the amplitudes of all 100-kyr radiation signals are very small [Hays et al., 1976]. As an example, the amplitude of the 100-kyr radiation cycle at June 65N (a signal often used as a forcing in Milankovitch theories) is only 2 W/m² (Figure 1). This is 1 order of magnitude smaller than the same insolation signal in the 23- and 41-kyr bands, yet the system’s response in these two bands combined has about half the amplitude observed at 100 kyr.

2. Another fundamental difficulty is that variations in eccentricity are not confined to periods near 100 kyr. In fact, during the late Pleistocene, eccentricity variations at periods near 100 kyr are of the same order of magnitude as those at 413 kyr.. yet the d18O record for this time interval has no corresponding spectral peak near 400 kyr..

3. The high coherency observed between 100 kyr eccentricity and d18O signals is an average that hides significant mismatches, notably about 400 kyrs ago.

Their proposed solution:

In our model, the coupled system acts as a nonlinear amplifier that is particularly sensitive to eccentricity-driven modulations in the 23,000-year sea level cycle. During an interval when sea level is forced upward from a major low stand by a Milankovitch response acting either alone or in combination with an internally driven, higher-frequency process, ice sheets grounded on continental shelves become unstable, mass wasting accelerates, and the resulting deglaciation sets the phase of one wave in the train of 100 kyr oscillations.

This doesn’t really appear to be Theory B.


Orbital forcing of Arctic climate: mechanisms of climate response and implications for continental glaciation, Jackson & Broccoli (2003)

The growth and decay of terrestrial ice sheets during the Quaternary ultimately result from the effects of changes in Earth’s orbital geometry on climate system processes. This link is convincingly established by Hays et al. (1976) who find a correlation between variations of terrestrial ice volume and variations in Earth’s orbital eccentricity, obliquity, and longitude of the perihelion.

Hays et al 1976. Theory B with no support.


A causality problem for Milankovitch, Karner & Muller (2000)

We can conclude that the standard Milankovitch insolation theory does not account for the terminations of the ice ages. That is a serious and disturbing conclusion by itself. We can conclude that models that attribute the terminations to large insolation peaks (or, equivalently, to peaks in the precession parameter), such as the recent one by Raymo (23), are incompatible with the observations.

I’ll take this as “Still a mystery”.


Linear and non-linear response of late Neogene glacial cycles to obliquity forcing and implications for the Milankovitch theory, Lourens, Becker, Bintanja, Hilgen, Tuenter, van de Wal & Ziegler (2010)

Through the spectral analyses of marine oxygen isotope (d18O) records it has been shown that ice-sheets respond both linearly and non-linearly to astronomical forcing.

References in support of this statement include Imbrie et al 1992 & Imbrie et al 1993 that we reviewed above, and Pacemaking the Ice Ages by Frequency Modulation of Earth’s Orbital Eccentricity, JA Rial (1999):

The theory finds support in the fact that the spectra of the d18O records contain some of the same frequencies as the astronomical variations (2–4), but a satisfactory explanation of how the changes in orbital eccentricity are transformed into the 100-ky quasi-periodic fluctuations in global ice volume indicated by the data has not yet been found (5).

For interest, the claim for the new work in this paper:

Evidence from power spectra of deep-sea oxygen isotope time series suggests that the climate system of Earth responds nonlinearly to astronomical forcing by frequency modulating eccentricity-related variations in insolation. With the help of a simple model, it is shown that frequency modulation of the approximate 100,000-year eccentricity cycles by the 413,000-year component accounts for the variable duration of the ice ages, the multiple-peak character of the time series spectra, and the notorious absence of significant spectral amplitude at the 413,000-year period. The observed spectra are consistent with the classic Milankovitch theories of insolation..

So if we consider the 3 references they provide in support of the “astronomical hypothesis”, the latest one says that a solution to the 100 kyr problem has not yet been found – although, of course, this 1999 paper gives it its own best shot. Rial (1999) clearly doesn’t think that Imbrie et al 1992/1993 solved the problem.

And, of course, Rial (1999) proposes a different solution to Imbrie et al 1992/1993.


Dynamics between order and chaos in conceptual models of glacial cycles, Takahito Mitsui & Kazuyuki Aihara, Climate Dynamics (2013)

Hays et al. (1976) presented strong evidence for astronomical theories of ice ages. They found the primary frequencies of astronomical forcing in the geological spectra of marine sediment cores. However, the dominant frequency in geological spectra is approximately 1/100 kyr⁻¹, although this frequency component is negligible in the astronomical forcing. This is referred to as the ‘100 kyr problem.’

However, the linear response cannot appropriately account for the 100 kyr periodicity (Hays et al. 1976).

Ghil (1994) explained the appearance of the 100 kyr periodicity as a nonlinear resonance to the combination tone 1/109 kyr⁻¹ between the precessional frequencies 1/19 and 1/23 kyr⁻¹. Contrary to the linear resonance, the nonlinear resonance can occur even if the forcing frequencies are far from the internal frequency of the response system.
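The arithmetic behind Ghil’s combination tone is easy to check: the difference of the two precessional frequencies falls close to 1/109 kyr⁻¹.

```python
# Checking Ghil's combination tone: the difference between the two
# precessional frequencies, 1/19 and 1/23 per kyr, corresponds to a
# period close to 109 kyr.
f_diff = 1 / 19.0 - 1 / 23.0    # difference tone, cycles/kyr
period = 1 / f_diff             # kyr

print(round(period, 2))         # prints 109.25
```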

Benzi et al. (1982) proposed stochastic resonance as a mechanism of the 100 kyr periodicity, where the response to small external forcing is amplified by the effect of noise.

Tziperman et al. (2006) proposed that the timing of deglaciations is set by the astronomical forcing via the phase-locking mechanism.. De Saedeleer et al. (2013) suggested generalized synchronization (GS) to describe the relation between the glacial cycles and the astronomical forcing. GS means that there is a functional relation between the climate state and the state of the astronomical forcing. They also showed that the functional relation may not be unique for a certain model.

However, the nature of the relation remains to be elucidated.

“Still a mystery”.


Glacial cycles and orbital inclination, Richard Muller & Gordon MacDonald, Nature (1995)

According to the Milankovitch theory, the 100 kyr glacial cycle is caused by changes in insolation (solar heating) brought about by variations in the eccentricity of the Earth’s orbit. There are serious difficulties with this theory: the insolation variations appear to be too small to drive the cycles and a strong 400 kyr modulation predicted by the theory is not present..

We suggest that a radical solution is necessary to solve these problems, and we propose that the 100 kyr glacial cycle is caused, not by eccentricity, but by a previously ignored parameter: the orbital inclination, the tilt of the Earth’s orbital plane..

“Still a mystery”, with the new solution of a member of the Theory C Family.


Terminations VI and VIII (∼ 530 and ∼ 720 kyr BP) tell us the importance of obliquity and precession in the triggering of deglaciations, F. Parrenin & D. Paillard (2012)

The main variations of ice volume of the last million years can be explained from orbital parameters by assuming climate oscillates between two states: glaciations and deglaciations (Parrenin and Paillard, 2003; Imbrie et al., 2011) (or terminations). An additional combination of ice volume and orbital parameters seems to form the trigger of a deglaciation, while only orbital parameters seem to play a role in the triggering of glaciations. Here we present an optimized conceptual model which realistically reproduces ice volume variations during the past million years and in particular the timing of the 11 canonical terminations. We show that our model loses sensitivity to initial conditions only after ∼ 200 kyr at maximum: the ice volume observations form a strong attractor. Both obliquity and precession seem necessary to reproduce all 11 terminations and both seem to play approximately the same role.

Note that eccentricity variations are not cited as the cause.

The support for orbital parameters explaining ice age glaciation/deglaciation comes from two papers. First, Parrenin & Paillard: Amplitude and phase of glacial cycles from a conceptual model (2003):

Although we find astronomical frequencies in almost all paleoclimatic records [1,2], it is clear that the climatic system does not respond linearly to insolation variations [3]. The first well-known paradox of the astronomical theory of climate is the ‘100 kyr problem’: the largest variations over the past million years occurred approximately every 100 kyr, but the amplitude of the insolation signal at this frequency is not significant. Although this problem remains puzzling in many respects, multiple equilibria and thresholds in the climate system seem to be key notions to explain this paradoxical frequency.

Their solution:

To explain these paradoxical amplitude and phase modulations, we suggest here that deglaciations started when a combination of insolation and ice volume was large enough. To illustrate this new idea, we present a simple conceptual model that simulates the sea level curve of the past million years with very realistic amplitude modulations, and with good phase modulations.
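Their idea of a deglaciation trigger based on a combination of insolation and ice volume is easy to sketch as a toy model. The following is only an illustration of the switching mechanism, not their actual model: the "forcing" is an invented sum of two sinusoids standing in for precession and obliquity, and every parameter value is arbitrary.

```python
import numpy as np

# Toy two-state oscillator in the spirit of Parrenin & Paillard (2003).
# The forcing and all parameters below are invented for illustration.
dt = 1.0                                    # time step (kyr)
t = np.arange(0.0, 1000.0, dt)              # 1 Myr of model time
forcing = np.sin(2 * np.pi * t / 23) + 0.5 * np.sin(2 * np.pi * t / 41)

v = 0.0                                     # ice volume (arbitrary units)
state = "glaciation"
volumes, terminations = [], []
for i, f in enumerate(forcing):
    if state == "glaciation":
        v += dt * (0.05 - 0.02 * f)         # slow ice growth, modulated by forcing
        if v + f > 5.0:                     # trigger: combination of ice volume + forcing
            state = "deglaciation"
    else:
        v = max(0.0, v - dt * 0.5)          # rapid deglaciation
        if v == 0.0:
            state = "glaciation"            # a new glaciation begins
            terminations.append(t[i])
    volumes.append(v)

volumes = np.array(volumes)
# Terminations come out spaced roughly 80-130 kyr apart, even though the
# forcing contains only 23 kyr and 41 kyr periods.
```

The point of the sketch: with a threshold on ice volume plus forcing, only every fourth or fifth precession peak can trigger a termination, so the output has a long quasi-period that is entirely absent from the forcing itself.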

The other paper cited in support of an astronomical solution is A phase-space model for Pleistocene ice volume, Imbrie, Imbrie-Moore & Lisiecki, Earth and Planetary Science Letters (2011)

Numerous studies have demonstrated that Pleistocene glacial cycles are linked to cyclic changes in Earth’s orbital parameters (Hays et al., 1976; Imbrie et al., 1992; Lisiecki and Raymo, 2007); however, many questions remain about how orbital cycles in insolation produce the observed climate response. The most contentious problem is why late Pleistocene climate records are dominated by 100-kyr cyclicity.

Insolation changes are dominated by 41-kyr obliquity and 23-kyr precession cycles whereas the 100-kyr eccentricity cycle produces negligible 100-kyr power in seasonal or mean annual insolation. Thus, various studies have proposed that 100-kyr glacial cycles are a response to the eccentricity-driven modulation of precession (Raymo, 1997; Lisiecki, 2010b), bundling of obliquity cycles (Huybers and Wunsch, 2005; Liu et al., 2008), and/or internal oscillations (Saltzman et al., 1984; Gildor and Tziperman, 2000; Toggweiler, 2008).

Their new solution:

We present a new, phase-space model of Pleistocene ice volume that generates 100-kyr cycles in the Late Pleistocene as a response to obliquity and precession forcing. Like Parrenin and Paillard (2003), we use a threshold for glacial terminations. However, ours is a phase-space threshold: a function of ice volume and its rate of change. Our model is the first to produce an orbitally driven increase in 100-kyr power during the mid-Pleistocene transition without any change in model parameters.

Theory C Family – two (relatively) new papers (2003 & 2011) with similar theories are presented as support for the astronomical theory causing the ice ages. Note that the theory in Imbrie et al 2011 is not the 100 kyr eccentricity variation proposed by Hays, Imbrie and Shackleton 1976.


Coherence resonance and ice ages, Jon D. Pelletier, JGR (2003)

The processes and feedbacks responsible for the 100-kyr cycle of Late Pleistocene global climate change are still being debated. This paper presents a numerical model that integrates (1) long-wavelength outgoing radiation, (2) the ice-albedo feedback, and (3) lithospheric deflection within the simple conceptual framework of coherence resonance. Coherence resonance is a dynamical process that results in the amplification of internally generated variability at particular periods in a system with bistability and delay feedback..

..The 100-kyr cycle is a free oscillation in the model, present even in the absence of external forcing.

“Still a mystery” – with the new solution that is not astronomical forcing.


The 41 kyr world: Milankovitch’s other unsolved mystery, Maureen E. Raymo & Kerim Nisancioglu (2003)

All serious students of Earth’s climate history have heard of the ‘‘100 kyr problem’’ of Milankovitch orbital theory, namely the lack of an obvious explanation of the dominant 100 kyr periodicity in climate records of the last 800,000 years.

“Still a mystery” – except that Raymo thinks she has found the solution (see earlier)


Is the spectral signature of the 100 kyr glacial cycle consistent with a Milankovitch origin?, Ridgwell, Watson & Raymo (1999)

Global ice volume proxy records obtained from deep-sea sediment cores, when analyzed in this way, produce a narrow peak corresponding to a period of ~100 kyr that dominates the low frequency part of the spectrum. This contrasts with the spectrum of orbital eccentricity variation, often assumed to be the main candidate to pace the glaciations [Hays et al 1976], which shows two distinct peaks near 100 kyr and substantial power near the 413 kyr period.

Then their solution:

Milankovitch theory seeks to explain the Quaternary glaciations via changes in seasonal insolation caused by periodic changes in the Earth’s obliquity, orbital precession and eccentricity. However, recent high-resolution spectral analysis of d18O proxy climate records have cast doubt on the theory.. Here we show that the spectral signature of d18O records are entirely consistent with Milankovitch mechanisms in which deglaciations are triggered every fourth or fifth precessional cycle. Such mechanisms may involve the buildup of excess ice due to low summertime insolation at the previous precessional high.

So they don’t accept Theory B. They don’t claim the theory has been previously solved and they introduce a Theory C Family.
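The spectral contrast Ridgwell et al describe – two eccentricity lines near 100 kyr plus substantial ~400 kyr power, versus a single ~100 kyr peak in the δ18O records – is easy to check with a toy calculation. This is just a sketch: the series below is built from three of the leading eccentricity periods (roughly 95, 124 and 404 kyr), and the amplitudes are illustrative, not the actual expansion coefficients.

```python
import numpy as np

# Toy spectrum of an "eccentricity-like" series built from three of its
# leading periods (~95, ~124 and ~404 kyr). Amplitudes are illustrative.
dt = 1.0                                   # sampling interval (kyr)
t = np.arange(0, 4096, dt)                 # ~4 Myr: long enough to resolve 404 kyr
ecc = (0.011 * np.cos(2 * np.pi * t / 404.0)
       + 0.010 * np.cos(2 * np.pi * t / 95.0)
       + 0.007 * np.cos(2 * np.pi * t / 124.0))

power = np.abs(np.fft.rfft(ecc - ecc.mean())) ** 2
freq = np.fft.rfftfreq(len(t), d=dt)       # cycles per kyr
period = np.divide(1.0, freq, out=np.full_like(freq, np.inf), where=freq > 0)

# Compare power near 400 kyr with power in the 90-130 kyr band
p400 = power[(period > 300) & (period < 500)].sum()
p100 = power[(period > 90) & (period < 130)].sum()
print(p400 / p100)  # the two bands are comparable: the ~400 kyr line is far from negligible
```

An eccentricity-paced climate should therefore show substantial ~400 kyr power; the δ18O records do not, which is exactly the problem Ridgwell et al raise – and resolve by pacing deglaciations to every fourth or fifth precession cycle instead.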


In defense of Milankovitch, Gerard Roe (2006) – we reviewed this paper in Fifteen – Roe vs Huybers:

The Milankovitch hypothesis is widely held to be one of the cornerstones of climate science. Surprisingly, the hypothesis remains not clearly defined despite an extensive body of research on the link between global ice volume and insolation changes arising from variations in the Earth’s orbit.

And despite his interesting efforts at solving the problem he states towards the end of his paper:

The Milankovitch hypothesis as formulated here does not explain the large rapid deglaciations that occurred at the end of some of the ice age cycles.

Was it still a mystery, or just not well defined? And from his new work, I’m not sure whether he thinks he has solved the reason for some ice age terminations, or whether terminations are still a mystery.


The 100,000-Year Ice-Age Cycle Identified and Found to Lag Temperature, Carbon Dioxide, and Orbital Eccentricity, Nicholas J. Shackleton (the Shackleton from Hays et al 1976) (2000)

It is generally accepted that this 100-ky cycle represents a major component of the record of changes in total Northern Hemisphere ice volume (3). It is difficult to explain this predominant cycle in terms of orbital eccentricity because “the 100,000-year radiation cycle (arising from eccentricity variations) is much too small in amplitude and too late in phase to produce the corresponding climatic cycle by direct forcing”

So the Hays, Imbrie & Shackleton 1976 Theory B is not correct.

He does state:

Hence, the 100,000-year cycle does not arise from ice sheet dynamics; instead, it is probably the response of the global carbon cycle that generates the eccentricity signal by causing changes in atmospheric carbon dioxide concentration.

Note that this is in opposition to the papers by Imbrie et al (2011) and Parrenin & Paillard (2003) that were cited by Parrenin & Paillard (2012) in support of the astronomical theory of the ice ages.


Consequences of pacing the Pleistocene 100 kyr ice ages by nonlinear phase locking to Milankovitch forcing, Tziperman, Raymo, Huybers & Wunsch (2006)

Hays et al. [1976] established that Milankovitch forcing (i.e., variations in orbital parameters and their effect on the insolation at the top of the atmosphere) plays a role in glacial cycle dynamics. However, precisely what that role is, and what is meant by ‘‘Milankovitch theories’’ remains unclear despite decades of work on the subject [e.g., Wunsch, 2004; Rial and Anaclerio, 2000]. Current views vary from the inference that Milankovitch variations in insolation drives the glacial cycle (i.e., the cycles would not exist without Milankovitch variations), to the Milankovitch forcing causing only weak climate perturbations superimposed on the glacial cycles. A further possibility is that the primary influence of the Milankovitch forcing is to set the frequency and phase of the cycles (e.g., controlling the timing of glacial terminations or of glacial inceptions). In the latter case, glacial cycles would exist even in the absence of the insolation changes, but with different timing.

“Still a mystery” – but now solved with a Theory C Family (in their paper).


Quantitative estimate of the Milankovitch-forced contribution to observed Quaternary climate change, Carl Wunsch (2004)

The so-called Milankovitch hypothesis, that much of inferred past climate change is a response to near-periodic variations in the earth’s position and orientation relative to the sun, has attracted a great deal of attention. Numerous textbooks (e.g., Bradley, 1999; Wilson et al., 2000; Ruddiman, 2001) of varying levels and sophistication all tell the reader that the insolation changes are a major element controlling climate on time scales beyond about 10,000 years.

A recent paper begins ‘‘It is widely accepted that climate variability on timescales of 10⁴ yrs to 10⁵ yrs is driven primarily by orbital, or so-called Milankovitch, forcing.’’ (McDermott et al., 2001). To a large extent, embrace of the Milankovitch hypothesis can be traced to the pioneering work of Hays et al. (1976), who showed, convincingly, that the expected astronomical periods were visible in deep-sea core records..

..The long-standing question of how the slight Milankovitch forcing could possibly force such an enormous glacial–interglacial change is then answered by concluding that it does not do so.

“Still a mystery” – Wunsch does not accept Theory B and, at the time of this paper, didn’t accept the Theory C Family (he later co-authored a Theory C Family paper with Huybers). I cited this before in Part Six – “Hypotheses Abound”.


Individual contribution of insolation and CO2 to the interglacial climates of the past 800,000 years, Qiu Zhen Yin & André Berger (2012)

Climate variations of the last 3 million years are characterized by glacial-interglacial cycles which are generally believed to be driven by astronomically induced insolation changes.

No citation for the claim. Of course I agree that it is “generally believed”. Is this Theory B? Or Theory C? Or are they not sure?


Summary of the Papers

Out of about 300 papers checked, I found 34 papers (I might have missed a few) with a statement on the major cause of the ice ages separate from what they attempted to prove in their paper. These 34 papers were reviewed, with a further handful of cited papers examined to see what support they offered for the claim of the paper in question.

In respect of “What has been demonstrated up until our paper” – I count:

  • 19 “still a mystery”
  • 9 propose theory B
  • 6 supporting theory C

I have question marks over my own classification of about 10 of these because they lack clarity on what they believe is the situation to date.

Of course, each of the papers reviewed believes it has some solution to the mystery. That’s not primarily what I was interested in.

I wanted to see what all papers accept as the story so far, and what evidence they bring for this belief.

I found only one paper claiming theory B that attempted to produce any significant evidence in support.


Hays, Imbrie & Shackleton (1976) did not prove Theory B. They suggested it. Invoking “probably non-linearity” does not constitute proof for an apparent frequency correlation. Specifically, half an apparent frequency correlation – given that eccentricity has a 413 kyr component as well as a 100 kyr component.

Some physical mechanism is necessary. Of course, I’m certain Hays, Imbrie & Shackleton understood this (I’ve read many of their later papers).

Of the papers we reviewed, over half indicate that the solution is still a mystery. That is fine. I agree it is a mystery.

Some papers indicate that the theory is widely believed but not necessarily that they do. That’s probably fine. Although it is confusing for non-specialist readers of their paper.

Some papers cite Hays et al 1976 as support for theory B. This is amazing.

Some papers claim “astronomical forcing” and in support cite Hays et al 1976 plus a paper with a different theory from the Theory C Family. This is also amazing.

Some papers cite support for Theory C Family – an astronomical theory to explain the ice ages with a different theory than Hays et al 1976. Sometimes their cited papers align. However, between papers that accept something in the Theory C Family there is no consensus on which version of Theory C Family, and obviously therefore, on the papers which support it.

How can papers cite Hays et al for support of the astronomical theory of ice age inception/termination?

A citation is required for just about every claim in a paper, even one the entire world has known from childhood. It seems to be a journal convention/requirement:

The sun rises each day [see Kepler 1596; Newton 1687, Plato 370 BC]

Really? Newton didn’t actually prove it in his paper? Oh, you know what, I just had a quick look at the last few papers in my field and copied their citations so I could get on with putting forward my theory. Come on, we all know the sun rises every day, look out the window (unless you live in England). Anyway, so glad you called, let me explain my new theory, it solves all those other problems, I’ve really got something here..

Well, that might be part of the answer. It isn’t excusable, but introductions don’t have the focus they should have.

Why the Belief in Theory B?

This part I can’t answer. Lots of people have put forward theories, none is generally accepted. The reason for the ice age terminations is unknown. Or known by a few people and not yet accepted by the climate science community.

Is it ok to accept something that everyone else seems to believe, even though they all actually have a different theory? Is it ok to accept something as proven that is not really proven, because it is from a famous paper with 2500 citations?

Finally, the fact that most papers have some vague words at the start about the “orbital” or “astronomical” theory for the ice ages doesn’t mean that this theory has any support. Being scientific, being skeptical, means asking for evidence and definitely not accepting an idea just because “everyone else” appears to accept it.

I am sure people will take issue with me. In another blog I was told that scientists were just “dotting the i’s and crossing the t’s” and none of this was seriously in doubt. Apparently, I was following creationist tactics of selective and out-of-context quoting..

Well, I will be delighted and no doubt entertained to read these comments, but don’t forget to provide evidence for the astronomical theory of the ice ages.

Articles in this Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factors of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


Note 1: The temperature fluctuations measured in Antarctica are a lot smaller than Greenland but still significant and still present for similar periods. There are also some technical challenges with calculating the temperature change in Antarctica (the relationship between d18O and local temperature) that have been better resolved in Greenland.

