Archive for the ‘Climate Models’ Category

In Part One we had a look at some introductory ideas. In this article we will look at one of the ground-breaking papers in chaos theory – Deterministic nonperiodic flow, Edward Lorenz (1963). It has been cited more than 13,500 times.

There might be some introductory books on non-linear dynamics and chaos that don’t include a discussion of this paper – or at least a mention – but they will be in a small minority.

Lorenz was thinking about convection in the atmosphere, or any fluid heated from below, and reduced the problem to just three simple equations. However, the equations were still non-linear and because of this they exhibit chaotic behavior.

Cencini et al describe Lorenz’s problem:

Consider a fluid, initially at rest, constrained by two infinite horizontal plates maintained at constant temperature and at a fixed distance from each other. Gravity acts on the system perpendicular to the plates. If the upper plate is maintained hotter than the lower one, the fluid remains at rest and in a state of conduction, i.e., a linear temperature gradient establishes between the two plates.

If the temperatures are inverted, gravity induced buoyancy forces tend to rise toward the top the hotter, and thus lighter fluid, that is at the bottom. This tendency is contrasted by viscous and dissipative forces of the fluid so that the conduction state may persist.

However, as the temperature differential exceeds a certain amount, the conduction state is replaced by a steady convection state: the fluid motion consists of steady counter-rotating vortices (rolls) which transport upwards the hot/light fluid in contact with the bottom plate and downwards the cold heavy fluid in contact with the upper one.

The steady convection state remains stable up to another critical temperature difference above which it becomes unsteady, very irregular and hardly predictable.

Willem Malkus and Lou Howard of MIT came up with an equivalent system – the simplest version is shown in this video:

Figure 1

Steven Strogatz (1994) – an excellent introduction to dynamical and chaotic systems – explains and derives the equivalence between the classic Lorenz equations and this tilted waterwheel.

L63 (as I’ll call these equations) has three variables apart from time: intensity of convection (x), temperature difference between ascending and descending currents (y), deviation of temperature from a linear profile (z).

Here are some calculated results for L63  for the “classic” parameter values and three very slightly different initial conditions (blue, red, green in each plot) over 5,000 seconds, showing the start and end 50 seconds – click to expand:


Figure 2 – click to expand – initial conditions x,y,z = 0, 1, 0;  0, 1.001, 0;  0, 1.002, 0

We can see that quite early on the three trajectories diverge, and 5,000 seconds later the system still exhibits similar “non-periodic” characteristics.
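For readers who want to reproduce something like figure 2: my calculations used Matlab’s ode45, but an equivalent sketch of the first 50 seconds in Python, with scipy’s solve_ivp and the same equations and “classic” parameters as note 1, looks like this:

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0   # the "classic" parameters (note 1)

def lorenz(t, state):
    x, y, z = state
    return [SIGMA * (y - x), R * x - y - x * z, x * y - B * z]

t = np.linspace(0.0, 50.0, 5001)
runs = [solve_ivp(lorenz, (0.0, 50.0), [0.0, y0, 0.0],
                  t_eval=t, rtol=1e-9, atol=1e-9).y
        for y0 in (1.0, 1.001, 1.002)]

# Separation in x between the first two (blue and red) trajectories
sep = np.abs(runs[0][0] - runs[1][0])
print(f"max separation for t < 0.1 s: {sep[t < 0.1].max():.1e}")
print(f"max separation for t > 30 s:  {sep[t > 30].max():.1f}")
```

The early separation is tiny; by the second half of the run the two trajectories are effectively unrelated.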

For interest let’s zoom in on just over 10 seconds of ‘x’ near the start and end:


Figure 3

Going back to an important point from the first post, some chaotic systems will have predictable statistics even if the actual state at any future time is impossible to determine (due to uncertainty over the initial conditions).

So we’ll take a look at the statistics via a running average – click to expand:


Figure 4 – click to expand

Two things stand out – first of all the running average over more than 100 “oscillations” still shows a large amount of variability. So at any one time, if we were to calculate the average from our current and historical experience we could easily end up calculating a value that was far from the “long term average”. Second – the “short term” average, if we can call it that, shows large variation at any given time between our slightly divergent initial conditions.

So we might believe – and be correct – that the long term statistics of slightly different initial conditions are identical, yet be fooled in practice.
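The running averages in figures 4 & 5 are just moving-window means. A minimal sketch (the synthetic series here is a stand-in so the snippet runs on its own; in practice the filter is applied to the x, y or z output at the 0.01 second time step):

```python
import numpy as np

def running_mean(x, window_samples):
    """Centered moving average, same length as the input series."""
    kernel = np.ones(window_samples) / window_samples
    return np.convolve(x, kernel, mode="same")

dt = 0.01
t = np.arange(0.0, 500.0, dt)
# A stand-in for an L63 output series: a fast and a slow oscillation
x = np.sin(1.3 * t) + 0.5 * np.sin(0.07 * t)

smoothed = running_mean(x, int(50 / dt))   # 50 second filter window
```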

Of course, surely it sorts itself out over a longer time scale?

I ran the same simulation (with just the first two starting conditions) for 25,000 seconds and then used a filter window of 1,000 seconds – click to expand:


 Figure 5 – click to expand

The total variability is less, but we have a similar problem – it’s just lower in magnitude. Again we see that the statistics of two slightly different initial conditions – if we were to view them by the running average at any one time –  are likely to be different even over this much longer time frame.

From this 25,000 second simulation:

  • take 10,000 random samples each of 25 second length and plot a histogram of the means of each sample (the sample means)
  • same again for 100 seconds
  • same again for 500 seconds
  • same again for 3,000 seconds

Repeat for the data from the other initial condition.
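The sampling procedure above can be sketched as follows. Random data stands in for the 25,000 second x-output so the snippet is self-contained, and a cumulative-sum trick makes the 10,000 window means cheap to compute:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                                   # simulation time step, seconds
series = rng.standard_normal(2_500_000)     # stand-in for 25,000 s of output

# Cumulative sum lets us take the mean of any window in O(1)
csum = np.concatenate(([0.0], np.cumsum(series)))

def sample_means(n_samples, length_seconds):
    """Means of n randomly placed windows of the given length."""
    w = int(length_seconds / dt)
    starts = rng.integers(0, len(series) - w, size=n_samples)
    return (csum[starts + w] - csum[starts]) / w

for length in (25, 100, 500, 3000):
    spread = sample_means(10_000, length).std()
    print(f"{length:5d} s windows: spread of sample means = {spread:.4f}")
```

The spread of the sample means shrinks as the window length grows, which is the behavior seen across the four histograms.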

Here is the result:


Figure 6

To make it easier to see, here is the difference between the two sets of histograms, normalized by the maximum value in each set:


Figure 7

This is a different way of viewing what we saw in figures 4 & 5.

The spread of sample means shrinks as we increase the time period but the difference between the two data sets doesn’t seem to disappear (note 2).

Attractors and Phase Space

The above plots show how variables change with time. There’s another way to view the evolution of system dynamics and that is by “phase space”. It’s a name for a different kind of plot.

So instead of plotting x vs time, y vs time and z vs time – let’s plot x vs y vs z – click to expand:


Figure 8 – Click to expand – the colors blue, red & green represent the same initial conditions as in figure 2

Without some dynamic animation we can’t now tell how fast the system evolves. But we learn something else that turns out to be quite amazing. The system always ends up in the same region of “phase space”. Perhaps that doesn’t seem amazing yet..

Figure 8 was with three initial conditions that are almost identical. Let’s look at three initial conditions that are very different: x,y,z = 0, 1, 0;   5, 5, 5;   20, 8, 1:


Figure 9 - Click to expand
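A numerical check of that claim: integrate from the three very different starting points, discard the transient, and verify that every trajectory ends up inside the same rough bounding box. The bounds below are approximate figures for the classic attractor’s extent, not exact values:

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, s):
    x, y, z = s
    return [SIGMA * (y - x), R * x - y - x * z, x * y - B * z]

late_sections = []
for ic in ([0.0, 1.0, 0.0], [5.0, 5.0, 5.0], [20.0, 8.0, 1.0]):
    sol = solve_ivp(lorenz, (0.0, 100.0), ic, dense_output=True,
                    rtol=1e-8, atol=1e-8)
    # Keep only t = 50..100, after the transient has died away
    late_sections.append(sol.sol(np.linspace(50.0, 100.0, 2001)))

for x, y, z in late_sections:
    print(f"|x| < {np.abs(x).max():.0f}, |y| < {np.abs(y).max():.0f}, "
          f"z in ({z.min():.0f}, {z.max():.0f})")
```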

Here’s an example (in phase space, like figures 8 & 9) from Strogatz – a set of 10,000 closely separated initial conditions and how they separate at 3, 6, 9 and 15 seconds. The two key points:

  1. the fast separation of initial conditions
  2. the long term position of any of the initial conditions is still on the “attractor”

From Strogatz 1994

Figure 10

A dynamic visualization on Youtube with 500,000 initial conditions:

Figure 11

There’s a lot of theory around all of this as you might expect. But in brief, in a “dissipative system” the “phase volume” contracts exponentially to zero. Yet for the Lorenz system somehow it doesn’t quite manage that. Instead, there are an infinite number of 2-d surfaces. Or something. For the sake of a not overly complex discussion, a wide range of initial conditions ends up on something very close to a 2-d surface.

This is known as a strange attractor. And the Lorenz strange attractor looks like a butterfly.


Lorenz 1963 reduced convective flow (e.g., heating an atmosphere from the bottom) to a simple set of equations. Obviously these equations are a massively over-simplified version of anything like the real atmosphere. Yet, even with this very simple set of equations we find chaotic behavior.

Chaotic behavior in this example means:

  • very small differences get amplified extremely quickly so that no matter how much you increase your knowledge of your starting conditions it doesn’t help much (note 3)
  • starting conditions within certain boundaries will always end up within “attractor” boundaries, even though there might be non-periodic oscillations around this attractor
  • the long term (infinite) statistics can be deterministic but over any “smaller” time period the statistics can be highly variable


Deterministic nonperiodic flow, EN Lorenz, Journal of the Atmospheric Sciences (1963)

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Nonlinear Dynamics and Chaos, Steven H. Strogatz, Perseus Books (1994)


Note 1: The Lorenz equations:

dx/dt = σ(y - x)

dy/dt = rx - y - xz

dz/dt = xy - bz


x = intensity of convection

y = temperature difference between ascending and descending currents

z = deviation of temperature from a linear profile

σ = Prandtl number, ratio of momentum diffusivity to thermal diffusivity

r = Rayleigh number

b = “another parameter” (a geometric factor related to the aspect ratio of the convection rolls)

And the “classic parameters” are σ=10, b = 8/3, r = 28

Note 2: Lorenz 1963 has over 13,000 citations so I haven’t been able to find out if this system of equations is transitive or intransitive. Running Matlab on a home Mac reaches some limitations and I maxed out at 25,000 second simulations mapped onto a 0.01 second time step.

However, I’m not trying to prove anything specifically about the Lorenz 1963 equations, more to illustrate some important characteristics of chaotic systems.

Note 3: Small differences in initial conditions grow exponentially, until we reach the limits of the attractor. So it’s easy to show the “benefit” of more accurate data on initial conditions.

If we increase our precision on initial conditions by 1,000,000 times the increase in prediction time is a massive 2½ times longer.
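The arithmetic behind that “2½ times”, as a sketch: with exponential error growth at rate λ, the prediction horizon is roughly T ≈ (1/λ) ln(Δ/δ₀), where δ₀ is the initial error and Δ is the error size at which the forecast is useless. The values of λ, δ₀ and Δ below are illustrative assumptions, not fitted numbers:

```python
import math

lam = 0.9        # assumed growth rate (1/s), order of magnitude for L63
delta = 10.0     # assumed error size at which prediction is useless

def horizon(d0):
    """Time for an initial error d0 to grow to delta."""
    return math.log(delta / d0) / lam

coarse, fine = horizon(1e-3), horizon(1e-9)   # 10^6 times better precision
print(round(coarse, 1), round(fine, 1), round(fine / coarse, 2))  # 10.2 25.6 2.5
```

Because the horizon depends only logarithmically on δ₀, a million-fold improvement in precision buys a factor of 2.5, not a factor of a million.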


There are many classes of systems but in the climate blogosphere world two ideas about climate seem to be repeated the most.

In camp A:

We can’t forecast the weather two weeks ahead so what chance have we got of forecasting climate 100 years from now.

And in camp B:

Weather is an initial value problem, whereas climate is a boundary value problem. On the timescale of decades, every planetary object has a mean temperature mainly given by the power of its star according to Stefan-Boltzmann’s law combined with the greenhouse effect. If the sources and sinks of CO2 were chaotic and could quickly release and sequester large fractions of gas perhaps the climate could be chaotic. Weather is chaotic, climate is not.

Of course, like any complex debate, simplified statements don’t really help. So this article kicks off with some introductory basics.

Many inhabitants of the climate blogosphere already know the answer to this question, and with much conviction. A reminder for new readers that on this blog opinions are not so interesting, although occasionally entertaining. So instead, try to explain what evidence there is for your opinion. And, as suggested in About this Blog:

And sometimes others put forward points of view or “facts” that are obviously wrong and easily refuted.  Pretend for a moment that they aren’t part of an evil empire of disinformation and think how best to explain the error in an inoffensive way.


The equation for a simple pendulum is “non-linear”, although there is a simplified version of the equation, often used in introductions, which is linear. However, the number of variables involved is only two:

  • angle
  • speed

and this isn’t enough to create a “chaotic” system.

If we have a double pendulum, one pendulum attached at the bottom of another pendulum, we do get a chaotic system. There are some nice visual simulations around, which St. Google might help interested readers find.

If we have a forced damped pendulum like this one:


Figure 1 – the blue arrows indicate that the point O is being driven up and down by an external force

-we also get a chaotic system.

What am I talking about? What is linear & non-linear? What is a “chaotic system”?

Digression on Non-Linearity for Non-Technical People

Common experience teaches us about linearity. If I pick up an apple in the supermarket it weighs about 0.15 kg or 150 grams (also known in some countries as “about 5 ounces”). If I take 10 apples the collection weighs 1.5 kg. That’s pretty simple stuff. Most of our real world experience follows this linearity and so we expect it.

On the other hand, if I was near a very cold black surface held at 170K (-103ºC) and measured the radiation emitted it would be 47 W/m². If we then double the temperature of this surface to 340K (67ºC), what would I measure? 94 W/m²? Seems reasonable – double the absolute temperature and get double the radiation.. But it’s not correct.

The right answer is 758 W/m², which is 16x the amount. Surprising, but most actual physics, engineering and chemistry is like this. Double a quantity and you don’t get double the result.
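Those numbers come straight from the Stefan-Boltzmann law, j = σT⁴ for a black surface:

```python
SIGMA_SB = 5.67e-8   # Stefan-Boltzmann constant, W/m²/K⁴

def emitted(T_kelvin):
    """Radiation emitted by a black surface, W/m²."""
    return SIGMA_SB * T_kelvin ** 4

print(round(emitted(170)))   # 47 W/m²
print(round(emitted(340)))   # 758 W/m²
```

Doubling T multiplies the emission by 2⁴ = 16, which is where the 758 W/m² comes from.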

It gets more confusing when we consider the interaction of other variables.

Let’s take riding a bike [updated thanks to Pekka]. Once you get above a certain speed most of the resistance comes from the wind so we will focus on that. Typically the wind resistance increases as the square of the speed. So if you double your speed you get four times the wind resistance. Work done = force x distance moved, so with no head wind power input has to go up as the cube of speed (note 4). This means you have to put in 8x the effort to get 2x the speed.

On Sunday you go for a ride and the wind speed is zero. You get to 25 km/hr (16 miles/hr) by putting a bit of effort in – let’s say you are producing 150W of power (I have no idea what the right amount is). You want your new speedo to register 50 km/hr – so you have to produce 1,200W.

On Monday you go for a ride and the wind speed is 20 km/hr into your face. Probably should have taken the day off.. Now with 150W you get to only 14 km/hr, it takes almost 500W to get to your basic 25 km/hr, and to get to 50 km/hr it takes almost 2,400W. No chance of getting to that speed!

On Tuesday you go for a ride and the wind speed is the same so you go in the opposite direction and take the train home. Now with only 6W you get to go 25 km/hr, to get to 50km/hr you only need to pump out 430W.

In mathematical terms it’s quite simple: F = k(v-w)², Force = (a constant, k) x (road speed – wind speed) squared. Power, P = Fv = kv(v-w)². But notice that the effect of the “other variable”, the wind speed, has really complicated things.

To double your speed on the first day you had to produce eight times the power. To double your speed the second day you had to produce almost five times the power. To double your speed the third day you had to produce just over 70 times the power. All with the same physics.
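The three days in one place. The constant k is fixed by the Sunday figures (150 W at 25 km/hr in still air), and w > 0 means a tailwind:

```python
k = 150.0 / 25 ** 3   # fixed by: 150 W at 25 km/hr with no wind

def power(v, w=0.0):
    """P = k*v*(v-w)^2; v = road speed, w = wind speed (km/hr, tailwind +ve)."""
    return k * v * (v - w) ** 2

for day, w in [("Sunday", 0.0), ("Monday", -20.0), ("Tuesday", 20.0)]:
    print(f"{day}: 25 km/hr needs {power(25, w):.0f} W, "
          f"50 km/hr needs {power(50, w):.0f} W")
```

Same physics, same k, but the “other variable” w changes the power needed to double your speed from 8x to almost 5x to over 70x.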

The real problem with nonlinearity isn’t the problem of keeping track of these kinds of numbers. You get used to the fact that real science – real world relationships – has these kinds of factors and you come to expect them. And you have an equation that makes calculating them easy. And you have computers to do the work.

No, the real problem with non-linearity (the real world) is that many of these equations link together and solving them is very difficult and often only possible using “numerical methods”.

It is also the reason why something like climate feedback is very difficult to measure. Imagine measuring the change in power required to double speed on the Monday. It’s almost 5x, so you might think the relationship is something like the square of speed. On Tuesday it’s about 70 times, so you would come up with a completely different relationship. In this simple case we know that wind speed is a factor, we can measure it, and so we can “factor it out” when we do the calculation. But in a more complicated system, if you don’t know the “confounding variables”, or the relationships, what are you measuring? We will return to this question later.

When you start out doing maths, physics, engineering.. you do “linear equations”. These teach you how to use the tools of the trade. You solve equations. You rearrange relationships using equations and mathematical tricks, and these rearranged equations give you insight into how things work. It’s amazing. But then you move to “nonlinear” equations, aka the real world, which turns out to be mostly insoluble. So nonlinear isn’t something special, it’s normal. Linear is special. You don’t usually get it.

..End of digression

Back to Pendulums

Let’s take a closer look at a forced damped pendulum. Damped, in physics terms, just means there is something opposing the movement. We have friction from the air and so over time the pendulum slows down and stops. That’s pretty simple. And not chaotic. And not interesting.

So we need something to keep it moving. We drive the pivot point at the top up and down and now we have a forced damped pendulum. The equation that results (note 1) has the massive number of three variables – position, speed and now time to keep track of the driving up and down of the pivot point. Three variables seems to be the minimum to create a chaotic system (note 2).

As we increase the ratio of the forcing amplitude to the length of the pendulum (β in note 1) we can move through three distinct types of response:

  • simple response
  • a “chaotic start” followed by a deterministic oscillation
  • a chaotic system

This is typical of chaotic systems – certain parameter values or combinations of parameters can move the system between quite different states.

Here is a plot (note 3) of position vs time for the chaotic system, β=0.7, with two initial conditions, only different from each other by 0.1%:

Forced damped harmonic pendulum, b=0.7: Start angular speed 0.1; 0.1001

Figure 1

It’s a little misleading to view the angle like this because it is in radians and so needs to be mapped between 0-2π (but then we get a discontinuity on a graph that doesn’t match the real world). We can map the graph onto a cylinder plot but it’s a mess of reds and blues.

Another way of looking at the data is via the statistics – so here is a histogram of the position (θ), mapped to 0-2π, and angular speed (dθ/dt) for the two starting conditions over the first 10,000 seconds:

Histograms for 10,000 seconds

Figure 2

We can see they are similar but not identical (note the different scales on the y-axis).
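A sketch of how such histograms can be produced. It assumes the pivot-driven pendulum equation d²θ/dt² + γ’ dθ/dt + (α + β cos t) sin θ = 0 with β = 0.7 as above; the values of γ’ and α are illustrative assumptions (the parameters behind the figures aren’t recorded here), and the run is shorter than 10,000 seconds to keep it quick:

```python
import numpy as np
from scipy.integrate import solve_ivp

GAMMA_P, ALPHA, BETA = 0.1, 1.0, 0.7   # gamma' and alpha are assumed values

def pendulum(t, s):
    theta, omega = s
    return [omega, -GAMMA_P * omega - (ALPHA + BETA * np.cos(t)) * np.sin(theta)]

t = np.linspace(0.0, 2000.0, 20_001)   # shorter run than the figures, for speed
hists = []
for omega0 in (0.1, 0.1001):           # starting angular speeds, 0.1% apart
    sol = solve_ivp(pendulum, (0.0, 2000.0), [0.0, omega0],
                    t_eval=t, rtol=1e-8, atol=1e-8)
    theta_mod = np.mod(sol.y[0], 2 * np.pi)   # map the angle onto 0..2π
    h, edges = np.histogram(theta_mod, bins=50,
                            range=(0, 2 * np.pi), density=True)
    hists.append(h)
```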

That might be due to the shortness of the run, so here are the results over 100,000 seconds:

Histogram for 100,000 seconds

Figure 3

As we increase the timespan of the simulation the statistics of two slightly different initial conditions become more alike.

So if we want to know the state of a chaotic system at some point in the future, very small changes in the initial conditions will amplify over time, making the result unknowable – or no different from picking the state from a random time in the future. But if we look at the statistics of the results we might find that they are very predictable. This is typical of many (but not all) chaotic systems.

Orbits of the Planets

The orbits of the planets in the solar system are chaotic. In fact, even 3-body systems moving under gravitational attraction have chaotic behavior. So how did we land a man on the moon? This raises the interesting questions of timescales and amount of variation. Planetary movement – for our purposes – is extremely predictable over a few million years. But over 10s of millions of years we might have trouble predicting exactly the shape of the earth’s orbit – eccentricity, time of closest approach to the sun, obliquity.

However, it seems that even over a much longer time period the planets will still continue in their orbits – they won’t crash into the sun or escape the solar system. So here we see another important aspect of some chaotic systems – the “chaotic region” can be quite restricted. So chaos doesn’t mean unbounded.

According to Cencini, Cecconi & Vulpiani (2010):

Therefore, in principle, the Solar system can be chaotic, but not necessarily this implies events such as collisions or escaping planets..

However, there is evidence that the Solar system is “astronomically” stable, in the sense that the 8 largest planets seem to remain bound to the Sun in low eccentricity and low inclination orbits for time of the order of a billion years. In this respect, chaos mostly manifest in the irregular behavior of the eccentricity and inclination of the less massive planets, Mercury and Mars. Such variations are not large enough to provoke catastrophic events before extremely large time. For instance, recent numerical investigations show that for catastrophic events, such as “collisions” between Mercury and Venus or Mercury failure into the Sun, we should wait at least a billion years.

And bad luck, Pluto.

Deterministic, non-Chaotic, Systems with Uncertainty

Just to round out the picture a little, even if a system is not chaotic and is deterministic we might lack sufficient knowledge to be able to make useful predictions. If you take a look at figure 3 in Ensemble Forecasting you can see that with some uncertainty of the initial velocity and a key parameter the resulting velocity of an extremely simple system has quite a large uncertainty associated with it.

This case is qualitatively different of course. By obtaining more accurate values of the starting conditions and the key parameters we can reduce our uncertainty. Small disturbances don’t grow over time to the point where our calculation of a future condition might as well just be selected from a random time in the future.

Transitive, Intransitive and “Almost Intransitive” Systems

Many chaotic systems have deterministic statistics. So we don’t know the future state beyond a certain time. But we do know that a particular position, or other “state” of the system, will be between a given range for x% of the time, taken over a “long enough” timescale. These are transitive systems.

Other chaotic systems can be intransitive. That is, for a very slight change in initial conditions we can have a different set of long term statistics. So the system has no “statistical” predictability. Lorenz 1968 gives a good example.

Lorenz introduces the concept of almost intransitive systems. This is where, strictly speaking, the statistics over infinite time are independent of the initial conditions, but the statistics over “long time periods” are dependent on the initial conditions. And so he also looks at the interesting case (Lorenz 1990) of moving between states of the system (seasons), where we can think of the precise starting conditions each time we move into a new season moving us into a different set of long term statistics. I find it hard to explain this clearly in one paragraph, but Lorenz’s papers are very readable.


This is just a brief look at some of the basic ideas.

Other Articles in the Series

Part Two – Lorenz 1963


Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Climatic Determinism, Edward Lorenz (1968) – free paper

Can chaos and intransitivity lead to interannual variability?, Edward Lorenz, Tellus (1990) – free paper


Note 1 – The equation is easiest to “manage” after the original parameters are transformed so that tω->t. That is, the period of external driving, T0=2π under the transformed time base.

d²θ/dt² + γ’ dθ/dt + (α + β cos t) sin θ = 0



where θ = angle, γ’ = γ/ω, α = g/Lω², β =h0/L;

these parameters based on γ = viscous drag coefficient, ω = angular speed of driving, g = acceleration due to gravity = 9.8m/s², L = length of pendulum, h0=amplitude of driving of pivot point

Note 2 – This is true for continuous systems. Discrete systems can be chaotic with fewer variables

Note 3 – The results were calculated numerically using Matlab’s ODE (ordinary differential equation) solver, ode45.

Note 4 – Force = k(v-w)² where k is a constant, v = velocity, w = wind speed. Work done = force x distance moved, so Power, P = force x velocity.


P = kv(v-w)²

If we know k, v & w we can find P. If we have P, k & w and want to find v it is a cubic equation that needs solving.
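Expanding P = kv(v-w)² gives the cubic kv³ - 2kwv² + kw²v - P = 0, which a polynomial root-finder handles directly. Here with the Monday headwind numbers from the bike example (w = -20 km/hr, 150 W):

```python
import numpy as np

k, w, P = 150.0 / 25 ** 3, -20.0, 150.0   # headwind day: w = -20 km/hr
# P = k*v*(v-w)^2  expands to  k*v^3 - 2*k*w*v^2 + k*w^2*v - P = 0
roots = np.roots([k, -2 * k * w, k * w ** 2, -P])
v = roots[np.isreal(roots) & (roots.real > 0)].real
print(np.round(v, 1))   # the physically meaningful speed, ~13.7 km/hr
```

Descartes’ rule of signs guarantees exactly one positive real root here, the physically meaningful speed.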


In Ensemble Forecasting we had a look at the principles behind “ensembles of initial conditions” and “ensembles of parameters” in forecasting weather. Climate models are a little different from weather forecasting models but use the same physics and the same framework.

A lot of people, including me, have questions about “tuning” climate models. While looking for what the latest IPCC report (AR5) had to say about ensembles of climate models I found a reference to Tuning the climate of a global model by Mauritsen et al (2012). Unless you work in the field of climate modeling you don’t know the magic behind the scenes. This free paper (note 1) gives some important insights and is very readable:

The need to tune models became apparent in the early days of coupled climate modeling, when the top of the atmosphere (TOA) radiative imbalance was so large that models would quickly drift away from the observed state. Initially, a practice to input or extract heat and freshwater from the model, by applying flux-corrections, was invented to address this problem. As models gradually improved to a point when flux-corrections were no longer necessary, this practice is now less accepted in the climate modeling community.

Instead, the radiation balance is controlled primarily by tuning cloud-related parameters at most climate modeling centers while others adjust the ocean surface albedo or scale the natural aerosol climatology to achieve radiation balance. Tuning cloud parameters partly masks the deficiencies in the simulated climate, as there is considerable uncertainty in the representation of cloud processes. But just like adding flux-corrections, adjusting cloud parameters involves a process of error compensation, as it is well appreciated that climate models poorly represent clouds and convective processes. Tuning aims at balancing the Earth’s energy budget by adjusting a deficient representation of clouds, without necessarily aiming at improving the latter.

A basic requirement of a climate model is reproducing the temperature change from pre-industrial times (mid 1800s) until today. So the focus is on temperature change, or in common terminology, anomalies.
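To make the idea of “tuning to a target” concrete, here is a toy zero-dimensional energy balance model in which a single albedo parameter (standing in for the cloud parameters above) is adjusted by bisection until the model hits a 288 K target. This is only a caricature of the idea, nothing like the actual MPI-ESM procedure, and the effective emissivity is an assumed value:

```python
S0, SIGMA_SB = 1361.0, 5.67e-8      # solar constant (W/m²), Stefan-Boltzmann

def equilibrium_T(albedo, emissivity=0.61):
    """Absorbed solar = emitted longwave: S0(1-a)/4 = eps*sigma*T^4."""
    return (S0 * (1 - albedo) / (4 * emissivity * SIGMA_SB)) ** 0.25

# "Tune" the albedo by bisection until the equilibrium temperature hits 288 K
target, lo, hi = 288.0, 0.0, 0.9
for _ in range(60):
    mid = (lo + hi) / 2
    if equilibrium_T(mid) > target:   # model too warm: make it more reflective
        lo = mid
    else:
        hi = mid
print(f"tuned albedo = {mid:.3f}, T = {equilibrium_T(mid):.1f} K")
```

The tuned albedo lands near 0.3, but that only tells us the parameter was adjusted to hit the target, not that the model got the physics right, which is exactly the authors’ point about evaluating models on tuned quantities.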

It was interesting to see that if we plot the “actual modeled temperatures” from 1850 to present the picture doesn’t look so good (the grey curves are models from the coupled model inter-comparison projects: CMIP3 and CMIP5):

From Mauritsen et al 2012

Figure 1

The authors state:

There is considerable coherence between the model realizations and the observations; models are generally able to reproduce the observed 20th century warming of about 0.7 K..

Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. Although the inter-model span is only one percent relative to absolute zero, that argument fails to be reassuring. Relative to the 20th century warming the span is a factor four larger, while it is about the same as our best estimate of the climate response to a doubling of CO2, and about half the difference between the last glacial maximum and present.

They point out that adjusting parameters might just be offsetting one error against another..

In addition to targeting a TOA radiation balance and a global mean temperature, model tuning might strive to address additional objectives, such as a good representation of the atmospheric circulation, tropical variability or sea-ice seasonality. But in all these cases it is usually to be expected that improved performance arises not because uncertain or non-observable parameters match their intrinsic value – although this would clearly be desirable – rather that compensation among model errors is occurring. This raises the question as to whether tuning a model influences model-behavior, and places the burden on the model developers to articulate their tuning goals, as including quantities in model evaluation that were targeted by tuning is of little value. Evaluating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models intrinsic qualities.

[Emphasis added]. And they give a bit more insight into the tuning process:

A few model properties can be tuned with a reasonable chain of understanding from model parameter to the impact on model representation, among them the global mean temperature. It is comprehendible that increasing the models low-level cloudiness, by for instance reducing the precipitation efficiency, will cause more reflection of the incoming sunlight, and thereby ultimately reduce the model’s surface temperature.

Likewise, we can slow down the Northern Hemisphere mid-latitude tropospheric jets by increasing orographic drag, and we can control the amount of sea ice by tinkering with the uncertain geometric factors of ice growth and melt. In a typical sequence, first we would try to correct Northern Hemisphere tropospheric wind and surface pressure biases by adjusting parameters related to the parameterized orographic gravity wave drag. Then, we tune the global mean temperature as described in Sections 2.1 and 2.3, and, after some time when the coupled model climate has come close to equilibrium, we will tune the Arctic sea ice volume (Section 2.4).

In many cases, however, we do not know how to tune a certain aspect of a model that we care about representing with fidelity, for example tropical variability, the Atlantic meridional overturning circulation strength, or sea surface temperature (SST) biases in specific regions. In these cases we would rather monitor these aspects and make decisions on the basis of a weak understanding of the relation between model formulation and model behavior.

Here we see how CMIP3 & 5 models “drift” – that is, over a long period of simulation time how the surface temperature varies with TOA flux imbalance (and also we see the cold bias of the models):

From Mauritsen et al 2012

Figure 2

If a model equilibrates at a positive radiation imbalance it indicates that it leaks energy, which appears to be the case in the majority of models, and if the equilibrium balance is negative it means that the model has artificial energy sources. We speculate that the fact that the bulk of models exhibit positive TOA radiation imbalances, and at the same time are cold-biased, is due to them having been tuned without account for energy leakage.

[Emphasis added].

From that graph they discuss the implied sensitivity to radiative forcing of each model (the slope of each model and how it compares with the blue and red “sensitivity” curves).

We get to see some of the parameters that are played around with (a-h in the figure):

From Mauritsen et al 2012

Figure 3

And how changing some of these parameters affects (over a short run) “headline” parameters like TOA imbalance and cloud cover:

From Mauritsen et al 2012

Figure 4 – Click to Enlarge

There’s also quite a bit in the paper about tuning the Arctic sea ice that will be of interest for Arctic sea ice enthusiasts.

In some of the final steps we get a great insight into how the whole machine goes through its final tune up:

..After these changes were introduced, the first parameter change was a reduction in two non-dimensional parameters controlling the strength of orographic wave drag from 0.7 to 0.5.

This greatly reduced the low zonal mean wind- and sea-level pressure biases in the Northern Hemisphere in atmosphere-only simulations, and further had a positive impact on the global to Arctic temperature gradient and made the distribution of Arctic sea-ice far more realistic when run in coupled mode.

In a second step the conversion rate of cloud water to rain in convective clouds was doubled from 1×10⁻⁴ s⁻¹ to 2×10⁻⁴ s⁻¹ in order to raise the OLR to be closer to the CERES satellite estimates.

At this point it was clear that the new coupled model was too warm compared to our target pre-industrial temperature. Different measures using the convection entrainment rates, convection overshooting fraction and the cloud homogeneity factors were tested to reduce the global mean temperature.

In the end, it was decided to use primarily an increased homogeneity factor for liquid clouds from 0.70 to 0.77 combined with a slight reduction of the convective overshooting fraction from 0.22 to 0.21, thereby making low-level clouds more reflective to reduce the surface temperature bias. Now the global mean temperature was sufficiently close to our target value and drift was very weak. At this point we decided to increase the Arctic sea ice volume from 18×10¹² m³ to 22×10¹² m³ by raising the cfreeze parameter from 1/2 to 2/3. ECHAM5/MPIOM had this parameter set to 4/5. These three final parameter settings were done while running the model in coupled mode.

Some of the paper’s results (not shown here) are some “parallel worlds” with different parameters. In essence, while working through the model development phase they took a lot of notes of what they did, what they changed, and at the end they went back and created some alternatives from some of their earlier choices. The parameter choices along with a set of resulting climate properties are shown in their table 10.

Some summary statements:

Parameter tuning is the last step in the climate model development cycle, and invariably involves making sequences of choices that influence the behavior of the model. Some of the behavioral changes are desirable, and even targeted, but others may be a side effect of the tuning. The choices we make naturally depend on our preconceptions, preferences and objectives. We choose to tune our model because the alternatives – to either drift away from the known climate state, or to introduce flux-corrections – are less attractive. Within the foreseeable future climate model tuning will continue to be necessary as the prospects of constraining the relevant unresolved processes with sufficient precision are not good.

Climate model tuning has developed well beyond just controlling global mean temperature drift. Today, we tune several aspects of the models, including the extratropical wind- and pressure fields, sea-ice volume and to some extent cloud-field properties. By doing so we clearly run the risk of building the models’ performance upon compensating errors, and the practice of tuning is partly masking these structural errors. As one continues to evaluate the models, sooner or later these compensating errors will become apparent, but the errors may prove tedious to rectify without jeopardizing other aspects of the model that have been adjusted to them.

Climate models' ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure. Most other observational datasets sooner or later meet the same destiny, at least beyond the first time they are applied for model evaluation. That is not to say that climate models can be readily adapted to fit any dataset, but once aware of the data we will compare with model output and invariably make decisions in the model development on the basis of the results. Rather, our confidence in the results provided by climate models is gained through the development of a fundamental physical understanding of the basic processes that create climate change. More than a century ago it was first realized that increasing the atmospheric CO2 concentration leads to surface warming, and today the underlying physics and feedback mechanisms are reasonably understood (while quantitative uncertainty in climate sensitivity is still large). Coupled climate models are just one of the tools applied in gaining this understanding..

[Emphasis added].

..In this paper we have attempted to illustrate the tuning process, as it is being done currently at our institute. Our hope is to thereby help de-mystify the practice, and to demonstrate what can and cannot be achieved. The impacts of the alternative tunings presented were smaller than we thought they would be in advance of this study, which in many ways is reassuring. We must emphasize that our paper presents only a small glimpse at the actual development and evaluation involved in preparing a comprehensive coupled climate model – a process that continues to evolve as new datasets emerge, model parameterizations improve, additional computational resources become available, as our interests, perceptions and objectives shift, and as we learn more about our model and the climate system itself.

Note 1: The link to the paper gives the html version. From there you can click the "Get pdf" link and it seems to come up ok – no paywall. If not, try the link to the draft paper (but the formatting makes it not so readable)


Ensemble Forecasting

I’ve had questions about the use of ensembles of climate models for a while. I was helped by working through a bunch of papers which explain the origin of ensemble forecasting. I still have my questions but maybe this post will help to create some perspective.

The Stochastic Sixties

Lorenz encapsulated the problem in the mid-1960s like this:

The proposed procedure chooses a finite ensemble of initial states, rather than the single observed initial state. Each state within the ensemble resembles the observed state closely enough so that the difference might be ascribed to errors or inadequacies in observation. A system of dynamic equations previously deemed to be suitable for forecasting is then applied to each member of the ensemble, leading to an ensemble of states at any future time. From an ensemble of future states, the probability of occurrence of any event, or such statistics as the ensemble mean and ensemble standard deviation of any quantity, may be evaluated.

Between the near future, when all states within an ensemble will look about alike, and the very distant future, when two states within an ensemble will show no more resemblance than two atmospheric states chosen at random, it is hoped that there will be an extended range when most of the states in an ensemble, while not constituting good pin-point forecasts, will possess certain important features in common. It is for this extended range that the procedure may prove useful.

[Emphasis added].

Epstein picked up some of these ideas in two papers in 1969. Here’s an extract from The Role of Initial Uncertainties in Prediction.

While it has long been recognized that the initial atmospheric conditions upon which meteorological forecasts are based are subject to considerable error, little if any explicit use of this fact has been made.

Operational forecasting consists of applying deterministic hydrodynamic equations to a single “best” initial condition and producing a single forecast value for each parameter..

..One of the questions which has been entirely ignored by the forecasters.. is whether or not one gets the "best" forecast by applying the deterministic equations to the "best" values of the initial conditions and relevant parameters..

..one cannot know a uniquely valid starting point for each forecast. There is instead an ensemble of possible starting points, but the identification of the one and only one of these which represents the “true” atmosphere is not possible.

In essence, the realization that small errors can grow in a non-linear system like weather and climate leads us to ask what the best method is of forecasting the future. In this paper Epstein takes a look at a few interesting simple problems to illustrate the ensemble approach.

Let’s take a look at one very simple example – the slowing of a body due to friction.

The rate of change of velocity (dv/dt) is proportional to the velocity v, acting to slow the body. The constant of proportionality is k, which increases with more friction.

dv/dt = −kv, therefore v(t) = v₀·exp(−kt), where v₀ = initial velocity

With a starting velocity of 10 m/s and k = 10⁻⁴ s⁻¹, how does velocity change with time?


Figure 1 – note the logarithmic time axis; time runs from 10 s to 100,000 s

Probably no surprises there.

Now let’s consider in the real world that we don’t know the starting velocity precisely, and also we don’t know the coefficient of friction precisely. Instead, we might have some idea about the possible values, which could be expressed as a statistical spread. Epstein looks at the case for v0 with a normal distribution and k with a gamma distribution (for specific reasons not that important).

Mean of v0:   <v0> = 10 m/s

Standard deviation of v0:   σv = 1 m/s

Mean of k:    <k> = 10⁻⁴ /s

Standard deviation of k:   σk = 3×10⁻⁵ /s

The particular example he gave has equations that can be easily manipulated, allowing him to derive an analytical result. In 1969 that was necessary. Now we have computers and some lucky people have Matlab. My approach uses Matlab.

What I did was create a set of 1,000 random normally distributed values for v0, with the mean and standard deviation above. Likewise, a set of gamma distributed values for k.

Then we take each pair in turn and produce the velocity vs time curve. Then we look at the stats of the 1,000 curves.
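My Matlab code isn't shown here, but the same Monte Carlo experiment can be sketched in a few lines of Python. This is a minimal re-implementation, not the original script: NumPy's gamma sampler takes shape and scale parameters, which I derive from the mean and standard deviation given above, and the exact sample statistics will of course vary a little with the random seed.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# v0 ~ Normal(mean 10 m/s, sd 1 m/s); k ~ Gamma with mean 1e-4 /s, sd 3e-5 /s
v0 = rng.normal(10.0, 1.0, n)
shape = (1e-4 / 3e-5) ** 2        # gamma shape = mean^2 / variance
scale = (3e-5) ** 2 / 1e-4        # gamma scale = variance / mean
k = rng.gamma(shape, scale, n)

t = np.logspace(1, 5, 400)        # 10 s to 100,000 s, log-spaced
v = v0[:, None] * np.exp(-np.outer(k, t))   # one row per ensemble member

ensemble_mean = v.mean(axis=0)
ensemble_std = v.std(axis=0)
best_value = 10.0 * np.exp(-1e-4 * t)       # single run with the mean parameters
```

Plotting ensemble_std against t reproduces the rise-then-decay behaviour, and comparing ensemble_mean with best_value shows how a single "best value" run diverges from the ensemble mean at later times.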

Interestingly the standard deviation increases before fading away to zero. It’s easy to see why the standard deviation will end up at zero – because the final velocity is zero. So we could easily predict that. But it’s unlikely we would have predicted that the standard deviation of velocity would start to increase after 3,000 seconds and then peak at around 9,000 seconds.

Here is the graph of standard deviation of velocity vs time:


Figure 2

Now let’s look at the spread of results. The blue curves in the top graph (below) are each individual ensemble member and the green is the mean of the ensemble results. The red curve is the calculation of velocity against time using the mean of v0 and k:


Figure 3

The bottom curve zooms in on one portion (note the time axis is now linear), with the thin green lines being 1 standard deviation in each direction.

What is interesting is the significant difference between the mean of the ensemble members and the single value calculated using the mean parameters. This is quite usual with “non-linear” equations (aka the real world).

So, if you aren’t sure about your parameters or your initial conditions, taking the “best value” and running the simulation can well give you a completely different result from sampling the parameters and initial conditions and taking the mean of this “ensemble” of results..

Epstein concludes in his paper:

In general, the ensemble mean value of a variable will follow a different course than that of any single member of the ensemble. For this reason it is clearly not an optimum procedure to forecast the atmosphere by applying deterministic hydrodynamic equations to any single initial condition, no matter how well it fits the available, but nevertheless finite and fallible, set of observations.

In Epstein’s other 1969 paper, Stochastic Dynamic Prediction, is more involved. He uses Lorenz’s “minimum set of atmospheric equations” and compares the results after 3 days from using the “best value” starting point vs an ensemble approach. The best value approach has significant problems compared with the ensemble approach:

Note that this does not mean the deterministic forecast is wrong, only that it is a poor forecast. It is possible that the deterministic solution would be verified in a given situation but the stochastic solutions would have better average verification scores.


One of the important points in the earlier work on numerical weather forecasting was the understanding that parameterizations also have uncertainty associated with them.

For readers who haven’t seen them, here’s an example of a parameterization, for latent heat flux, LH:

LH = L·ρ·C_DE·U_r·(q_s − q_a)

which says Latent Heat flux = latent heat of vaporization x density x “aerodynamic transfer coefficient” x wind speed at the reference level x ( humidity at the surface – humidity in the air at the reference level)

The “aerodynamic transfer coefficient” is somewhere around 0.001 over ocean to 0.004 over moderately rough land.
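To make the parameterization concrete, here is a minimal calculation. The latent heat, density and humidity values are illustrative assumptions, not taken from any particular model:

```python
L_V = 2.5e6   # latent heat of vaporization, J/kg (approximate)
RHO = 1.2     # air density near the surface, kg/m^3 (approximate)

def latent_heat_flux(c_de, u_r, q_s, q_a):
    """Bulk formula LH = L * rho * C_DE * U_r * (q_s - q_a), in W/m^2."""
    return L_V * RHO * c_de * u_r * (q_s - q_a)

# e.g. over the ocean: C_DE = 0.0013, 5 m/s wind at the reference level,
# and a surface-air specific humidity difference of 0.005 kg/kg
flux = latent_heat_flux(0.0013, 5.0, 0.015, 0.010)
```

The point of the exercise is how directly the answer scales with the transfer coefficient: swapping 0.0013 for 0.004 roughly triples the flux, which is exactly why the uncertainty in such parameters matters.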

The real formula for latent heat transfer is much simpler:

LH = the covariance of upwards moisture with vertical eddy velocity x density x latent heat of vaporization

These are values that vary even across very small areas and across many timescales. Across one “grid cell” of a numerical model we can’t use the “real formula” because we only get to put in one value for upwards eddy velocity and one value for upwards moisture flux and anyway we have no way of knowing the values “sub-grid”, i.e., at the scale we would need to know them to do an accurate calculation.

That’s why we need parameterizations. By the way, I don’t know whether this is a current formula in use in NWP, but it’s typical of what we find in standard textbooks.

So right away it should be clear why we need to apply the same approach of ensembles to the parameters describing these sub-grid processes as well as to initial conditions. Are we sure that over Connecticut the parameter CDE = 0.004, or should it be 0.0035? In fact, parameters like this are usually calculated from the average of a number of experiments. They conceal as much as they reveal. The correct value probably depends on other parameters. In so far as it represents a real physical property it will vary depending on the time of day, seasons and other factors. It might even be, “on average”, wrong. Because “on average” over the set of experiments was an imperfect sample. And “on average” over all climate conditions is a different sample.

The Numerical Naughties

The insights gained in the stochastic sixties weren’t so practically useful until some major computing power came along in the 90s and especially the 2000s.

Here is Palmer et al (2005):

Ensemble prediction provides a means of forecasting uncertainty in weather and climate prediction. The scientific basis for ensemble forecasting is that in a non-linear system, which the climate system surely is, the finite-time growth of initial error is dependent on the initial state. For example, Figure 2 shows the flow-dependent growth of initial uncertainty associated with three different initial states of the Lorenz (1963) model. Hence, in Figure 2a uncertainties grow significantly more slowly than average (where local Lyapunov exponents are negative), and in Figure 2c uncertainties grow significantly more rapidly than one would expect on average. Conversely, estimates of forecast uncertainty based on some sort of climatological-average error would be unduly pessimistic in Figure 2a and unduly optimistic in Figure 2c.


From Palmer et al 2005


Figure 4

The authors then provide an interesting example to demonstrate the practical use of ensemble forecasts. In the top left are the “deterministic predictions” using the “best estimate” of initial conditions. The rest of the charts 1-50 are the ensemble forecast members each calculated from different initial conditions. We can see that there was a low yet significant chance of a severe storm:


From Palmer et al 2005

Figure 5 – Click to enlarge

In fact, a severe storm did occur, so the probabilistic forecast was very useful, in that it provided information not available with the deterministic forecast.

This is a nice illustration of some benefits. It doesn’t tell us how well NWPs perform in general.

One measure is the forecast spread of certain variables as the forecast time increases. Generally single model ensembles don’t do so well – they under-predict the spread of results at later time periods.

Here’s an example of the performance of a multi-model ensemble vs single-model ensembles on saying whether an event will occur or not. (Intuitively, the axes seem the wrong way round). The single model versions are over-confident – so when the forecast probability is 1.0 (certain) the reality is 0.7; when the forecast probability is 0.8, the reality is 0.6; and so on:

From Palmer et al 2005


Figure 6

We can see that, at least in this case, the multi-model did a pretty good job. However, similar work on forecasting precipitation events showed much less success.
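For readers who want to experiment, a reliability curve of the kind in Figure 6 can be computed by binning forecast probabilities and comparing each bin with the observed event frequency. The forecast/outcome data below are invented purely to mimic an over-confident system:

```python
import numpy as np

def reliability_curve(forecast_prob, occurred, bins=5):
    """For each forecast-probability bin, the observed frequency of the event.
    Over-confidence shows up as observed frequency falling below the diagonal
    at high forecast probabilities."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(forecast_prob, edges) - 1, 0, bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    obs = np.array([occurred[idx == b].mean() if np.any(idx == b) else np.nan
                    for b in range(bins)])
    return centers, obs

# made-up over-confident forecasts: the event occurs on only 60% of the
# occasions given a 90% forecast probability
p = np.array([0.9, 0.9, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1])
hit = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 1])
centers, obs = reliability_curve(p, hit)
```

Empty bins return NaN; a real verification study would use thousands of forecast/outcome pairs rather than ten.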

In their paper, Palmer and his colleagues contrast multi-model vs multi-parameterization within one model. I am not clear what the difference is – whether it is a technicality or something fundamentally different in approach. The example above is multi-model. They do give some examples of multi-parameterizations (with a similar explanation to what I gave in the section above). Their paper is well worth a read, as is the paper by Lewis (see links below).


The concept of taking a “set of possible initial conditions” for weather forecasting makes a lot of sense. The concept of taking a “set of possible parameterizations” also makes sense although it might be less obvious at first sight.

In the first case we know that we don’t know the precise starting point because observations have errors and we lack a perfect observation system. In the second case we understand that a parameterization is some empirical formula which is clearly not “the truth”, but some approximation that is the best we have for the forecasting job at hand, and the “grid size” we are working to. So in both cases creating an ensemble around “the truth” has some clear theoretical basis.

Now what is also important for this theoretical basis is that we can test the results – at least with weather prediction (NWP). That’s because of the short time periods under consideration.

A statement from Palmer (1999) will resonate in the hearts of many readers:

A desirable if not necessary characteristic of any physical model is an ability to make falsifiable predictions

When we come to ensembles of climate models the theoretical case for multi-model ensembles is less clear (to me). There’s a discussion in IPCC AR5 that I have read. I will follow up the references and perhaps produce another article.


The Role of Initial Uncertainties in Prediction, Edward Epstein, Journal of Applied Meteorology (1969) – free paper

Stochastic Dynamic Prediction, Edward Epstein, Tellus (1969) – free paper

On the possible reasons for long-period fluctuations of the general circulation, EN Lorenz (1965), Proc. WMO-IUGG Symp. on Research and Development Aspects of Long-Range Forecasting, Boulder, CO, World Meteorological Organization, WMO Tech. Note – cited from Lewis (2005)

Roots of Ensemble Forecasting, John Lewis, Monthly Weather Review (2005) – free paper

Predicting uncertainty in forecasts of weather and climate, T.N. Palmer (1999), also published as ECMWF Technical Memorandum No. 294 – free paper

Representing Model Uncertainty in Weather and Climate Prediction, TN Palmer, GJ Shutts, R Hagedorn, FJ Doblas-Reyes, T Jung & M Leutbecher, Annual Review Earth Planetary Sciences (2005) – free paper


In Part Seven – GCM I  through Part Ten – GCM IV we looked at GCM simulations of ice ages.

These were mostly attempts at “glacial inception”, that is, starting an ice age. But we also saw a simulation of the last 120 kyrs which attempted to model a complete ice age cycle including the last termination. As we saw, there were lots of limitations..

One condition for glacial inception, “perennial snow cover at high latitudes”, could be produced with a high-resolution coupled atmosphere-ocean GCM (AOGCM), but that model did suffer from the problem of having a cold bias at high latitudes.

The (reasonably accurate) simulation of a whole cycle including inception and termination came by virtue of having the internal feedbacks (ice sheet size & height and CO2 concentration) prescribed.

Just to be clear to new readers, these comments shouldn’t indicate that I’ve uncovered some secret that climate scientists are trying to hide, these points are all out in the open and usually highlighted by the authors of the papers.

In Part Nine – GCM III, one commenter highlighted a 2013 paper by Ayako Abe-Ouchi and co-workers, where the journal in question, Nature, had quite a marketing pitch on the paper. I made a brief comment on it in a later article in response to another question, including that I had emailed the lead author asking a question about the modeling work (how was a 120 kyr cycle actually simulated?).

Most recently, in Eighteen – “Probably Nonlinearity” of Unknown Origin, another commenter highlighted it, which rekindled my enthusiasm, and I went back and read the paper again. It turns out that my understanding of the paper had been wrong. It wasn’t really a GCM paper at all. It was an ice sheet paper.

There is a whole field of papers on ice sheet models deserving attention.

GCM review

Let’s review GCMs first of all to help us understand where ice sheet models fit in the hierarchy of climate simulations.

GCMs consist of a number of different modules coupled together. The first GCMs were mostly “atmospheric GCMs” = AGCMs, and either they had a “swamp ocean” = a mixed layer of fixed depth, or had prescribed ocean boundary conditions set from an ocean model or from an ocean reconstruction.

Less commonly, unless you worked just with oceans, there were ocean GCMs with prescribed atmospheric boundary conditions (prescribed heat and momentum flux from the atmosphere).

Then coupled atmosphere-ocean GCMs came along = AOGCMs. It was a while before these two parts matched up to the point where there was no “flux drift”, that is, no disappearing heat flux from one part of the model.

Why so difficult to get these two models working together? One important reason comes down to the time-scales involved, which result from the difference in heat capacity and momentum of the two parts of the climate system. The heat capacity and momentum of the ocean is much much higher than that of the atmosphere.

And when we add ice sheets models – ISMs – we have yet another time scale to consider.

  • the atmosphere changes in days, weeks and months
  • the ocean changes in years, decades and centuries
  • the ice sheets change in centuries, millennia and tens of millennia

This creates a problem for climate scientists who want to apply the fundamental equations of heat, mass & momentum conservation along with parameterizations for “stuff not well understood” and “stuff quite-well-understood but whose parameters are sub-grid”. To run a high-resolution AOGCM for a 1,000-year simulation might consume a year of supercomputer time, and the ice sheet has barely moved during that period.
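One common workaround for this timescale mismatch is asynchronous coupling: the expensive atmosphere/ocean model is only re-run occasionally, while the cheap ice-sheet step marches through the millennia in between. The sketch below is a toy of that loop structure only; the "climate" and "mass balance" relationships are invented for illustration and real coupling schemes are far more sophisticated:

```python
def expensive_climate(ice_fraction):
    """Stand-in for an AOGCM run: a high-latitude temperature anomaly (degC)
    that cools as the ice sheet grows (invented relationship)."""
    return 2.0 - 15.0 * ice_fraction

def ice_step(ice_fraction, temp_anomaly, dt_years):
    """Toy mass balance: the sheet grows when cold, shrinks when warm."""
    growth_rate = -1e-5 * temp_anomaly          # fraction per year (invented)
    new = ice_fraction + growth_rate * dt_years
    return min(max(new, 0.0), 1.0)

ice = 0.05
temp = expensive_climate(ice)
for year in range(0, 100_000, 2):               # ice time step: 2 years
    if year % 5_000 == 0:                       # "GCM" re-run every 5 kyr only
        temp = expensive_climate(ice)
    ice = ice_step(ice, temp, 2)
```

The point is the structure: 50,000 cheap ice steps but only 20 calls to the expensive climate model, which is what makes 100 kyr simulations affordable at all.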

Ice Sheet Models

Scientists who study ice sheets have a whole bunch of different questions. They want to understand how the ice sheets developed.

What makes them grow, shrink, move, slide, melt.. What parameters are important? What parameters are well understood? What research questions are most deserving of attention? And:

Does our understanding of ice sheet dynamics allow us to model the last glacial cycle?

To answer that question we need a model for ice sheet dynamics, and to that model we need to apply boundary conditions from some other “less interesting” models, like GCMs. As a result, there are a few approaches to setting the boundary conditions so we can do our interesting work of modeling ice sheets.

Before we look at that, let’s look at the dynamics of ice sheets themselves.

Ice Sheet Dynamics

First, in the theme of the last paper, Eighteen – “Probably Nonlinearity” of Unknown Origin, here is Marshall & Clark 2002:

The origin of the dominant 100-kyr ice-volume cycle in the absence of substantive radiation forcing remains one of the most vexing questions in climate dynamics

We can add that to the 34 papers reviewed in that previous article. This paper by Marshall & Clark is definitely a good quick read for people who want to understand ice sheets a little more.

Ice doesn’t conduct a lot of heat – it is a very good insulator. So the important things with ice sheets happen at the top and the bottom.

At the top, ice melts, and the water refreezes, runs off or evaporates. In combination, the loss is called ablation. Then we have precipitation that adds to the ice sheet. So the net effect determines what happens at the top of the ice sheet.

At the bottom, when the ice sheet is very thin, heat can be conducted through from the atmosphere to the base and make it melt – if the atmosphere is warm enough. As the ice sheet gets thicker, very little heat is conducted through. However, there are two important sources of heat at the base which can result in “basal sliding”. One source is geothermal energy. This is around 0.1 W/m², which is very small unless we are dealing with an insulating material (like ice) and lots of time (like ice sheets). The other source is the shear stress in the ice sheet, which can create a lot of heat via the mechanics of deformation.

Once the ice sheet is able to start sliding, the dynamics create a completely different result compared to an ice sheet “cold-pinned” to the rock underneath.

Some comments from Marshall and Clark:

Ice sheet deglaciation involves an amount of energy larger than that provided directly from high-latitude radiation forcing associated with orbital variations. Internal glaciologic, isostatic, and climatic feedbacks are thus essential to explain the deglaciation.

..Moreover, our results suggest that thermal enabling of basal flow does not occur in response to surface warming, which may explain why the timing of the Termination II occurred earlier than predicted by orbital forcing [Gallup et al., 2002].

Results suggest that basal temperature evolution plays an important role in setting the stage for glacial termination. To confirm this hypothesis, model studies need improved basal process physics to incorporate the glaciological mechanisms associated with ice sheet instability (surging, streaming flow).

..Our simulations suggest that a substantial fraction (60% to 80%) of the ice sheet was frozen to the bed for the first 75 kyr of the glacial cycle, thus strongly limiting basal flow. Subsequent doubling of the area of warm-based ice in response to ice sheet thickening and expansion and to the reduction in downward advection of cold ice may have enabled broad increases in geologically- and hydrologically-mediated fast ice flow during the last deglaciation.

Increased dynamical activity of the ice sheet would lead to net thinning of the ice sheet interior and the transport of large amounts of ice into regions of intense ablation both south of the ice sheet and at the marine margins (via calving). This has the potential to provide a strong positive feedback on deglaciation.

The timescale of basal temperature evolution is of the same order as the 100-kyr glacial cycle, suggesting that the establishment of warm-based ice over a large enough area of the ice sheet bed may have influenced the timing of deglaciation. Our results thus reinforce the notion that at a mature point in their life cycle, 100-kyr ice sheets become independent of orbital forcing and affect their own demise through internal feedbacks.

[Emphasis added]

In this article we will focus on a 2007 paper by Ayako Abe-Ouchi, T Segawa & Fuyuki Saito. This paper is essentially the same modeling approach used in Abe-Ouchi’s 2013 Nature paper.

The Ice Model

The ice sheet model has a time step of 2 years, a grid of 1° latitude (from 30°N to the north pole) by 1° longitude, and 20 vertical levels.

Equations for the ice sheet include sliding velocity, ice sheet deformation, the heat transfer through the lithosphere, the bedrock elevation and the accumulation rate on the ice sheet.

Note, there is a reference that some of the model is based on work described in Sensitivity of Greenland ice sheet simulation to the numerical procedure employed for ice sheet dynamics, F Saito & A Abe-Ouchi, Ann. Glaciol., (2005) – but I don’t have access to this journal. (If anyone does, please email the paper to me at scienceofdoom – you know what goes here – gmail.com).

How did they calculate the accumulation on the ice sheet? There is an equation:


Ts is the surface temperature, dp is a measure of aridity and Aref is a reference value for accumulation. This is a highly parameterized method of calculating how much thicker or thinner the ice sheet is growing. The authors reference Marshall et al 2002 for this equation, and that paper is very instructive in how poorly understood ice sheet dynamics actually are.

Here is one part of the relevant section in Marshall et al 2002:

..For completeness here, note that we have also experimented with spatial precipitation patterns that are based on present-day distributions.

Under this treatment, local precipitation rates diminish exponentially with local atmospheric cooling, reflecting the increased aridity that can be expected under glacial conditions (Tarasov and Peltier, 1999).

Paleo-precipitation under this parameterization has the form:

P(λ,θ,t) = Pobs(λ,θ) × (1+dp)^ΔT(λ,θ,t) × exp[βp·max[hs(λ,θ,t) − ht, 0]]       (18)

The parameter dp in this equation represents the percentage of drying per 1°C; Tarasov and Peltier (1999) choose a value of 3% per °C, i.e., dp = 0.03.

[Emphasis added, color added to highlight the relevant part of the equation]

So dp is a parameter that attempts to account for increasing aridity in colder glacial conditions, and in their 2002 paper Marshall et al describe it as 1 of 4 “free parameters” that are investigated to see what effect they have on ice sheet development around the LGM.

Abe-Ouchi and co-authors took a slightly different approach that certainly seems like an improvement over Marshall et al 2002:


So their value of aridity is just a linear function of ice sheet area – from zero to a fixed value, rather than a fixed value no matter the ice sheet size.
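Putting Marshall et al's equation (18) and the linear-in-area treatment of dp side by side makes the difference easy to see. The reference area and maximum dp below are placeholder values I chose for illustration, not the papers' numbers:

```python
import math

def paleo_precip(p_obs, dp, delta_t, beta_p=0.0, h_s=0.0, h_t=0.0):
    """Marshall et al (2002) eq. (18): present-day precipitation scaled for
    glacial aridity (dp per degC of cooling, delta_t negative under glacial
    conditions), with an optional elevation term above the threshold h_t."""
    return p_obs * (1.0 + dp) ** delta_t * math.exp(beta_p * max(h_s - h_t, 0.0))

def dp_linear_in_area(ice_area, area_ref, dp_max):
    """Aridity as a linear function of ice-sheet area, capped at dp_max,
    rather than a fixed value regardless of ice-sheet size."""
    return dp_max * min(ice_area / area_ref, 1.0)

# 10 degC of local cooling with 3% drying per degC cuts precipitation ~26%
p = paleo_precip(1.0, 0.03, -10.0)
```

With the linear form, a small young ice sheet barely dries its own climate, while a full-glacial sheet gets the maximum aridity effect.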

How is Ts calculated? That comes, in a way, from the atmospheric GCM, but probably not in a way that readers might expect. So let’s have a look at the GCM then come back to this calculation of Ts.

Atmospheric GCM Simulations

There were three groups of atmospheric GCM simulations, with parameters selected to try and tease out which factors have the most impact.

Group One: high resolution GCM – 1.1º latitude and longitude and 20 atmospheric vertical levels with fixed sea surface temperatures. So there is no ocean model; the ocean temperatures are prescribed. Within this group, four experiments:

  • A control experiment – modern day values
  • LGM (last glacial maximum) conditions for CO2 (note 1) and orbital parameters with
    • no ice
    • LGM ice extent but zero thickness
    • LGM ice extent and LGM thickness

So the idea is to compare results with and without the actual ice sheet to see how much impact orbital and CO2 values have vs the effect of the ice sheet itself – and then, for the ice sheet, to see whether the albedo or the elevation has the most impact. Why the elevation? Well, if an ice sheet is 1 km thick then the surface temperature will be something like 6ºC colder. (Exactly how much colder is an interesting question, because we don’t know what the lapse rate actually was). There will also be an effect on atmospheric circulation – you’ve stuck a “mountain range” in the path of the wind, and this changes the circulation.

Each of the four simulations was run for 11 or 13 years and the last 10 years’ results used:

From Abe-Ouchi et al 2007


Figure 1

It’s clear from this simulation that the full result (left graphic) is mostly caused by the ice sheet (right graphic) rather than CO2, orbital parameters and the SSTs (middle graphic). And the next figure in the paper shows the breakdown between the albedo effect and the height of the ice sheet:

From Abe-Ouchi et al 2007


Figure 2 – same color legend as figure 1

Now, a lapse rate of 5 K/km was used. What happens if a lapse rate of 9 K/km were used instead? There were no simulations done with different lapse rates.

..Other lapse rates could be used which vary depending on the altitude or location, while a lapse rate larger than 7 K/km or smaller than 4 K/km is inconsistent with the overall feature. This is consistent with the finding of Krinner and Genthon (1999), who suggest a lapse rate of 5.5 K/km, but is in contrast with other studies which have conventionally used lapse rates of 8 K/km or 6.5 K/km to drive the ice sheet models..
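The role of the assumed lapse rate is easy to see in a two-line correction. This is a deliberately simplified sketch of the elevation adjustment, not the model's actual scheme, which applies corrections across the full grid:

```python
def ice_surface_temp(t_ref, elevation_m, lapse_rate_k_per_km=5.0):
    """Temperature at the ice-sheet surface from a reference (sea-level)
    temperature, using a single fixed lapse rate in K/km."""
    return t_ref - lapse_rate_k_per_km * elevation_m / 1000.0

# a 3 km-high ice sheet: 15 K colder at 5 K/km, 27 K colder at 9 K/km
cold_5 = ice_surface_temp(0.0, 3000.0)
cold_9 = ice_surface_temp(0.0, 3000.0, 9.0)
```

A 12 K spread at the summit, purely from the choice of lapse rate, is larger than many of the other effects being compared, which is why this assumption matters so much for the simulated mass balance.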

Group Two – medium resolution GCM: 2.8º latitude and longitude and 11 atmospheric vertical levels, with a “slab ocean” – this means the ocean is treated as a single temperature through the depth of some fixed layer, say 50m. So the ocean is there as a heat sink/source responding to climate, but there is no heat transfer through to a deeper ocean.
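The “slab ocean” idea can be sketched in a few lines (the numbers and function names here are illustrative, not from the paper): the mixed layer is a single heat reservoir of fixed depth that warms or cools with the net surface flux, with no exchange with a deeper ocean.

```python
# Minimal slab-ocean sketch: one well-mixed layer of fixed depth.
# (Real slab-ocean GCM setups also add a prescribed "q-flux" to mimic
# ocean heat transport; that is omitted here.)

RHO_WATER = 1000.0   # kg/m^3
CP_WATER = 4200.0    # J/(kg K)

def step_slab_ocean(t_ocean_c, net_surface_flux_w_m2, depth_m=50.0, dt_s=86400.0):
    """Advance the mixed-layer temperature by one time step."""
    heat_capacity = RHO_WATER * CP_WATER * depth_m  # J/(m^2 K)
    return t_ocean_c + net_surface_flux_w_m2 * dt_s / heat_capacity

# A 10 W/m^2 imbalance over a 50 m layer warms it by ~0.004 C per day:
t = step_slab_ocean(15.0, 10.0)
```

The point of the sketch is the single heat capacity: the slab responds to climate but cannot move heat into a deep ocean.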

There were five simulations in this group, one control (modern day everything) and four with CO2 & orbital parameters at the LGM:

  • no ice sheet
  • LGM ice extent, but flat
  • 12 kyrs ago ice extent, but flat
  • 12 kyrs ago ice extent and height

So this group takes a slightly more detailed look at ice sheet impact. Not surprisingly the simulation results give intermediate values for the ice sheet extent at 12 kyrs ago.

Group Three – medium resolution GCM as in group two, with ice sheets either at present day or LGM extent, and nine simulations covering different orbital values and different CO2 values (present day, 280 ppm or 200 ppm).

There was also some discussion of the impact of different climate models. I found this fascinating because the difference between CCSM and the other models appears to be as great as the difference in figure 2 (above) which identifies the albedo effect as more significant than the lapse rate effect:

From Abe-Ouchi et al 2007

Figure 3

And this naturally has me wondering about how much significance to put on the GCM simulation results shown in the paper. The authors also comment:

Based on these GCM results we conclude there remains considerable uncertainty over the actual size of the albedo effect.

Given there is also uncertainty over the lapse rate that actually occurred, it seems there is considerable uncertainty over everything.

Now let’s return to the ice sheet model, because so far we haven’t seen any output from the ice sheet model.

GCM Inputs into the Ice Sheet Model

The equation which calculates the change in accumulation on the ice sheet uses a fairly arbitrary parameter dp, with (1+dp) raised to the power of Ts.

The ice sheet model has a 2 year time step. The GCM results don’t provide Ts across the surface grid every 2 years; they are snapshots for certain conditions. The ice sheet model uses this calculation for Ts:

Ts = Tref + ΔTice + ΔTco2 + ΔTinsol + ΔTnonlinear

Tref is the reference temperature, which is present day climatology. The other ΔT (change in temperature) values are basically a linear interpolation from two values of the GCM simulations. Here is the ΔTCO2 value:



So think of it like this – we have found Ts at one value of CO2 higher and one value of CO2 lower from some snapshot GCM simulations. We plot a graph with CO2 on the x-axis and Ts on the y-axis, with just two points on the graph from these two experiments, and we draw a straight line between the two points.

To calculate Ts at, say, 50 kyrs ago we look up the CO2 value at 50 kyrs from ice core data, and read the value of ΔTCO2 from the straight line on the graph.
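That interpolation scheme is trivial to sketch (the endpoint values below are made-up placeholders; the real ones come from the GCM snapshot experiments):

```python
# Sketch of the lookup-and-interpolate scheme described above.
# The two (CO2, delta-T) endpoints stand in for two GCM snapshot runs;
# the numbers are illustrative, not from the paper.

def delta_t_co2(co2_ppm, co2_lo=200.0, dt_lo=-4.0, co2_hi=280.0, dt_hi=0.0):
    """Linearly interpolate the CO2 temperature contribution between
    two snapshot experiments (placeholder endpoint values)."""
    frac = (co2_ppm - co2_lo) / (co2_hi - co2_lo)
    return dt_lo + frac * (dt_hi - dt_lo)

# e.g. look up CO2 at 50 kyrs ago from ice core data, say 220 ppm:
dt = delta_t_co2(220.0)   # -> -3.0 with these placeholder endpoints
```

The same two-point straight line is used for the other ΔT terms, each anchored by its own pair of snapshot simulations.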

Likewise for the other parameters. Here is ΔTinsol:



So the method is extremely basic. Of course the model needs something..

Now, given that we have inputs for accumulation on the ice sheet, the ice sheet model can run. Here are the results. The third graph (3) shows sea level from proxy results and so is our best estimate of reality, with (4) providing model outputs for different values of the parameter d0 (“desertification” or aridity) and lapse rate, and (5) providing outputs for different values of albedo and lapse rate:

From Abe-Ouchi et al 2007

Figure 4

There are three main points of interest.

Firstly, small changes in the parameters cause huge changes in the final results. The idea of aridity over ice sheets as just a linear function of ice sheet size is very questionable in itself. The idea of a constant lapse rate is extremely questionable. Together, using values that appear realistic, we can model much less ice sheet growth (sea level drop) or many times greater ice sheet growth than actually occurred.

Secondly, notice that for realistic results the time of maximum ice sheet (lowest sea level) has sea level starting to rise around 12 kyrs ago, rather than the actual 18 kyrs ago. This might be due to the impact of orbital factors, which were at quite a low level (i.e., high latitude summer insolation was at quite a low level) when the last ice age finished, but have quite an impact in the model. Of course, we have covered this “problem” in a few previous articles in this series. In the context of this model it might be that the impact of the southern hemisphere leading the globe out of the last ice age is completely missing.

Thirdly – while this might be clear to some people, for many new to this kind of model it won’t be obvious – the inputs to the model are snapshots of the actual history. The model doesn’t simulate the actual start and end of the last ice age “by itself”. We feed into the GCM a few CO2 values. We feed into the GCM a few ice sheet extents and heights that (as best as can be reconstructed) actually occurred. The GCM gives us some temperature values for these snapshot conditions.

In the case of this ice sheet model, every 2 years (each time step of the ice sheet model) we “look up” the actual value of ice sheet extent and atmospheric CO2 and we linearly interpolate the GCM output temperatures for the current year. And then we crudely parameterize these values into some accumulation rate on the ice sheet.
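The loop described above might look something like this sketch (all names, the stub interpolations and the accumulation scaling are my own illustrative reconstruction, not the paper’s code):

```python
# Rough sketch of the driving loop described above: every 2 years, look
# up the reconstructed CO2 and ice extent, rebuild Ts from interpolated
# GCM snapshot contributions, and parameterize an accumulation term.

def dt_ice_stub(ice_fraction):
    # Placeholder for the linearly interpolated ice-sheet contribution.
    return -10.0 * ice_fraction

def dt_co2_stub(co2_ppm):
    # Placeholder for the linearly interpolated CO2 contribution.
    return (co2_ppm - 280.0) / 20.0

def run_ice_sheet_driver(years, co2_record, ice_record, t_ref, dp=0.02, step_yr=2):
    """Step the forcing every 2 years (the ice sheet model's time step)."""
    history = []
    for year in range(0, years, step_yr):
        co2 = co2_record(year)   # look up ice-core CO2 for this year
        ice = ice_record(year)   # look up reconstructed ice extent
        ts = t_ref + dt_ice_stub(ice) + dt_co2_stub(co2)
        # The paper's accumulation term involves (1+dp) raised to the
        # power Ts; the overall scaling is omitted in this sketch.
        accumulation_factor = (1.0 + dp) ** ts
        history.append((year, ts, accumulation_factor))
    return history
```

The essential point the sketch captures is that the GCM never runs inside this loop – only cheap lookups and interpolations of its snapshot output do.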


This is our first foray into ice sheet models. It should be clear that the results are interesting but we are at a very early stage in modeling ice sheets.

The problems are:

  • the computational load required to run a GCM coupled with an ice sheet model over 120 kyrs is much too high, so it can’t be done
  • the resulting tradeoff uses a few GCM snapshot values to feed linearly interpolated temperatures into a parameterized accumulation equation
  • the effect of lapse rate on the results is extremely large and the actual value for lapse rate over ice sheets is very unlikely to be a constant and is also not known
  • our understanding of the fundamental equations of ice sheets is still at an early stage, as readers can see by reviewing the first two papers below, especially the second one

Articles in this Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factors of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes


Basal temperature evolution of North American ice sheets and implications for the 100-kyr cycle, SJ Marshall & PU Clark, GRL (2002) – free paper

North American Ice Sheet reconstructions at the Last Glacial Maximum, SJ Marshall, TS James, GKC Clarke, Quaternary Science Reviews (2002) – free paper

Climatic Conditions for modelling the Northern Hemisphere ice sheets throughout the ice age cycle, A Abe-Ouchi, T Segawa, and F Saito, Climate of the Past (2007) – free paper

Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume, Ayako Abe-Ouchi, Fuyuki Saito, Kenji Kawamura, Maureen E. Raymo, Jun’ichi Okuno, Kunio Takahashi & Heinz Blatter, Nature (2013) – paywall paper


Note 1 – the value of CO2 used in these simulations was 200 ppm, while CO2 at the LGM was actually 180 ppm. Apparently this value of 200 ppm was used in a major inter-comparison project (the PMIP), but I don’t know the reason why. PMIP = Paleoclimate Modelling Intercomparison Project, Joussaume and Taylor, 1995.

A while ago, in Part Three – Hays, Imbrie & Shackleton we looked at a seminal paper from 1976.

In that paper, the data now stretched back far enough in time for the authors to demonstrate something of great importance. They showed that changes in ice volume recorded by isotopes in deep ocean cores (see Seventeen – Proxies under Water I) had significant signals at the frequencies of obliquity, precession and one of the frequencies of eccentricity.

Obliquity is the change in the tilt of the earth’s axis, on a period of around 40 kyrs. Precession is the change in where the closest approach to the sun falls within the year (right now the closest approach is in NH winter), on a period of around 20 kyrs (see Four – Understanding Orbits, Seasons and Stuff).

Both of these involve significant redistributions of solar energy. Obliquity changes the amount of solar insolation received by the poles versus the tropics. Precession changes the amount of solar insolation at high latitudes in summer versus winter. (Neither changes total solar insolation). This was nicely in line with Milankovitch’s theory – for a recap see Part Three.

I’m going to call this part Theory A, and paraphrase it like this:

The waxing and waning of the ice sheets have 40 kyr and 20 kyr periods, which are caused by the changing distribution of solar insolation due to obliquity and precession.

The largest signal in ocean cores over the last 800 kyrs has a component of about 100 kyrs (with some variability). That is, the ice ages start and end with a period of about 100 kyrs. Eccentricity varies on time periods of 100 kyrs and 400 kyrs, but with a very small change in total insolation (see Part Four).

Hays et al produced a completely separate theory, which I’m going to call Theory B, and paraphrase it like this:

The start and end of the ice ages have a 100 kyr period, which is caused by the changing eccentricity of the earth’s orbit.

Theory A and Theory B are both in the same paper and are both theories that “link ice ages to orbital changes”. In their paper they demonstrated Theory A but did not prove or demonstrate Theory B. Unfortunately, Theory B is the much more important one.

Here is what they said:

The dominant 100,000 year climatic component has an average period close to, and is in phase with, orbital eccentricity. Unlike the correlations between climate and the higher frequency orbital variations (which can be explained on the assumption that the climate system responds linearly to orbital forcing) an explanation of the correlations between climate and eccentricity probably requires an assumption of non-linearity.

[Emphasis added].

The only quibble I have with the above paragraph is the word “probably”. This word should have been removed. There is no doubt. An assumption of non-linearity is required as a minimum.

Now why does it “probably” or “definitely” require an assumption of non-linearity? And what does that mean?

A linearity assumption is one where the output is proportional to the input. For example: for a fixed force, double the mass of a vehicle and the acceleration halves. Most things in the real world, and most things in climate, are non-linear. So, for example, double the (absolute) temperature and the emitted radiation goes up by a factor of 16.
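The two examples can be written out explicitly (an illustration only, with hypothetical function names):

```python
# Linear vs non-linear response, using the two examples above.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def acceleration(force_n, mass_kg):
    # Newton's second law: for a fixed force, doubling the mass
    # halves the acceleration (a simple proportional relationship).
    return force_n / mass_kg

def emitted_radiation(t_kelvin):
    # Stefan-Boltzmann law: emission scales as T^4, so doubling the
    # absolute temperature multiplies emitted radiation by 2**4 = 16.
    return SIGMA * t_kelvin ** 4

ratio = emitted_radiation(600.0) / emitted_radiation(300.0)
print(round(ratio))   # -> 16
```

The T⁴ law is a mild non-linearity with a known form; the point of what follows is that no known non-linearity, mild or otherwise, has been shown to turn the tiny eccentricity signal into a deglaciation.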

However, there isn’t a principle, an energy balance equation or even a climate model that can take this tiny change in incoming solar insolation over a 100 kyr period and cause the end of an ice age.

In fact, their statement wasn’t so much “an assumption of non-linearity” as “some non-linear relationship that we are not currently able to model or demonstrate, some non-linear relationship we have yet to discover”.

There is nothing wrong with their original statement as such (apart from “probably”), but an alternative way of writing from the available evidence could be:

The dominant 100,000 year climatic component has an average period close to, and is in phase with, orbital eccentricity. Unlike the correlations between climate and the higher frequency orbital variations.. an explanation of the correlations between climate and eccentricity is as yet unknown, remains to be demonstrated and there may in fact be no relationship at all.

Unfortunately, because Theory A and Theory B were in the same paper, because Theory A is well demonstrated, and because there is no accepted alternative for the cause of the start and end of ice ages (there are alternative hypotheses around natural resonance), Theory B has become “well accepted”.

And because everyone familiar with climate science knows that Theory A is almost certainly true, when you point out that Theory B doesn’t have any evidence, many people are confused and wonder why you are rejecting well-proven theories.

In the series so far, except in occasional comments, I haven’t properly explained the separation between the two theories and this article is an attempt to clear that up.

Now I will produce a sufficient quantity of papers and quote their “summary of the situation so far” to demonstrate that there isn’t any support for Theory B. The only support is the fact that one component frequency of eccentricity is “similar” to the frequency of the ice age terminations/inceptions, plus the safety in numbers support of everyone else believing it.

One other comment on paleoclimate papers attempting to explain the 100 kyr period: it is the norm for published papers to introduce a new hypothesis. That doesn’t make the new hypothesis correct.

So if I produce a paper, and quote the author’s summary of “the state of work up to now” and that paper then introduces their new hypothesis which claims to perhaps solve the mystery, I haven’t quoted the author’s summary out of context.

Let’s take it as read that lots of climate scientists think they have come up with something new. What we are interested in is their review of the current state of the field and the evidence they cite in support of Theory B.

Before producing the papers I also want to explain why I think the idea behind Theory B is so obviously flawed, and not just because 38 years after Hays, Imbrie & Shackleton the mechanism is still a mystery.

Why Theory B is Unsupportable

If a non-linear response to a 0.1% change in insolation over a long period can be established, it must also explain why significant temperature fluctuations in high latitude regions during glacials do not cause a termination.

Here are two high resolution examples from a Greenland ice core (NGRIP) during the last glaciation:

From Wolff et al 2010

The “non-linearity” hypothesis has more than one hill to climb. This second challenge is even more difficult than the first.

A tiny change in total insolation causes, via a yet to be determined non-linear effect, the end of each ice age, but this same effect does not amplify frequent large temperature changes of long duration to end an ice age (note 1).

Food for thought.

Theory C Family

Many papers which propose orbital reasons for ice age terminations do not propose eccentricity variations as the cause. Instead, they attribute terminations to specific insolation changes at specific latitudes, or various combinations of orbital factors completely unrelated to eccentricity variations. See Part Six – “Hypotheses Abound”.

Of course, one of these might be right. For now I will call them the Theory C Family, so we remember that Theory C is not one theory, but a whole range of mostly incompatible theories.

But remember where the orbital hypothesis for ice age termination came from – the 100,000 year period of eccentricity variation “matching” (kind of matching) the 100,000 year period of the ice ages.

The Theory C Family does not have that starting point.


So let’s move onto papers. I started by picking off papers from the right category in my mind map that might have something to say, then I opened up every one of about 300 papers in my ice ages folder (alphabetical by author) and checked to see whether they had something to say on the cause of ice ages in the abstract or introduction. Most papers don’t have a comment because they are about details like d18O proxies, or the CO2 concentration in the Vostok ice core, etc. That’s why there aren’t 300 citations here.

And bold text within a citation is added by me for emphasis.

I looked for their citations (evidence) to back up any claim that orbital variations caused ice age terminations. In some cases I pull up what the citations said.


Last Interglacial Climates, Kukla et al (2002), by a cast of many including the famous Wallace S. Broecker, John Imbrie and Nicholas J. Shackleton:

At the end of the last interglacial period, over 100,000 yr ago, the Earth’s environments, similar to those of today, switched into a profoundly colder glacial mode. Glaciers grew, sea level dropped, and deserts expanded. The same transition occurred many times earlier, linked to periodic shifts of the Earth’s orbit around the Sun. The mechanism of this change, the most important puzzle of climatology, remains unsolved.

Note that “linked to periodic shifts of the Earth’s orbit” is followed by an “unknown mechanism”. Two of the authors were the coauthors of the classic 1976 paper that is most commonly cited as evidence for Theory B.


Millennial-scale variability during the last glacial: The ice core record, Wolff, Chappellaz, Blunier, Rasmussen & Svensson (2010)

The most significant climate variability in the Quaternary record is the alternation between glacial and interglacial, occurring at approximately 100 ka periodicity in the most recent 800 ka. This signal is of global scale, and observed in all climate records, including the long Antarctic ice cores (Jouzel et al., 2007a) and marine sediments (Lisiecki and Raymo, 2005). There is a strong consensus that the underlying cause of these changes is orbital (i.e. due to external forcing from changes in the seasonal and latitudinal pattern of insolation), but amplified by a whole range of internal factors (such as changes in greenhouse gas concentration and in ice extent).

Note the lack of citation for the underlying causes being orbital. However, as we will see, there is “strong consensus”. In this specific paper from the words used I believe the authors are supporting the Theory C Family, not Theory B.


The last glacial cycle: transient simulations with an AOGCM, Robin Smith & Jonathan Gregory (2012)

It is generally accepted that the timing of glacials is linked to variations in solar insolation that result from the Earth’s orbit around the sun (Hays et al. 1976; Huybers and Wunsch 2005). These solar radiative anomalies must have been amplified by feedback processes within the climate system, including changes in atmospheric greenhouse gas (GHG) concentrations (Archer et al. 2000) and ice-sheet growth (Clark et al. 1999), and whilst hypotheses abound as to the details of these feedbacks, none is without its detractors and we cannot yet claim to know how the Earth system produced the climate we see recorded in numerous proxy records.

I think I will classify this one as “Still a mystery”.

Note that support for “linkage to variations in solar insolation” consists of Hays et al 1976 – Theory B – and Huybers and Wunsch 2005 who propose a contradictory theory (obliquity) – Theory C Family. In this case they absolve themselves by pointing out that all the theories have flaws.


The timing of major climate terminations, ME Raymo (1997)

For the past 20 years, the Milankovitch hypothesis, which holds that the Earth’s climate is controlled by variations in incoming solar radiation tied to subtle yet predictable changes in the Earth’s orbit around the Sun [Hays et al., 1976], has been widely accepted by the scientific community. However, the degree to which and the mechanisms by which insolation variations control regional and global climate are poorly understood. In particular, the “100-kyr” climate cycle, the dominant feature of nearly all climate records of the last 900,000 years, has always posed a problem to the Milankovitch hypothesis..

..time interval between terminations is not constant; it varies from 84 kyr between Terminations IV and V to 120 kyr between Terminations III and II.

“Still a mystery”. (Maureen Raymo has written many papers on ice ages, is the coauthor of the LR04 ocean core database and cannot be considered an outlier). Her paper claims to solve the problem:

In conclusion, it is proposed that the interaction between obliquity and the eccentricity-modulation of precession as it controls northern hemisphere summer radiation is responsible for the pattern of ice volume growth and decay observed in the late Quaternary.

Solution was unknown, but new proposed solution is from the Theory C Family.


Glacial termination: sensitivity to orbital and CO2 forcing in a coupled climate system model, Yoshimori, Weaver, Marshall & Clarke (2001)

Glaciation (deglaciation) is one of the most extreme and fundamental climatic events in Earth’s history.. As a result, fluctuations in orbital forcing (e.g. Berger 1978; Berger and Loutre 1991) have been widely recognised as the primary triggers responsible for the glacial-interglacial cycles (Berger 1988; Bradley 1999; Broecker and Denton 1990; Crowley and North 1991; Imbrie and Imbrie 1979). At the same time, these studies revealed the complexity of the climate system, and produced several paradoxes which cannot be explained by a simple linear response of the climate system to orbital forcing.

At this point I was interested to find out how well these five cited references (Berger 1988; Bradley 1999; Broecker and Denton 1990; Crowley and North 1991; Imbrie and Imbrie 1979) backed up the evidence for orbital forcing being the primary trigger for glacial cycles.

Broecker & Denton (1990) is in Scientific American which I don’t think counts as a peer-reviewed journal (even though a long time ago I subscribed to it and thought it was a great magazine). I was able to find the abstract only, which coincides with their peer-reviewed paper The Role of Ocean-Atmosphere Reorganization in Glacial Cycles the same year in Quaternary Science Reviews, so I’ll assume they are media hounds promoting their peer-reviewed paper for a wider audience and look at the peer-reviewed paper. After commenting on the problems:

Such a linkage cannot explain synchronous climate changes of similar severity in both polar hemispheres. Also, it cannot account for the rapidity of the transition from full glacial toward full interglacial conditions. If glacial climates are driven by changes in seasonality, then another linkage must exist.

they state:

We propose that Quaternary glacial cycles were dominated by abrupt reorganizations of the ocean- atmosphere system driven by orbitally induced changes in fresh water transports which impact salt structure in the sea. These reorganizations mark switches between stable modes of operation of the ocean-atmosphere system. Although we think that glacial cycles were driven by orbital change, we see no basis for rejecting the possibility that the mode changes are part of a self- sustained internal oscillation that would operate even in the absence of changes in the Earth’s orbital parameters. If so, as pointed out by Saltzman et al. (1984), orbital cycles can merely modulate and pace a self-oscillating climate system.

So this paper is evidence for Theory B or Theory C Family? “..we think that..” “..we see no basis for rejecting the possibility ..self-sustained internal oscillation”. This is evidence for the astronomical theory?

I can’t access Milankovitch theory and climate, Berger 1988 (thanks, Reviews of Geophysics!). If someone has it, please email it to me at scienceofdoom – you know what goes here – gmail.com. The other two references are books, so I can’t access them. Crowley & North 1991 is Paleoclimatology. Vol 16 of Oxford Monograph on Geology and Geophysics, OUP. Imbrie & Imbrie 1979 is Ice Ages: solving the mystery.


Glacial terminations as southern warmings without northern control, E. W. Wolff, H. Fischer and R. Röthlisberger (2009)

However, the reason for the spacing and timing of interglacials, and the sequence of events at major warmings, remains obscure.

“Still a mystery”. This is a little different from Wolff’s comment in the paper above. Elsewhere (see his comments cited in Eleven – End of the Last Ice age) he has stated that ice age terminations are not understood:

Between about 19,000 and 10,000 years ago, Earth emerged from the last glacial period. The whole globe warmed, ice sheets retreated from Northern Hemisphere continents and atmospheric composition changed significantly. Many theories try to explain what triggered and sustained this transformation (known as the glacial termination), but crucial evidence to validate them is lacking.


The Last Glacial Termination, Denton, Anderson, Toggweiler, Edwards, Schaefer & Putnam (2009)

A major puzzle of paleoclimatology is why, after a long interval of cooling climate, each late Quaternary ice age ended with a relatively short warming leg called a termination. We here offer a comprehensive hypothesis of how Earth emerged from the last global ice age..

“Still a mystery”


Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation, Shakun, Clark, He, Marcott, Mix, Zhengyu Liu, Otto-Bliesner,  Schmittner & Bard (2012)

Understanding the causes of the Pleistocene ice ages has been a significant question in climate dynamics since they were discovered in the mid-nineteenth century. The identification of orbital frequencies in the marine 18O/16O record, a proxy for global ice volume, in the 1970s demonstrated that glacial cycles are ultimately paced by astronomical forcing.

The citation is Hays, Imbrie & Shackleton 1976. Theory B with no support.


Northern Hemisphere forcing of Southern Hemisphere climate during the last deglaciation, He, Shakun, Clark, Carlson, Liu, Otto-Bliesner & Kutzbach (2013)

According to the Milankovitch theory, changes in summer insolation in the high-latitude Northern Hemisphere caused glacial cycles through their impact on ice-sheet mass balance. Statistical analyses of long climate records supported this theory, but they also posed a substantial challenge by showing that changes in Southern Hemisphere climate were in phase with or led those in the north.

The citation is Hays, Imbrie & Shackleton 1976. (Many of the same authors in this and the paper above).


Eight glacial cycles from an Antarctic ice core, EPICA Community Members (2004)

The climate of the last 500,000 years (500 kyr) was characterized by extremely strong 100-kyr cyclicity, as seen particularly in ice-core and marine-sediment records. During the earlier part of the Quaternary (before 1 million years ago; 1 Myr BP), cycles of 41 kyr dominated. The period in between shows intermediate behaviour, with marine records showing both frequencies and a lower amplitude of the climate signal. However, the reasons for the dominance of the 100-kyr (eccentricity) over the 41-kyr (obliquity) band in the later part of the record, and the amplifiers that allow small changes in radiation to cause large changes in global climate, are not well understood.

Is this accepting Theory B or not?


Now onto the alphabetical order..

Climatic Conditions for modelling the Northern Hemisphere ice sheets throughout the ice age cycle, Abe-Ouchi, Segawa & Saito (2007)

To explain why the ice sheets in the Northern Hemisphere grew to the size and extent that has been observed, and why they retreated quickly at the termination of each 100 kyr cycle is still a challenge (Tarasov and Peltier, 1997a; Berger et al., 1998; Paillard, 1998; Paillard and Parrenin, 2004). Although it is now broadly accepted that the orbital variations of the Earth influence climate changes (Milankovitch, 1930; Hays et al., 1976; Berger, 1978), the large amplitude of the ice volume changes and the geographical extent need to be reproduced by comprehensive models which include nonlinear mechanisms of ice sheet dynamics (Raymo, 1997; Tarasov and Peltier, 1997b; Paillard, 2001; Raymo et al., 2006).

The papers cited for this broad agreement are Hays et al 1976 once again. And Berger 1978 who says:

It is not the aim of this paper to draw definitive conclusions about the astronomical theory of paleoclimates but simply to provide geologists with accurate theoretical values of the earth’s orbital elements and insolation..

Berger does go on to comment on eccentricity:

Berger 1978

And this is simply again noting that the period for eccentricity is “similar” to the period for the ice age terminations.

Theory B with no support.


Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume, Abe-Ouchi, Saito, Kawamura, Raymo, Okuno, Takahashi & Blatter (2013)

Milankovitch theory proposes that summer insolation at high northern latitudes drives the glacial cycles, and statistical tests have demonstrated that the glacial cycles are indeed linked to eccentricity, obliquity and precession cycles. Yet insolation alone cannot explain the strong 100,000-year cycle, suggesting that internal climatic feedbacks may also be at work. Earlier conceptual models, for example, showed that glacial terminations are associated with the build-up of Northern Hemisphere ‘excess ice’, but the physical mechanisms underpinning the 100,000-year cycle remain unclear.

The citations for the statistical tests are Lisiecki 2010 and Huybers 2011.

Huybers 2011 claims that obliquity and precession (not eccentricity) are linked to deglaciations. This is development of his earlier, very interesting 2007 hypothesis (Glacial variability over the last two million years: an extended depth-derived agemodel, continuous obliquity pacing, and the Pleistocene progression – to which we will return) that obliquity is the prime factor (not necessarily the cause) in deglaciations.

Here is what Huybers says in his 2011 paper, Combined obliquity and precession pacing of late Pleistocene deglaciations:

The cause of these massive shifts in climate remains unclear not for lack of models, of which there are now over thirty, but for want of means to choose among them. Previous statistical tests have demonstrated that obliquity paces the 100-kyr glacial cycles [citations are his 2005 paper with Carl Wunsch and his 2007 paper], helping narrow the list of viable mechanisms, but have been inconclusive with respect to precession (that is, P > 0.05) because of small sample sizes and uncertain timing..

In Links between eccentricity forcing and the 100,000-year glacial cycle, (2010), Lisiecki says:

Variations in the eccentricity (100,000 yr), obliquity (41,000 yr) and precession (23,000 yr) of Earth’s orbit have been linked to glacial–interglacial climate cycles. It is generally thought that the 100,000-yr glacial cycles of the past 800,000 yr are a result of orbital eccentricity [1–4]. However, the eccentricity cycle produces negligible 100-kyr power in seasonal or mean annual insolation, although it does modulate the amplitude of the precession cycle.

Alternatively, it has been suggested that the recent glacial cycles are driven purely by the obliquity cycle [5–7]. Here I use statistical analyses of insolation and the climate of the past five million years to characterize the link between eccentricity and the 100,000-yr glacial cycles. Using cross-wavelet phase analysis, I show that the relative phase of eccentricity and glacial cycles has been stable since 1.2 Myr ago, supporting the hypothesis that 100,000-yr glacial cycles are paced [8–10] by eccentricity [4,11]. However, I find that the time-dependent 100,000-yr power of eccentricity has been anticorrelated with that of climate since 5 Myr ago, with strong eccentricity forcing associated with weaker power in the 100,000-yr glacial cycle.

I propose that the anticorrelation arises from the strong precession forcing associated with strong eccentricity forcing, which disrupts the internal climate feedbacks that drive the 100,000-yr glacial cycle. This supports the hypothesis that internally driven climate feedbacks are the source of the 100,000-yr climate variations.

So she accepts that Theory B is generally accepted, although some Theory C Family advocates are out there, but provides a new hybrid solution of her own.

References for the orbital eccentricity hypothesis [1–4] include Hays et al 1976 and Raymo 1997, cited above. However, Raymo didn’t think it had been demonstrated prior to her 1997 paper, and in that paper she introduces her own hypothesis: that the driver is primarily ice sheet size, with obliquity and precession modulated by eccentricity.

References for the obliquity hypothesis [5-7] include the Huybers & Wunsch 2005 and Huybers 2007 covered just before this reference.

So in summary – going back to how we dragged up these references – Abe-Ouchi and co-authors provide two citations in support of the statistical link between orbital variations and deglaciation. One citation claims primarily obliquity with maybe a place for precession – no link to eccentricity. Another citation claims a new theory for eccentricity as a phase-locking mechanism to an internal climate process.

These are two mutually exclusive ideas. But at least both papers attempted to prove their (exclusive) ideas.


Equatorial insolation: from precession harmonics to eccentricity frequencies, Berger, Loutre, & Mélice (2006):

Since the paper by Hays et al. (1976), spectral analyses of climate proxy records provide substantial evidence that a fraction of the climatic variance is driven by insolation changes in the frequency ranges of obliquity and precession variations. However, it is the variance components centered near 100 kyr which dominate most Upper Pleistocene climatic records, although the amount of insolation perturbation at the eccentricity driven periods close to 100-kyr (mainly the 95 kyr- and 123 kyr-periods) is much too small to cause directly a climate change of ice-age amplitude. Many attempts to find an explanation to this 100-kyr cycle in climatic records have been made over the last decades.

“Still a mystery”.
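The point Berger et al make here – and Lisiecki made above – is that eccentricity modulates the amplitude of the precession cycle without putting significant power at 100 kyr itself. This can be checked with a toy signal (a sketch with illustrative amplitudes, not a real insolation calculation; the 23 kyr and 100 kyr periods are the standard approximate values):

```python
import numpy as np

# Toy "insolation" series: a 23 kyr precession carrier whose amplitude is
# modulated by a 100 kyr eccentricity-like envelope. Amplitudes are
# arbitrary illustrative choices, not real insolation values.
dt = 1.0                      # 1 kyr sampling
t = np.arange(0, 2300, dt)    # 2300 kyr = common period of 23 and 100 kyr
envelope = 1 + 0.5 * np.cos(2 * np.pi * t / 100)   # eccentricity-like modulation
signal = envelope * np.cos(2 * np.pi * t / 23)     # modulated precession

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(t), d=dt)

def power_at(period_kyr):
    """Spectral power at the frequency bin closest to 1/period."""
    return power[np.argmin(np.abs(freqs - 1.0 / period_kyr))]

# The spectrum has a carrier peak at 23 kyr and sidebands at
# 1/23 +/- 1/100 (periods ~18.7 and ~29.9 kyr), but essentially
# nothing at the 100 kyr modulation period itself.
print(power_at(23) > 1e3 * power_at(100))          # carrier dominates 100 kyr
print(power_at(2300 / 123) > 1e3 * power_at(100))  # so does the ~18.7 kyr sideband
```

Amplitude modulation moves power into sidebands around the carrier; it puts essentially none at the modulation period – which is why a linear response to insolation struggles to produce a 100 kyr peak.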


Multistability and hysteresis in the climate-cryosphere system under orbital forcing, Calov & Ganopolski (2005)

In spite of considerable progress in studies of past climate changes, the nature of vigorous climate variations observed during the past several million years remains elusive. A variety of different astronomical theories, among which the Milankovitch theory [Milankovitch, 1941] is the best known, suggest changes in Earth’s orbital parameters as a driver or, at least, a pacemaker of glacial-interglacial climate transitions. However, the mechanisms which translate seasonal and strongly latitude-dependent variations in the insolation into the global-scale climate shifts between glacial and interglacial climate states are the subject of debate.

“Still a mystery”


Ice Age Terminations, Cheng, Edwards, Broecker, Denton, Kong, Wang, Zhang, Wang (2009)

The ice-age cycles have been linked to changes in Earth’s orbital geometry (the Milankovitch or Astronomical theory) through spectral analysis of marine oxygen-isotope records (3), which demonstrate power in the ice-age record at the same three spectral periods as orbitally driven changes in insolation. However, explaining the 100 thousand-year (ky) recurrence period of ice ages has proved to be problematic because although the 100-ky cycle dominates the ice-volume power spectrum, it is small in the insolation spectrum. In order to understand what factors control ice age cycles, we must know the extent to which terminations are systematically linked to insolation and how any such linkage can produce a nonlinear response by the climate system at the end of ice ages.

“Still a mystery”. This paper claims (their new work) that terminations are all about high latitude NH insolation. They state, for the hypothesis of the paper:

In all four cases, observations are consistent with a classic Northern Hemisphere summer insolation intensity trigger for an initial retreat of northern ice sheets.

This is similar to Northern Hemisphere forcing of climatic cycles in Antarctica over the past 360,000 years, Kawamura et al (2007) – not cited here because they didn’t make a statement about “the problem so far”.


Orbital forcing and role of the latitudinal insolation/temperature gradient, Basil Davis & Simon Brewer (2009)

Orbital forcing of the climate system is clearly shown in the Earth’s record of glacial–interglacial cycles, but the mechanism underlying this forcing is poorly understood.

Not sure whether this is classified as “Still a mystery” or Theory B or Theory C Family.


Evidence for Obliquity Forcing of Glacial Termination II, Drysdale, Hellstrom, Zanchetta, Fallick, Sánchez Goñi, Couchoud, McDonald, Maas, Lohmann & Isola (2009)

During the Late Pleistocene, the period of glacial-to-interglacial transitions (or terminations) has increased relative to the Early Pleistocene [~100 thousand years (ky) versus 40 ky]. A coherent explanation for this shift still eludes paleoclimatologists (3). Although many different models have been proposed (4), the most widely accepted one invokes changes in the intensity of high-latitude Northern Hemisphere summer insolation (NHSI). These changes are driven largely by the precession of the equinoxes (5), which produces relatively large seasonal and hemispheric insolation intensity anomalies as the month of perihelion shifts through its ~23-ky cycle.

Their “widely accepted” theory is from the Theory C Family. This is a different theory from the “widely accepted” theory B. Perhaps both are “widely accepted”, hopefully by different groups of scientists.


The role of orbital forcing, carbon dioxide and regolith in 100 kyr glacial cycles, Ganopolski & Calov (2011)

The origin of the 100 kyr cyclicity, which dominates ice volume variations and other climate records over the past million years, remains debatable..

..One of the major challenges to the classical Milankovitch theory is the presence of 100 kyr cycles that dominate global ice volume and climate variability over the past million years (Hays et al., 1976; Imbrie et al., 1993; Paillard, 2001).

This periodicity is practically absent in the principal “Milankovitch forcing” – variations of summer insolation at high latitudes of the Northern Hemisphere (NH).

The eccentricity of Earth’s orbit does contain periodicities close to 100 kyr and the robust phase relationship between glacial cycles and 100-kyr eccentricity cycles has been found in the paleoclimate records (Hays et al., 1976; Berger et al., 2005; Lisiecki, 2010). However, the direct effect of the eccentricity on Earth’s global energy balance is very small.

Moreover, eccentricity variations are dominated by a 400 kyr cycle which is also seen in some older geological records (e.g. Zachos et al., 1997), but is practically absent in the frequency spectrum of the ice volume variations for the last million years.

In view of this long-standing problem, it was proposed that the 100 kyr cycles do not originate directly from the orbital forcing but rather represent internal oscillations in the climate-cryosphere (Gildor and Tziperman, 2001) or climate-cryosphere-carbonosphere system (e.g. Saltzman and Maasch, 1988; Paillard and Parrenin, 2004), which can be synchronized (phase locked) to the orbital forcing (Tziperman et al., 2006).

Alternatively, it was proposed that the 100 kyr cycles result from the terminations of ice sheet buildup by each second or third obliquity cycle (Huybers and Wunsch, 2005) or each fourth or fifth precessional cycle (Ridgwell et al., 1999) or they originate directly from a strong, nonlinear, climate-cryosphere system response to a combination of precessional and obliquity components of the orbital forcing (Paillard, 1998).

“Still a mystery”.
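The arithmetic behind the “every second or third obliquity cycle” and “every fourth or fifth precessional cycle” proposals quoted above is simple: integer bundles of the ~41 kyr and ~23 kyr cycles straddle 100 kyr, so a mixture of skips can produce an average recurrence near 100 kyr without any direct eccentricity forcing. A trivial sketch:

```python
# Bundling arithmetic for the skipped-cycle hypotheses. The periods are
# the standard approximate values for obliquity and precession.
OBLIQUITY = 41    # kyr
PRECESSION = 23   # kyr

obliquity_bundles = [2 * OBLIQUITY, 3 * OBLIQUITY]     # Huybers & Wunsch 2005
precession_bundles = [4 * PRECESSION, 5 * PRECESSION]  # Ridgwell et al 1999

print(obliquity_bundles, sum(obliquity_bundles) / 2)    # [82, 123] 102.5
print(precession_bundles, sum(precession_bundles) / 2)  # [92, 115] 103.5
```

Neither bundle hits 100 kyr exactly; the claim in these papers is that alternating between skips yields a mean spacing close to the observed ~100 kyr recurrence.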


Modeling the Climatic Response to Orbital Variations, Imbrie & Imbrie (1980)

This is not to say that all important questions have been answered. In fact, one purpose of this article is to contribute to the solution of one of the remaining major problems: the origin and history of the 100,000-year climatic cycle.

At least over the past 600,000 years, almost all climatic records are dominated by variance components in a narrow frequency band centered near a 100,000-year cycle (5-8, 12, 21, 38). Yet a climatic response at these frequencies is not predicted by the Milankovitch version of the astronomical theory – or any other version that involves a linear response (5, 6).

This paper was worth citing because the first author is the co-author of Hays et al 1976. For interest, let’s look at what they attempt to demonstrate in their paper. They take the approach of producing different (simple) models with orbital forcing, to try to reproduce the geological record:

The goal of our modeling effort has been to simulate the climatic response to orbital variations over the past 500 kyrs. The resulting model fails to simulate four important aspects of this record. It fails to produce sufficient 100k power; it produces too much 23k and 19k power; it produces too much 413k power; and it loses its match with the record around the time of the last 413k eccentricity minimum..

All of these failures are related to a fundamental shortcoming in the generation of 100k power.. Indeed it is possible that no function will yield a good simulation of the entire 500 kyr record under consideration here, because nonorbitally forced high-frequency fluctuations may have caused the system to flip or flop in an unpredictable fashion. This would be an example of Lorenz’s concept of an almost intransitive system..

..Progress in this direction will indicate what long-term variations need to be explained within the framework of a stochastic model and provide a basis for estimating the degree of unpredictability in climate.


On the structure and origin of major glaciation cycles, Imbrie, Boyle, Clemens, Duffy, Howard, Kukla, Kutzbach, Martinson, McIntyre, Mix, Molfino, Morley, Peterson, Pisias, Prell, Raymo, Shackleton & Toggweiler (1992)

It is now widely believed that these astronomical influences, through their control of the seasonal and latitudinal distribution of incident solar radiation, either drive the major climate cycles externally or set the phase of oscillations that are driven internally..

..In this paper we concentrate on the 23-kyr and 41-kyr cycles of glaciation. These prove to be so strongly correlated with large changes in seasonal radiation that we regard them as continuous, essentially linear responses to the Milankovitch forcing. In a subsequent paper we will remove these linearly forced components from each time series and examine the residual response. The residual response is dominated by a 100-kyr cycle, which has twice the amplitude of the 23- and 41-kyr cycles combined. In the band of periods near 100 kyr, variations in radiation correlated with climate are so small, compared with variations correlated with the two shorter climatic cycles, that the strength of the 100-kyr climate cycle must result from the channeling of energy into this band by mechanisms operating within the climate system itself.

In Part 2, Imbrie et al (same authors) 1993 they highlight in more detail the problem of explaining the 100 kyr period:

1. One difficulty in finding a simple Milankovitch explanation is that the amplitudes of all 100-kyr radiation signals are very small [Hays et al., 1976]. As an example, the amplitude of the 100-kyr radiation cycle at June 65N (a signal often used as a forcing in Milankovitch theories) is only 2 W/m² (Figure 1). This is 1 order of magnitude smaller than the same insolation signal in the 23- and 41-kyr bands, yet the system’s response in these two bands combined has about half the amplitude observed at 100 kyr.

2. Another fundamental difficulty is that variations in eccentricity are not confined to periods near 100 kyr. In fact, during the late Pleistocene, eccentricity variations at periods near 100 kyr are of the same order of magnitude as those at 413 kyr.. yet the d18O record for this time interval has no corresponding spectral peak near 400 kyr..

3. The high coherency observed between 100 kyr eccentricity and d18O signals is an average that hides significant mismatches, notably about 400 kyrs ago.

Their proposed solution:

In our model, the coupled system acts as a nonlinear amplifier that is particularly sensitive to eccentricity-driven modulations in the 23,000-year sea level cycle. During an interval when sea level is forced upward from a major low stand by a Milankovitch response acting either alone or in combination with an internally driven, higher-frequency process, ice sheets grounded on continental shelves become unstable, mass wasting accelerates, and the resulting deglaciation sets the phase of one wave in the train of 100 kyr oscillations.

This doesn’t really appear to be Theory B.


Orbital forcing of Arctic climate: mechanisms of climate response and implications for continental glaciation, Jackson & Broccoli (2003)

The growth and decay of terrestrial ice sheets during the Quaternary ultimately result from the effects of changes in Earth’s orbital geometry on climate system processes. This link is convincingly established by Hays et al. (1976) who find a correlation between variations of terrestrial ice volume and variations in Earth’s orbital eccentricity, obliquity, and longitude of the perihelion.

Hays et al 1976. Theory B with no support.


A causality problem for Milankovitch, Karner & Muller (2000)

We can conclude that the standard Milankovitch insolation theory does not account for the terminations of the ice ages. That is a serious and disturbing conclusion by itself. We can conclude that models that attribute the terminations to large insolation peaks (or, equivalently, to peaks in the precession parameter), such as the recent one by Raymo (23), are incompatible with the observations.

I’ll take this as “Still a mystery”.


Linear and non-linear response of late Neogene glacial cycles to obliquity forcing and implications for the Milankovitch theory, Lourens, Becker, Bintanja, Hilgen, Tuenter & van de Wal, Ziegler (2010)

Through the spectral analyses of marine oxygen isotope (d18O) records it has been shown that ice-sheets respond both linearly and non-linearly to astronomical forcing.

References in support of this statement include Imbrie et al 1992 & Imbrie et al 1993 that we reviewed above, and Pacemaking the Ice Ages by Frequency Modulation of Earth’s Orbital Eccentricity, JA Rial (1999):

The theory finds support in the fact that the spectra of the d18O records contain some of the same frequencies as the astronomical variations (2–4), but a satisfactory explanation of how the changes in orbital eccentricity are transformed into the 100-ky quasi-periodic fluctuations in global ice volume indicated by the data has not yet been found (5).

For interest, the claim for the new work in this paper:

Evidence from power spectra of deep-sea oxygen isotope time series suggests that the climate system of Earth responds nonlinearly to astronomical forcing by frequency modulating eccentricity-related variations in insolation. With the help of a simple model, it is shown that frequency modulation of the approximate 100,000-year eccentricity cycles by the 413,000-year component accounts for the variable duration of the ice ages, the multiple-peak character of the time series spectra, and the notorious absence of significant spectral amplitude at the 413,000-year period. The observed spectra are consistent with the classic Milankovitch theories of insolation..

So if we consider the three references they provide in support of the “astronomical hypothesis”, the latest one says that a solution to the 100 kyr problem has not yet been found – of course, this 1999 paper gives its own best shot. Rial (1999) clearly doesn’t think that Imbrie et al 1992/1993 solved the problem.

And, of course, Rial (1999) proposes a different solution to Imbrie et al 1992/1993.


Dynamics between order and chaos in conceptual models of glacial cycles, Takahito Mitsui & Kazuyuki Aihara, Climate Dynamics (2013)

Hays et al. (1976) presented strong evidence for astronomical theories of ice ages. They found the primary frequencies of astronomical forcing in the geological spectra of marine sediment cores. However, the dominant frequency in geological spectra is approximately 1/100 kyr⁻¹, although this frequency component is negligible in the astronomical forcing. This is referred to as the ‘100 kyr problem.’

However, the linear response cannot appropriately account for the 100 kyr periodicity (Hays et al. 1976).

Ghil (1994) explained the appearance of the 100 kyr periodicity as a nonlinear resonance to the combination tone 1/109 kyr⁻¹ between precessional frequencies 1/19 and 1/23 kyr⁻¹. Contrary to the linear resonance, the nonlinear resonance can occur even if the forcing frequencies are far from the internal frequency of the response system.

Benzi et al. (1982) proposed stochastic resonance as a mechanism of the 100 kyr periodicity, where the response to small external forcing is amplified by the effect of noise.

Tziperman et al. (2006) proposed that the timing of deglaciations is set by the astronomical forcing via the phase-locking mechanism.. De Saedeleer et al. (2013) suggested generalized synchronization (GS) to describe the relation between the glacial cycles and the astronomical forcing. GS means that there is a functional relation between the climate state and the state of the astronomical forcing. They also showed that the functional relation may not be unique for a certain model.

However, the nature of the relation remains to be elucidated.

“Still a mystery”.
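The combination tone quoted from Ghil (1994) is ordinary difference-frequency arithmetic: a quadratic nonlinearity acting on the two precessional lines generates power at the difference of their frequencies, since cos(a)·cos(b) = ½[cos(a−b) + cos(a+b)]. A quick check, using the approximate 19 and 23 kyr precession periods:

```python
# Difference frequency of the two main precessional lines.
# 1/19 - 1/23 = (23 - 19) / (19 * 23) = 4/437, i.e. a period of 437/4 kyr.
f_diff = 1 / 19 - 1 / 23        # kyr^-1
period = 1 / f_diff             # kyr

print(round(period, 2))         # 109.25 -- the ~1/109 kyr^-1 tone Ghil cites
```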


Glacial cycles and orbital inclination, Richard Muller & Gordon MacDonald, Nature (1995)

According to the Milankovitch theory, the 100 kyr glacial cycle is caused by changes in insolation (solar heating) brought about by variations in the eccentricity of the Earth’s orbit. There are serious difficulties with this theory: the insolation variations appear to be too small to drive the cycles and a strong 400 kyr modulation predicted by the theory is not present..

We suggest that a radical solution is necessary to solve these problems, and we propose that the 100 kyr glacial cycle is caused, not by eccentricity, but by a previously ignored parameter: the orbital inclination, the tilt of the Earth’s orbital plane..

“Still a mystery”, with the new solution of a member of the Theory C Family.


Terminations VI and VIII (∼ 530 and ∼ 720 kyr BP) tell us the importance of obliquity and precession in the triggering of deglaciations, F. Parrenin & D. Paillard (2012)

The main variations of ice volume of the last million years can be explained from orbital parameters by assuming climate oscillates between two states: glaciations and deglaciations (Parrenin and Paillard, 2003; Imbrie et al., 2011) (or terminations). An additional combination of ice volume and orbital parameters seems to form the trigger of a deglaciation, while only orbital parameters seem to play a role in the triggering of glaciations. Here we present an optimized conceptual model which realistically reproduce ice volume variations during the past million years and in particular the timing of the 11 canonical terminations. We show that our model loses sensitivity to initial conditions only after ∼ 200 kyr at maximum: the ice volume observations form a strong attractor. Both obliquity and precession seem necessary to reproduce all 11 terminations and both seem to play approximately the same role.

Note that eccentricity variations are not cited as the cause.

The support for orbital parameters explaining the ice age glaciation/deglaciation are two papers. First, Parrenin & Paillard: Amplitude and phase of glacial cycles from a conceptual model (2003):

Although we find astronomical frequencies in almost all paleoclimatic records [1,2], it is clear that the climatic system does not respond linearly to insolation variations [3]. The first well-known paradox of the astronomical theory of climate is the ‘100 kyr problem’: the largest variations over the past million years occurred approximately every 100 kyr, but the amplitude of the insolation signal at this frequency is not significant. Although this problem remains puzzling in many respects, multiple equilibria and thresholds in the climate system seem to be key notions to explain this paradoxical frequency.

Their solution:

To explain these paradoxical amplitude and phase modulations, we suggest here that deglaciations started when a combination of insolation and ice volume was large enough. To illustrate this new idea, we present a simple conceptual model that simulates the sea level curve of the past million years with very realistic amplitude modulations, and with good phase modulations.

The other paper cited in support of an astronomical solution is A phase-space model for Pleistocene ice volume, Imbrie, Imbrie-Moore & Lisiecki, Earth and Planetary Science Letters (2011)

Numerous studies have demonstrated that Pleistocene glacial cycles are linked to cyclic changes in Earth’s orbital parameters (Hays et al., 1976; Imbrie et al., 1992; Lisiecki and Raymo, 2007); however, many questions remain about how orbital cycles in insolation produce the observed climate response. The most contentious problem is why late Pleistocene climate records are dominated by 100-kyr cyclicity.

Insolation changes are dominated by 41-kyr obliquity and 23-kyr precession cycles whereas the 100-kyr eccentricity cycle produces negligible 100-kyr power in seasonal or mean annual insolation. Thus, various studies have proposed that 100-kyr glacial cycles are a response to the eccentricity-driven modulation of precession (Raymo, 1997; Lisiecki, 2010b), bundling of obliquity cycles (Huybers and Wunsch, 2005; Liu et al., 2008), and/or internal oscillations (Saltzman et al., 1984; Gildor and Tziperman, 2000; Toggweiler, 2008).

Their new solution:

We present a new, phase-space model of Pleistocene ice volume that generates 100-kyr cycles in the Late Pleistocene as a response to obliquity and precession forcing. Like Parrenin and Paillard (2003), we use a threshold for glacial terminations. However, ours is a phase-space threshold: a function of ice volume and its rate of change. Our model is the first to produce an orbitally driven increase in 100-kyr power during the mid-Pleistocene transition without any change in model parameters.

Theory C Family – two (relatively) new papers (2003 & 2011) with similar theories are presented as support of the astronomical theory causing the ice ages. Note that the theory in Imbrie et al 2011 is not the 100 kyr eccentricity variation proposed by Hays, Imbrie and Shackleton 1976.


Coherence resonance and ice ages, Jon D. Pelletier, JGR (2003)

The processes and feedbacks responsible for the 100-kyr cycle of Late Pleistocene global climate change are still being debated. This paper presents a numerical model that integrates (1) long-wavelength outgoing radiation, (2) the ice-albedo feedback, and (3) lithospheric deflection within the simple conceptual framework of coherence resonance. Coherence resonance is a dynamical process that results in the amplification of internally generated variability at particular periods in a system with bistability and delay feedback..

..The 100-kyr cycle is a free oscillation in the model, present even in the absence of external forcing.

“Still a mystery” – with the new solution that is not astronomical forcing.


The 41 kyr world: Milankovitch’s other unsolved mystery, Maureen E. Raymo & Kerim Nisancioglu (2003)

All serious students of Earth’s climate history have heard of the “100 kyr problem” of Milankovitch orbital theory, namely the lack of an obvious explanation of the dominant 100 kyr periodicity in climate records of the last 800,000 years.

“Still a mystery” – except that Raymo thinks she has found the solution (see earlier)


Is the spectral signature of the 100 kyr glacial cycle consistent with a Milankovitch origin, Ridgwell, Watson & Raymo (1999)

Global ice volume proxy records obtained from deep-sea sediment cores, when analyzed in this way produce a narrow peak corresponding to a period of ~100 kyr that dominates the low frequency part of the spectrum. This contrasts with the spectrum of orbital eccentricity variation, often assumed to be the main candidate to pace the glaciations [Hays et al 1980], which shows two distinct peaks near 100 kyr and substantial power near the 413 kyr period.

Then their solution:

Milankovitch theory seeks to explain the Quaternary glaciations via changes in seasonal insolation caused by periodic changes in the Earth’s obliquity, orbital precession and eccentricity. However, recent high-resolution spectral analysis of d18O proxy climate records have cast doubt on the theory.. Here we show that the spectral signature of d18O records are entirely consistent with Milankovitch mechanisms in which deglaciations are triggered every fourth or fifth precessional cycle. Such mechanisms may involve the buildup of excess ice due to low summertime insolation at the previous precessional high.

So they don’t accept Theory B. They don’t claim the theory has been previously solved and they introduce a Theory C Family.


In defense of Milankovitch, Gerard Roe (2006) – we reviewed this paper in Fifteen – Roe vs Huybers:

The Milankovitch hypothesis is widely held to be one of the cornerstones of climate science. Surprisingly, the hypothesis remains not clearly defined despite an extensive body of research on the link between global ice volume and insolation changes arising from variations in the Earth’s orbit.

And despite his interesting efforts at solving the problem he states towards the end of his paper:

The Milankovitch hypothesis as formulated here does not explain the large rapid deglaciations that occurred at the end of some of the ice age cycles.

Was it still a mystery, or just not well defined? And from his new work, I’m not sure whether he thinks he has solved the reason for some ice age terminations, or whether terminations remain a mystery.


The 100,000-Year Ice-Age Cycle Identified and Found to Lag Temperature, Carbon Dioxide, and Orbital Eccentricity, Nicholas J. Shackleton (the Shackleton from Hays et al 1976), (2000)

It is generally accepted that this 100-ky cycle represents a major component of the record of changes in total Northern Hemisphere ice volume (3). It is difficult to explain this predominant cycle in terms of orbital eccentricity because “the 100,000-year radiation cycle (arising from eccentricity variations) is much too small in amplitude and too late in phase to produce the corresponding climatic cycle by direct forcing”

So the Hays, Imbrie & Shackleton 1976 Theory B is not correct.

He does state:

Hence, the 100,000-year cycle does not arise from ice sheet dynamics; instead, it is probably the response of the global carbon cycle that generates the eccentricity signal by causing changes in atmospheric carbon dioxide concentration.

Note that this is in opposition to the papers by Imbrie et al (2011) and Parrenin & Paillard (2003) that were cited by Parrenin & Paillard (2012) in support of the astronomical theory of the ice ages.


Consequences of pacing the Pleistocene 100 kyr ice ages by nonlinear phase locking to Milankovitch forcing, Tziperman, Raymo, Huybers & Wunsch (2006)

Hays et al. [1976] established that Milankovitch forcing (i.e., variations in orbital parameters and their effect on the insolation at the top of the atmosphere) plays a role in glacial cycle dynamics. However, precisely what that role is, and what is meant by “Milankovitch theories” remains unclear despite decades of work on the subject [e.g., Wunsch, 2004; Rial and Anaclerio, 2000]. Current views vary from the inference that Milankovitch variations in insolation drives the glacial cycle (i.e., the cycles would not exist without Milankovitch variations), to the Milankovitch forcing causing only weak climate perturbations superimposed on the glacial cycles. A further possibility is that the primary influence of the Milankovitch forcing is to set the frequency and phase of the cycles (e.g., controlling the timing of glacial terminations or of glacial inceptions). In the latter case, glacial cycles would exist even in the absence of the insolation changes, but with different timing.

“Still a mystery” – but now solved with a Theory C Family (in their paper).


Quantitative estimate of the Milankovitch-forced contribution to observed Quaternary climate change, Carl Wunsch (2004)

The so-called Milankovitch hypothesis, that much of inferred past climate change is a response to near-periodic variations in the earth’s position and orientation relative to the sun, has attracted a great deal of attention. Numerous textbooks (e.g., Bradley, 1999; Wilson et al., 2000; Ruddiman, 2001) of varying levels and sophistication all tell the reader that the insolation changes are a major element controlling climate on time scales beyond about 10,000 years.

A recent paper begins “It is widely accepted that climate variability on timescales of 10⁴ yrs to 10⁵ yrs is driven primarily by orbital, or so-called Milankovitch, forcing.” (McDermott et al., 2001). To a large extent, embrace of the Milankovitch hypothesis can be traced to the pioneering work of Hays et al. (1976), who showed, convincingly, that the expected astronomical periods were visible in deep-sea core records..

..The long-standing question of how the slight Milankovitch forcing could possibly force such an enormous glacial–interglacial change is then answered by concluding that it does not do so.

“Still a mystery” – Wunsch does not accept Theory B and at this time did not accept the Theory C Family (he later co-authored a Theory C Family paper with Huybers). I cited this before in Part Six – “Hypotheses Abound”.


Individual contribution of insolation and CO2 to the interglacial climates of the past 800,000 years, Qiu Zhen Yin & André Berger (2012)

Climate variations of the last 3 million years are characterized by glacial-interglacial cycles which are generally believed to be driven by astronomically induced insolation changes.

No citation for the claim. Of course I agree that it is “generally believed”. Is this theory B? Or theory C? Or not sure?


Summary of the Papers

Out of about 300 papers checked, I found 34 papers (I might have missed a few) with a statement on the major cause of the ice ages separate from what they attempted to prove in their paper. These 34 papers were reviewed, with a further handful of cited papers examined to see what support they offered for the claim of the paper in question.

In respect of “What has been demonstrated up until our paper” – I count:

  • 19 “still a mystery”
  • 9 propose theory B
  • 6 supporting theory C

I have question marks over my own classification of about 10 of these because they lack clarity on what they believe is the situation to date.

Of course, from the point of view of the papers reviewed each believes they have some solution for the mystery. That’s not primarily what I was interested in.

I wanted to see what all papers accept as the story so far, and what evidence they bring for this belief.

I found only one paper claiming theory B that attempted to produce any significant evidence in support.


Hays, Imbrie & Shackleton (1976) did not prove Theory B. They suggested it. Invoking “probably non-linearity” does not constitute proof for an apparent frequency correlation. Specifically, half an apparent frequency correlation – given that eccentricity has a 413 kyr component as well as a 100 kyr component.
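The point about “half an apparent frequency correlation” can be illustrated with a toy spectrum. This is a sketch using synthetic sinusoids, not a real eccentricity series: if ice volume simply tracked eccentricity, a periodogram of the climate record should show peaks at both ~413 kyr and ~100 kyr.

```python
import numpy as np

# Synthetic stand-in for eccentricity forcing: two sinusoids at the
# 413 kyr and 100 kyr periods (amplitudes illustrative only)
dt = 1.0                      # time step in kyr
t = np.arange(0, 2000, dt)    # a 2 Myr record
signal = np.sin(2 * np.pi * t / 413) + np.sin(2 * np.pi * t / 100)

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)   # cycles per kyr

def peak_period(lo_p, hi_p):
    """Strongest period (kyr) in a band; FFT bin spacing limits precision."""
    band = (freqs > 1 / hi_p) & (freqs < 1 / lo_p)
    return 1 / freqs[band][np.argmax(power[band])]

print(peak_period(300, 500))   # near 413 kyr (bin-limited)
print(peak_period(80, 120))    # near 100 kyr
```

Both lines show up clearly in the synthetic forcing – but the δ¹⁸O records show only the ~100 kyr peak without its ~413 kyr companion, which is exactly the problem with treating the frequency match as proof.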

Some physical mechanism is necessary. Of course, I’m certain Hays, Imbrie & Shackleton understood this (I’ve read many of their later papers).

Of the papers we reviewed, over half indicate that the solution is still a mystery. That is fine. I agree it is a mystery.

Some papers indicate that the theory is widely believed but not necessarily that they do. That’s probably fine. Although it is confusing for non-specialist readers of their paper.

Some papers cite Hays et al 1976 as support for theory B. This is amazing.

Some papers claim “astronomical forcing” and in support cite Hays et al 1976 plus a paper with a different theory from the Theory C Family. This is also amazing.

Some papers cite support for the Theory C Family – an astronomical theory to explain the ice ages, but a different theory from Hays et al 1976. Sometimes their cited papers align with their claim. However, among the papers that accept something in the Theory C Family there is no consensus on which version of the Theory C Family and, therefore, no consensus on which papers support it.

How can papers cite Hays et al for support of the astronomical theory of ice age inception/termination?

It is required to put forward citations for just about every claim in a paper even if the entire world has known it from childhood. It seems to be a journal convention/requirement:

The sun rises each day [see Kepler 1596; Newton 1687, Plato 370 BC]

Really? Newton didn’t actually prove it in his paper? Oh, you know what, I just had a quick look at the last few papers in my field and copied their citations so I could get on with putting forward my theory. Come on, we all know the sun rises every day, look out the window (unless you live in England). Anyway, so glad you called, let me explain my new theory, it solves all those other problems, I’ve really got something here..

Well, that might be part of the answer. It isn’t excusable, but introductions don’t have the focus they should have.

Why the Belief in Theory B?

This part I can’t answer. Lots of people have put forward theories, none is generally accepted. The reason for the ice age terminations is unknown. Or known by a few people and not yet accepted by the climate science community.

Is it ok to accept something that everyone else seems to believe even though they all actually have a different theory? Is it ok to accept something as proven that is not really proven, because it comes from a famous paper with 2,500 citations?

Finally, the fact that most papers have some vague words at the start about the “orbital” or “astronomical” theory for the ice ages doesn’t mean that this theory has any support. Being scientific, being skeptical, means asking for evidence and definitely not accepting an idea just because “everyone else” appears to accept it.

I am sure people will take issue with me. In another blog I was told that scientists were just “dotting the i’s and crossing the t’s” and none of this was seriously in doubt. Apparently, I was following creationist tactics of selective and out-of-context quoting..

Well, I will be delighted and no doubt entertained to read these comments, but don’t forget to provide evidence for the astronomical theory of the ice ages.

Articles in this Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


Note 1: The temperature fluctuations measured in Antarctica are a lot smaller than Greenland but still significant and still present for similar periods. There are also some technical challenges with calculating the temperature change in Antarctica (the relationship between d18O and local temperature) that have been better resolved in Greenland.


In Part Eleven we looked at the end of the last ice age. We mainly reviewed Shakun et al 2012, who provided some very interesting data on the timing of Southern Hemisphere and Northern Hemisphere temperatures, along with atmospheric CO2 values. In brief: the SH starts to warm, then CO2 increases very close in time with SH temperatures, providing positive feedback on the initial temperature rise – and global temperatures follow the SH the whole way:

From Shakun et al 2012


Figure 1

This Nature paper also provided some modeling work which I had some criticism of, but it wasn’t the focus of the paper. Eric Wolff, one of the key EPICA steering committee members, also had similar criticisms of the modeling, which were published in the same Nature edition.

In this article we will look at He et al 2013, published in Nature, which is a modeling study of the same events. One of the co-authors is Jeremy Shakun, the lead author of the paper just mentioned. The co-authors also include Bette Otto-Bliesner, one of the lead authors of the IPCC AR5 chapter on Paleoclimate.

For new readers, I suggest reading:

He et al 2013

Readers who have followed this series will see that the abstract (cited below) covers some familiar territory:

According to the Milankovitch theory, changes in summer insolation in the high-latitude Northern Hemisphere caused glacial cycles through their impact on ice-sheet mass balance.

Statistical analyses of long climate records supported this theory, but they also posed a substantial challenge by showing that changes in Southern Hemisphere climate were in phase with or led those in the north.

Although an orbitally forced Northern Hemisphere signal may have been transmitted to the Southern Hemisphere, insolation forcing can also directly influence local Southern Hemisphere climate, potentially intensified by sea-ice feedback, suggesting that the hemispheres may have responded independently to different aspects of orbital forcing.

Signal processing of climate records cannot distinguish between these conditions, however, because the proposed insolation forcings share essentially identical variability.

Here we use transient simulations with a coupled atmosphere–ocean general circulation model to identify the impacts of forcing from changes in orbits, atmospheric CO2 concentration, ice sheets and the Atlantic meridional overturning circulation (AMOC) on hemispheric temperatures during the first half of the last deglaciation (22–14.3 kyr BP).

Although based on a single model, our transient simulation with only orbital changes supports the Milankovitch theory in showing that the last deglaciation was initiated by rising insolation during spring and summer in the mid-latitude to high-latitude Northern Hemisphere and by terrestrial snow–albedo feedback.

[Emphasis added]. The abstract continues:

The simulation with all forcings best reproduces the timing and magnitude of surface temperature evolution in the Southern Hemisphere in deglacial proxy records.

This is a similar modeling result to the paper in Part Nine, which had the same approach of individual “forcings” plus a simulation with all “forcings” combined. I put “forcings” in quotes because ice sheets, GHGs and meltwater fluxes are actually feedbacks, but current GCMs are unable to generate them internally, so they have to be imposed.

AMOC changes associated with an orbitally induced retreat of Northern Hemisphere ice sheets is the most plausible explanation for the early Southern Hemisphere deglacial warming and its lead over Northern Hemisphere temperature; the ensuing rise in atmospheric CO2 concentration provided the critical feedback on global deglaciation.

In this paper the GCM simulations are:

  • ORB (22–14.3 kyr BP), forced only by transient variations of orbital configuration
  • GHG (22–14.3 kyr BP), forced only by transient variations of atmospheric greenhouse gas concentrations
  • MOC (19–14.3 kyr BP), forced only by transient variations of meltwater fluxes from the Northern Hemisphere (NH) and Antarctic ice sheets
  • ICE (19–14.3 kyr BP), forced only by quasi-transient variations of ice-sheet orography and extent based on the ICE-5G (VM2) reconstruction.

And then there is an ALL simulation which combines all of these forcings. The GCM used is CCSM3 (we saw CCSM4, an updated version of CCSM3, used in Part Ten – GCM IV).

The idea behind the paper is to answer a few questions, one of which is why, if high latitude Northern Hemisphere (NH) solar insolation changes are the key to understanding ice ages, did the SH warm first? (This question was also addressed in Shakun et al 2012).

Another idea behind the paper is to try and simulate the actual temperature rises in both NH and SH during the last deglaciation.

Let’s take a look..

Their first figure is a little hard to get into but the essence is that blue is the model with only orbital forcing (ORB), red is the model with ALL forcings and black is the proxy reconstruction of temperature (at various locations).

From He et al 2013


Figure 2 

We can see that orbital forcing on its own has just about no impact on any of the main temperature metrics, and we can see that Antarctica and Greenland have different temperature histories in this period.

  • We can see that their ALL model did a reasonable job of reconstructing the main temperature trends.
  • We can also see that it did a poor job of capturing the temperature fluctuations on the scale of centuries to 1kyr when they occurred.

For reference here is the Greenland record from NGRIP from 20k – 10kyrs BP:


Figure 3 – NGRIP data

We can see that the main warming in Greenland (at least this location in N. Greenland) took place around 15 kyrs ago, whereas Antarctica (compare figure 1 from Shakun et al) started its significant warming around 18 kyrs ago.

The paper basically demonstrates that they can capture two main temperature trends due to two separate effects:

  1. The “cooling” in Greenland from 19k-17k years with a warming in Antarctica over the same period – due to the MOC
  2. The continued warming from 17k-15k in both regions due to GHGs

Note that my NGRIP data shows a different temperature trend from their GISP data and I don’t know why.

Let’s understand what the model shows – this data is from their Supplementary data found on the Nature website.

First, 19-17 kyrs ago, Antarctica (AN) shows a significant warming, while Greenland (GI) shows a larger cooling:

He et al 2013-figS25

Figure 4

Note that the MOC (yellow) is the simulation that produces both the GI and AN change. (SUM is the values of the individual runs added together, while ALL is the simulation with all forcings combined).

Second, 17 – 15 kyrs ago, Antarctica (AN) continues its warming and Greenland (GI) also warms:

He et al 2013-figS27

Figure 5

Note that GHG (pink) is the primary cause of the temperature rises now.
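A note on how SUM and ALL are typically compared in these factorial experiments. A minimal sketch with made-up anomaly numbers (the real values are in He et al’s supplementary figures): if the climate response were linear, the sum of the single-forcing runs would equal the all-forcings run, and the residual measures the interaction between forcings.

```python
# Hypothetical regional temperature anomalies (°C) for one interval,
# one per single-forcing run -- illustrative numbers only
runs = {"ORB": 0.1, "GHG": 0.4, "MOC": -2.5, "ICE": -0.2}

linear_sum = sum(runs.values())      # the paper's "SUM"
all_forcings = -2.1                  # hypothetical "ALL" run result

# zero if forcings combine linearly; nonzero means the forcings interact
interaction = all_forcings - linear_sum
print(interaction)
```

A non-negligible ALL − SUM residual is itself interesting: it means feedbacks like sea ice and snow albedo respond differently when the forcings act together than when they act alone.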

The temperature trends over time give a better way of viewing this. I added some annotations because the layout wasn’t immediately obvious (to me):

He et al 2013-fig4-annotated-499px

Figure 6 – Red/blue annotations on side, and orange annotations on top

Again, as with figure 1, we can see that the main trends are quite well simulated but the model doesn’t pick shorter period variations.

The MOC in brief

A quick explanation – the MOC brings warm tropical surface water to the high northern latitudes, warming them. The cold water returns at depth, making a really big circulation. When this circulation is disrupted, Antarctica gets more tropical water and warms up (Antarctica has a similar large-scale circulation running from the tropics at the surface and back at depth), while the northern polar region cools down.

It’s called the bipolar seesaw.

When you pour an extremely large amount of fresh water into the high-latitude ocean it slows down, or turns off, the MOC. This is because fresh water is less dense than salty water: it can’t sink, so it slows down the circulation.

So – if you have lots of ice melting in the high northern latitudes it flows into the ocean, slowing down the MOC, cooling the high northern latitudes and warming up Antarctica.

That’s what their model shows.
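The seesaw mechanism is often captured in a two-box model in the style of Stocker & Johnsen (2003): the southern box relaxes toward the inverted northern temperature anomaly with a long ocean timescale. A minimal sketch with an illustrative timescale, not the model used in the paper:

```python
import numpy as np

tau = 1000.0                     # yr, southern heat-reservoir timescale (illustrative)
dt = 10.0                        # yr, integration step
t = np.arange(0, 4000, dt)

# step cooling in the north at year 500, e.g. from an MOC slowdown
T_north = np.where(t >= 500, -1.0, 0.0)

# the south relaxes toward the *inverted* northern anomaly
T_south = np.zeros_like(t)
for i in range(1, t.size):
    T_south[i] = T_south[i-1] + (dt / tau) * (-T_north[i-1] - T_south[i-1])

print(T_south[-1])   # the south gradually warms while the north stays cold
```

On this picture a southern lead over Greenland temperature needs no southern forcing at all – it is just the slow, integrated, inverted response to northern MOC changes.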

The available data on the MOC supports the idea. Here is part d of their figure 1 – the black line is the proxy reconstruction:

He et al 2013 fig1d

Figure 7

The units on the left are volume rates of water flowing between the tropics and high northern latitudes.

What Did Orbital Forcing do in their Model?

If we look back at their figure 1 (our figure 2) we see no change to anything as a result of simulation ORB so the abstract might seem a little confusing when their paper indicates that insolation, aka the Milankovitch theory, is what causes the whole chain of events.

In their figure 2 they show a geographical view of polar and high-latitude summer temperature changes as a result of simulation ORB.

The initial increase of the mid-latitude to high-latitude NH spring–summer insolation between 22 and 19 kyr BP was about threefold that in the SH (Fig. 2a, b). Furthermore, the decrease in surface albedo from the melting of terrestrial snow cover in the NH results in additional net solar flux absorption in the NH (Supplementary Figs 12–15). Consequently, NH summers in simulation ORB warm by up to 2°C in the Arctic and by up to 4°C over Eurasia, with an area average of 0.9°C warming in mid to high latitudes in the NH (Fig. 2c, e).

In their model this doesn’t affect Greenland (for reasons I don’t understand). They claim:

Our ORB simulation thus supports the Milankovitch theory in showing that substantial summer warming occurs in the NH at the end of the Last Glacial Maximum as a result of the larger increase in high-latitude spring–summer insolation in the NH and greater sensitivity of the land-dominated northern high latitudes to insolation forcing from the snow–albedo feedback.

This orbitally induced warming probably initiated the retreat of NH ice sheets and helped sustain their retreat during the Oldest Dryas.

[Emphasis added].
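For orientation on these spring–summer insolation numbers, daily-mean TOA insolation can be sketched with the standard formula for a given latitude and solar declination (see e.g. Hartmann, Global Physical Climatology). This sketch sets the sun–earth distance factor to 1, so it only captures the obliquity part of the orbital changes; precession and eccentricity work through the distance factor, which is omitted here.

```python
import math

def daily_mean_insolation(lat_deg, dec_deg, S0=1361.0):
    """Daily-mean TOA insolation (W/m²); distance factor omitted (set to 1)."""
    phi, dec = math.radians(lat_deg), math.radians(dec_deg)
    x = -math.tan(phi) * math.tan(dec)
    if x >= 1.0:
        return 0.0                                   # polar night
    h0 = math.pi if x <= -1.0 else math.acos(x)      # half-day length (radians)
    return (S0 / math.pi) * (h0 * math.sin(phi) * math.sin(dec)
                             + math.cos(phi) * math.cos(dec) * math.sin(h0))

# summer solstice at 60°N: present obliquity vs a higher obliquity
q_now = daily_mean_insolation(60.0, 23.44)
q_hi = daily_mean_insolation(60.0, 24.50)
print(q_now, q_hi - q_now)   # roughly 490 W/m², with ~1° of obliquity worth ~15 W/m²
```

This is why the 60°N insolation curve swings by tens of W/m² over tens of kyrs: small changes in obliquity (via declination) and in the distance factor (via precession/eccentricity) are multiplied by a large baseline flux.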


1. If we run the same orbital simulation at 104, 83 or 67 kyrs BP (or quite a few other times) what would we find? Here are the changes in insolation at 60°N from 130 kyrs BP to the present:


Figure 8

It’s not at all clear what is special about 21-18 kyr BP insolation. It’s no surprise that a GCM produces a local temperature increase when local insolation rises.

2. The meltwater pulse injected in the model is not derived from a calculation of any ice melt as a result of increased summer temperatures over ice sheets, it is an applied forcing. Given that the ice melt slows down the MOC and therefore acts to reduce the high latitude temperature, the MOC should act as a negative feedback on any ice/snow melt.

3. The Smith & Gregory 2012 paper that we looked at in Part Nine appears to show different effects from the individual forcings to those found by He et al. Because 20 – 15 kyrs is a little compressed in Smith & Gregory I can’t be sure. Take, for example, the effect of (only) ice sheets during this period: quite an effect in SG2012 over Greenland, nothing in He et al (see figure 6 above).

From Smith & Gregory 2012


Figure 9


It’s an interesting paper, showing that changes in the large scale ocean currents between tropics and poles (the MOC) can account for a Greenland cooling and an Antarctic warming roughly in line with the proxy records. Most lines of evidence suggest that large-scale ice melt is the factor that disrupts the MOC.

Perhaps high latitude insolation changes at about 20 kyrs BP caused massive ice melt, which slowed the MOC, which warmed Antarctica, which led (by mechanisms unknown) to large increases in CO2, which created positive feedback on the temperature rise and terminated the last ice age.

Perhaps CO2 increased at the same time as Antarctic temperature (see the brief section on Parrenin et al 2013 in Part Eleven), therefore raising questions about where the cause and effect lies.

To make sense of climate we need to understand:

a) why previous higher insolation in the high latitudes didn’t set off the same chain of events
b) why previous temperature changes in Antarctica didn’t set off the same chain of events
c) whether the temperature changes produced in simulation ORB can account for enough ice melt to drive the MOC changes (and what feedback effect that has)

And of course, we need to understand why CO2 increased so sharply at the end of the last ice age.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


Northern Hemisphere forcing of Southern Hemisphere climate during the last deglaciation, He, Shakun, Clark, Carlson, Liu, Otto-Bliesner & Kutzbach, Nature (2013) – free paper (there is considerable supplementary information probably only available on the Nature website)


In Part Nine we looked at a GCM simulation over the last 120,000 years – quite an ambitious project – which had some mixed results. The biggest challenge is simply running a full GCM over such a long time frame. To do this, the model had reduced spatial resolution and all the forcings were “speeded up”, so that the model actually ran for 1,200 years.

The forcings included ice sheet size/location/height, as well as GHGs in the atmosphere. In reality these are feedbacks, but GCMs are not currently able to produce them.

In this article we will look at one of the latest GCMs, run over a “snapshot” period of about 700 years. This allows full spatial resolution, but has the downside of not covering anything like a full glacial cycle. The aim here is to run the model with the orbital conditions of 115 kyrs BP to see if perennial snow cover forms in the right locations. This is a similar project to what we covered with early GCMs in Part Seven – GCM I and work from around a decade ago in Part Eight – GCM II.

The paper has some very interesting results on the feedbacks which we will take a look at.

Jochum et al (2012)

The problem:

Models of intermediate complexity.. and flux-corrected GCMs have typically been able to simulate a connection between orbital forcing, temperature, and snow volume. So far, however, fully coupled, non-flux-corrected primitive equation general circulation models (GCMs) have failed to reproduce glacial inception, the cooling and increase in snow and ice cover that leads from the warm interglacials to the cold glacial periods.

Milankovitch (1941) postulated that the driver for this cooling is the orbitally induced reduction in Northern Hemisphere summertime insolation and the subsequent increase of perennial snow cover. The increased perennial snow cover and its positive albedo feedback are, of course, only precursors to ice sheet growth. The GCMs’ failure to recreate glacial inception indicates a failure of either the GCMs or of Milankovitch’s hypothesis.

Of course, if the hypothesis would be the culprit, one would have to wonder if climate is sufficiently understood to assemble a GCM in the first place. Either way, it appears that reproducing the observed glacial–interglacial changes in ice volume and temperature represents a good test bed for evaluating the fidelity of some key model feedbacks relevant to climate projections.

The potential causes for GCMs failing to reproduce inception are plentiful, ranging from numerics on the GCMs side to neglected feedbacks of land, atmosphere, or ocean processes on the theory side. It is encouraging, though, that for some GCMs it takes only small modifications to produce an increase in perennial snow cover (e.g., Dong and Valdes 1995). Nevertheless, the goal for the GCM community has to be the recreation of increased perennial snow cover with a GCM that has been tuned to the present-day climate, and is subjected to changes in orbital forcing only.

Their model:

The numerical experiments are performed using the latest version of the National Center for Atmospheric Research (NCAR) CCSM4, which consists of the fully coupled atmosphere, ocean, land, and sea ice models..

CCSM4 is a state-of-the-art climate model that has improved in many aspects from its predecessor CCSM3. For the present context, the most important improvement is the increased atmospheric resolution, because it allows for a more accurate representation of altitude and therefore land snow cover.

See Note 1 for some more model specifics from the paper. And a long time ago we looked at some basics of CCSM3 – Models, On – and Off – the Catwalk – Part Two.

Limitations of the model – no ice sheet module (as with the FAMOUS model in Part Nine):

The CCSM does not yet contain an ice sheet module, so we use snow accumulation as the main metric to evaluate the inception scenario. The snow accumulation on land is computed as the sum of snowfall, frozen rain, snowmelt, and removal of excess snow. Excess snow is defined as snow exceeding 1 m of water equivalent, approximately 3–5 m of snow.

This excess snow removal is a very crude parameterization of iceberg calving, and together with the meltwater the excess snow is delivered to the river network, and eventually added to the coastal surface waters of the adjacent ocean grid cells. Thus, the local ice sheet volume and the global fresh- water volume are conserved.
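A minimal sketch of that snow bookkeeping, assuming a simple per-step update (the 1 m water-equivalent cap is from the quoted description; the function name and numbers are otherwise illustrative):

```python
def update_snow(swe, snowfall, frozen_rain, snowmelt, cap=1.0):
    """Advance snow water equivalent (m w.e.) by one step.

    Anything above `cap` (1 m w.e., roughly 3-5 m of snow) is "excess
    snow" -- a crude stand-in for iceberg calving -- and is routed to
    the river network along with meltwater, conserving fresh water.
    """
    swe = max(swe + snowfall + frozen_rain - snowmelt, 0.0)
    excess = max(swe - cap, 0.0)
    return swe - excess, excess

swe, excess = update_snow(swe=0.95, snowfall=0.10, frozen_rain=0.0, snowmelt=0.02)
print(swe, excess)   # 0.95 + 0.10 - 0.02 = 1.03 -> capped at 1.0, 0.03 excess
```

The cap matters for interpreting the results: with no ice sheet module, “accumulation” in this model is really the flux through the cap, which is why snow accumulation rather than ice volume is the inception metric.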

Problems of the model:

Another bias relevant for the present discussion is the temperature bias of the northern high-latitude land. As discussed in the next section, much of the CCSM4 response to orbital forcing is due to reduced summer melt of snow. A cold bias in the control will make it more likely to keep the summer temperature below freezing, and will overestimate the model’s snow accumulation. In the annual mean, northern Siberia and northern Canada are too cold by about 1°C–2°C, and Baffin Island by about 5°C (Gent et al. 2011). The Siberian biases are not so dramatic, but it is quite unfortunate that Baffin Island, the nucleus of the Laurentide ice sheet, has one of the worst temperature biases in CCSM4. A closer look at the temperature biases in North America, though, reveals that the cold bias is dominated by the fall and winter biases, whereas during spring and summer Baffin Island is too cold by approximately 3°C, and the Canadian Archipelago even shows a weak warm bias.

[Emphasis added, likewise with all bold text in quotes].

Their plan:

The subsequent sections will analyze and compare two different simulations: an 1850 control (CONT), in which the earth’s orbital parameters are set to the 1990 values and the atmospheric composition is fixed at its 1850 values; and a simulation identical to CONT, with the exception of the orbital parameters, which are set to the values of 115 kya (OP115). The atmospheric CO2 concentration in both experiments is 285 ppm.

The models were run for about 700 (simulated) years. They give some interesting metrics on why they can’t run a 120 kyr simulation:

This experimental setup is not optimal, of course. Ideally one would like to integrate the model from the last interglacial, approximately 126 kya ago, for 10 000 years into the glacial with slowly changing orbital forcing. However, this is not affordable; a 100-yr integration of CCSM on the NCAR supercomputers takes approximately 1 month and a substantial fraction of the climate group’s computing allocation.


First of all, they do produce perennial snow cover at high latitudes.

The paper has a very good explanation of how the different climate factors go together in the high latitudes where we are looking to get perennial snow cover. It helps us see why doing stuff in your head, using basic energy balance models, and even running models of intermediate complexity (EMICs) cannot (with confidence) produce useful answers.

Let’s take a look.

Jochum et al 2012


Figure 1

This graph is comparing the annual solar radiation by latitude between 115 kyrs ago and today.

Incoming solar radiation (black curve) – notice the basic point that at 115 kyrs ago the tropics have higher annual insolation while the high latitudes have lower annual insolation.

Our focus will be on the Northern Hemisphere north of 60ºN, which covers the areas of large cooling and increased snow cover. Compared to CONT [control], the annual average of the incoming radiation over this Arctic domain is smaller in OP115 by 4.3 W/m² (black line), but the large albedo reduces this difference at the TOA to only 1.9 W/m² (blue line, see also Table 1).

Blue shows the result when we take into account existing albedo – that is, because a lot of solar radiation is already reflected away in high latitudes, any changes in incoming radiation are reduced by the albedo effect (before albedo itself changes).

Green shows the result when we take into account changed albedo with the increased snow cover found in the 115 kyr simulation.

In CCSM4 this larger albedo in OP115 leads to a TOA clear-sky shortwave radiation that is 8.6 W/m² smaller than in CONT —more than 4 times the original signal.

The snow/ice–albedo feedback is then calculated as 6.7 W/m² (8.6–1.9 W/m²). Interestingly, the low cloud cover is smaller in OP115 than in CONT, reducing the difference in total TOA shortwave radiation by 3.1 to 5.5 W/m² (green line). Summing up, an initial forcing of 1.9 W/m² north of 60ºN, is amplified through the snow–ice–albedo feedback by 6.7 W/m², and damped through a negative cloud feedback by 3.1 W/m².

The summary table:


Because of the larger meridional temperature (Fig. 1a) and moisture gradient (Fig. 4a), the lateral atmospheric heat flux into the Arctic is increased from 2.88 to 3.00 PW. This 0.12 PW difference translates into an Arctic average of 3.1 W/m²; this is a negative feedback as large as the cloud feedback, and 6 times as large as the increase in the ocean meridional heat transport at 60ºN (next section).

Thus, the negative feedback of the clouds and the meridional heat transport almost compensate for the positive albedo feedback, leading to a total feedback of only 0.5 W/m². One way to look at these feedbacks is that the climate system is quite stable, with clouds and meridional transports limiting the impact of albedo changes. This may explain why some numerical models have difficulties creating the observed cooling associated with the orbital forcing.
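The bookkeeping in the quoted passages is easy to lose track of, so here is the budget in a few lines of Python (all numbers are from the paper; the sign convention, negative = cooling the Arctic, and the variable names are mine):

```python
# TOA shortwave budget north of 60N, OP115 minus CONT (W/m^2), from Jochum et al. 2012.
# Negative values cool the Arctic; this is just bookkeeping, not a model.
incoming = -4.3            # change in incoming solar radiation
initial_forcing = -1.9     # after attenuation by the existing (unchanged) albedo
albedo_feedback = -6.7     # snow/ice-albedo feedback: -8.6 clear-sky minus the -1.9 forcing
cloud_feedback = +3.1      # fewer low clouds admit more sunlight, opposing the cooling
heat_transport = +3.1      # increased lateral atmospheric heat flux into the Arctic

# Existing albedo implied by the attenuation of the incoming anomaly
implied_albedo = 1 - initial_forcing / incoming
print(round(implied_albedo, 2))  # ~0.56

# Net feedback: albedo amplification almost cancelled by clouds + heat transport
net_feedback = albedo_feedback + cloud_feedback + heat_transport
print(round(net_feedback, 1))    # -0.5
```

The implied albedo of ~0.56 is consistent with the paper's point that high-latitude surfaces already reflect much of the incoming radiation, so a change in insolation is strongly attenuated before any feedbacks operate.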

I think it’s important to note that they get their result through a different mechanism from one of the papers we reviewed in Part Nine:

Thus, in contrast to the results of Vettoretti and Peltier (2003) the increase in snowfall is negligible compared to the reduction in snowmelt.

Their result:

The global net difference in melting and snowfall between OP115 and CONT leads to an implied snow accumulation that is equivalent to a sea level drop of 20 m in 10,000 years, some of it being due to the Baffin Island cold bias. This is less than the 50-m estimate based on sea level reconstructions between present day and 115 kya, but nonetheless it suggests that the model response is of the right magnitude.
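As a quick check on "right magnitude", here is my own arithmetic on the quoted figures (not a calculation from the paper):

```python
# Implied sea-level drop, model vs reconstruction (figures quoted in the paper)
model_drop_m, model_years = 20.0, 10_000  # implied snow accumulation, OP115 minus CONT
recon_drop_m = 50.0                        # sea-level reconstructions, present day to 115 kya

model_rate = model_drop_m / model_years * 1000  # mm of sea-level equivalent per year
print(round(model_rate, 1))                     # 2.0 mm/yr
print(model_drop_m / recon_drop_m)              # model captures ~40% of the reconstruction
```

So the model implies roughly 2 mm/yr of sea-level-equivalent accumulation, around 40% of the reconstructed drop – the same order of magnitude, as the authors say.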

Atlantic Meridional Overturning Circulation (AMOC)

This current has a big impact on the higher latitudes of the Atlantic because it brings warmer water from the tropics.

The meridional heat transport of the AMOC is a major source of heat for the northern North Atlantic Ocean, but it is also believed to be susceptible to small perturbations.

This raises the possibility that the AMOC amplifies the orbital forcing, or even that this amplification is necessary for the Northern Hemisphere glaciations and terminations. In fact, JPML demonstrates that at least in one GCM changes in orbital forcing can lead to a weakening of the MOC and a subsequent large Northern Hemisphere cooling. Here, we revisit the connection between orbital forcing and AMOC strength with the CCSM4, which features improved physics and higher spatial resolution compared to JPML.

In essence they found a limited change in the AMOC in this study. Interested readers can review the free paper. This is an important result because earlier studies with lower resolution models or GCMs that are not fully coupled have often found a strong role for the MOC in amplifying changes.


This is an interesting paper, important because it uses a full resolution state-of-the-art GCM to simulate perennial snow cover at 115 kyrs BP, simply with pre-industrial GHG concentrations and insolation from 115 kyrs BP.

The model has a cold bias (and an increased moisture bias) in high latitude NH regions and this raises questions on the significance of the result (to my skeptical mind):

  • Can a high resolution AOGCM with no high latitude cold bias reproduce perennial snow cover with just pre-industrial GHG concentration and orbital forcing from 115 kyrs ago?
  • Can this model, with its high latitude cold bias, reproduce a glacial termination?

That doesn’t mean the paper isn’t very valuable and the authors have certainly not tried to gloss over the shortcomings of the model – in fact, they have highlighted them.

What the paper also reveals – in conjunction with what we have seen from earlier articles – is that as we move through generations and complexities of models we can get success, then a better model produces failure, then a better model again produces success. Also we noted that whereas the 2003 model (also cold-biased) of Vettoretti & Peltier found perennial snow cover through increased moisture transport into the critical region (which they describe as an “atmospheric–cryospheric feedback mechanism”), this more recent study with a better model found no increase in moisture transport.

The details of how different models achieve the same result are important. I don’t think any climate scientist would disagree, but it means that multiple papers with “success” may not equate to “success for all” and may not equate to “general success”. The details need to be investigated.

This 2012 paper also demonstrates the importance of all of the (currently known) feedbacks – increased albedo from increased snow cover is almost wiped out by negative feedbacks.

Lastly, the paper also points out that their model, run over 700 years, fails to produce significant cooling of the Southern Polar region:

More importantly, though, the lack of any significant Southern Hemisphere polar response needs explaining (Fig. 1). While Petit et al. (1999) suggests that Antarctica cooled by about 10ºC during the last inception, the more recent high-resolution analysis by Jouzel et al. (2007) suggest that it was only slightly cooler than today (less than 3ºC at the European Project for Ice Coring in Antarctica (EPICA) Dome C site on the Antarctic Plateau). Of course, there are substantial uncertainties in reconstructing Antarctic temperatures..

I don’t have any comment on this particular point, lacking much understanding of recent work in dating and correlating EPICA (Antarctic ice core) with Greenland ice cores.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


True to Milankovitch: Glacial Inception in the New Community Climate System Model, Jochum, Jahn, Peacock, Bailey, Fasullo, Kay, Levis & Otto-Bliesner, Journal of Climate (2012) – free paper


Note 1 – more on the model:

The ocean component has a horizontal resolution that is constant at 1.125º in longitude and varies from 0.27º at the equator to approximately 0.7º in the high latitudes. In the vertical there are 60 depth levels; the uppermost layer has a thickness of 10 m and the deepest layer has a thickness of 250 m. The atmospheric component uses a horizontal resolution of 0.9º x 1.25º with 26 levels in the vertical. The sea ice model shares the same horizontal grid as the ocean model and the land model is on the same horizontal grid as the atmospheric model.


In Part Seven we looked at some early GCM work – late 80’s to mid 90’s. In Part Eight we looked at some papers from the “Noughties” – atmospheric GCMs with prescribed ocean temperatures and some intermediate complexity models.

All of these papers were attempting the most fundamental step of ice age inception – perennial snow cover at high latitudes. Perennial snow cover may lead to permanent ice sheets – but it may not. This requires an ice sheet model which handles the complexities of how ice sheets grow, collapse, slide and transfer heat.

Given the computational limitations of models, even running a model to produce (or not) the basics of perennial snow cover has not been a trivial exercise, and a full atmosphere–ocean GCM with an ice sheet model run for 130,000 years was not a possibility.

In this article we will look at a very recent paper, where fully coupled GCMs are used. “Fully coupled” means an atmospheric model and an ocean model working in tandem – transferring heat, moisture and momentum.

Smith & Gregory (2012)

The problem:

It is generally accepted that the timing of glacials is linked to variations in solar insolation that result from the Earth’s orbit around the sun (Hays et al. 1976; Huybers and Wunsch 2005). These solar radiative anomalies must have been amplified by feedback processes within the climate system, including changes in atmospheric greenhouse gas (GHG) concentrations (Archer et al. 2000) and ice-sheet growth (Clark et al. 1999), and whilst hypotheses abound as to the details of these feedbacks, none is without its detractors and we cannot yet claim to know how the Earth system produced the climate we see recorded in numerous proxy records. This is of more than purely intellectual interest: a full understanding of the carbon cycle during a glacial cycle, or the details of how regional sea-level changed as the ice-sheets waxed and waned would be of great use in accurately predicting the future climatic effects of anthropogenic CO2 emissions, as we might expect many of the same fundamental feedbacks to be at play in both scenarios..

..The multi-millennial timescales involved in modelling even a single glacial cycle present an enormous challenge to comprehensive Earth system models based on coupled atmosphere–ocean general circulation models (AOGCMs). Due to the computational expense involved, AOGCMs are usually limited to runs of a few hundred years at most, and their use in paleoclimate studies has generally been through short, ‘‘snapshot’’ runs of specific periods of interest.

Transient simulations of glacial cycles have hitherto only been run with models where important climate processes such as clouds or atmospheric moisture transports are more crudely parameterised than in an AOGCM or omitted entirely. The heavy restrictions on the feedbacks involved in such models limit what we can learn of the evolution of the climate from them, particularly in paleoclimate states that may be significantly different from the better-known modern climates which the models are formulated to reproduce. Simulating past climate states in AOGCMs and comparing the results to climate reconstructions based on proxies also allows us to test the models’ sensitivities to climate forcings and build confidence in their predictions of future climate.

[Emphasis added. And likewise for all bold text in future citations].

Their model:

For these simulations we use FAMOUS (FAst Met. Office and UK universities Simulator), a low resolution version of the Hadley Centre Coupled Model (HadCM3) AOGCM. FAMOUS has approximately half the spatial resolution of HadCM3, which reduces the computational cost of the model by a factor of 10.

[For more on the model, see note 1]

Their plan:

Here we present the first AOGCM transient simulations of the whole of the last glacial cycle. We have reduced the computational expense of these simulations by using FAMOUS, an AOGCM with a relatively low spatial resolution, and by accelerating the boundary conditions that we apply by a factor of ten, such that the 120,000 year cycle occurs in 12,000 years. We investigate how the influences of orbital variations in solar irradiance, GHGs and northern hemisphere ice-sheets combine to affect the evolution of the climate.

There is a problem with the speeding up process – the oceans respond on completely different timescales from the atmosphere. Some ocean processes take place over thousands of years, so whether or not the acceleration approach produces a real climate is open to discussion.
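Mechanically, the acceleration is just a compression of the forcing timeline, as in this sketch (illustrative only – the function and variable names are mine, not from FAMOUS):

```python
# Boundary-condition acceleration: 120 kyr of forcing applied over 12 kyr of model time.
ACCEL = 10

def forcing_year(model_year, start_yr_bp=120_000):
    """Map a model year to the real calendar year BP whose boundary conditions are applied."""
    return start_yr_bp - model_year * ACCEL

print(forcing_year(0))       # 120000 yr BP: start of the glacial cycle
print(forcing_year(12_000))  # 0 yr BP: present day, after only 12,000 model years
```

The problem described above is exactly this compression: the deep ocean adjusts over thousands of years, so under a 10× accelerated forcing it may never come into equilibrium with the boundary conditions.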

Their approach:

The aim of this study is to investigate the physical climate of the atmosphere and ocean through the last glacial cycle. Along with changes in solar insolation that result from variations in the Earth’s orbit around the sun, we treat northern hemisphere ice-sheets and changes in the GHG composition of the atmosphere as external forcing factors of the climate system which we specify as boundary conditions, either alone or in combination. Changes in solar activity, Antarctic ice, surface vegetation, or sea- level and meltwater fluxes implied by the evolving ice- sheets are not included in these simulations. Our experimental setup is thus somewhat simplified, with certain potential climate feedbacks excluded. Although partly a matter of necessity due to missing or poorly modelled processes in this version of FAMOUS, this simplification allows us to more clearly see the influence of the specified forcings, as well as ensuring that the simulations stay close to the real climate.

Let’s understand the key points of this modeling exercise:

  1. A full GCM is used, but at reduced spatial resolution
  2. The forcings are speeded up by a factor of 10 over their real life versions
  3. Two of the critical forcings applied are actually feedbacks that need to be specified to make the model work – that is, the model is not able to calculate these critical feedbacks (CO2 concentration and ice sheet extent)
  4. Five different simulations were run to see the effect of different factors:
    • Orbital forcing only applied (ORB)
    • GHG only forcing applied (GHG)
    • Ice sheet extent only applied (ICE)
    • All of the above with 2 different ice sheet reconstructions (ALL-ZH & ALL-5G – note that ALL-ZH has the same ice sheet reconstruction as ICE, while ALL-5G has a different one)
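The five simulations can be laid out as a simple forcing matrix (run names from the paper; the dictionary form and field names are my own shorthand):

```python
# Which boundary-condition forcings each Smith & Gregory (2012) run applies.
# ice = name of the prescribed ice sheet reconstruction, or None if absent.
runs = {
    "ORB":    dict(orbital=True,  ghg=False, ice=None),
    "GHG":    dict(orbital=False, ghg=True,  ice=None),
    "ICE":    dict(orbital=False, ghg=False, ice="ZH"),
    "ALL-ZH": dict(orbital=True,  ghg=True,  ice="ZH"),
    "ALL-5G": dict(orbital=True,  ghg=True,  ice="5G"),
}

# Only the two ALL runs combine every forcing
full = [name for name, f in runs.items() if f["orbital"] and f["ghg"] and f["ice"]]
print(full)  # ['ALL-ZH', 'ALL-5G']
```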

Here are the modeled temperature results compared against actual (Black) for Antarctica and Greenland:

From Smith & Gregory 2012

From Smith & Gregory 2012

Figure 1

Lots of interesting things to note here.

When we look at Antarctica we see that orbital forcing alone and Northern Hemisphere ice sheets alone do little or nothing to model past temperatures. But GHG concentrations by themselves as a forcing provide a modeled temperature that is broadly similar to the last 120 kyrs – apart from higher frequency temperature variations, something we return to later. When we add the NH ice sheets we get an even better match. I’m surprised that the ice sheets don’t have more impact given the amount of solar radiation they reflect.

Both GHGs and ice sheets can be seen as positive feedbacks in reality (although in this model they are specified), and for the southern polar region GHGs have a much bigger effect.

Looking at Greenland, we see that orbital forcing once again has little effect on its own, while GHGs and ice sheets alone have similar effects but individually are a long way off the actual climate. Combining all forcings, we see a reasonable match with actual temperatures for one ice sheet reconstruction and not so great a match for the other. This implies that, for models which attempt to simulate dynamic ice sheets (rather than specifying them), the accuracy of the ice sheets may be critical for modeling success.

We again see that higher frequency temperature variations are not modeled at all well, and even some lower frequency variations – for example the period from 110 kyr to 85 kyr has some important missing variability (in the model).

The authors note:

The EPICA data [Antarctica] shows that, relative to their respective longer term trends, temperature fell more rapidly than CO2 during this period [120 - 110 kyrs], but in our experiments simulated Antarctic temperatures drop in line with CO2. This suggests that there is an important missing feedback in our model, or that our model is perhaps over-sensitive to CO2, and under-sensitive to one of the other forcing factors. Tests of the model where the forcings were not artificially accelerated rule out the possibility of the acceleration being a factor.

Abrupt Climate Change

What about the higher frequency temperature signals? The Greenland data has a much larger magnitude than Antarctica at these frequencies, but neither is really reproduced in the model.

The other striking difference between the model and the NGRIP reconstruction is the model’s lack of the abrupt, millennial scale events of large amplitude in the ice-core data. It is thought that periodic surges of meltwater from the northern hemisphere ice-sheets and subsequent disruption of oceanic heat transports are involved in these events (Bond et al. 1993; Blunier et al. 1998), and the lack of ice-sheet meltwater runoff in our model is probably a large part of the reason why we do not simulate them.

The authors then discuss this a little more as the story is not at all settled and conclude:

Taken together, the lack of both millennial scale warm events in the south and abrupt events in the north strongly imply a missing feedback of some importance in our model.

CO2 Feedback

The processes by which sufficient quantities of carbon are drawn down into the glacial ocean to produce the atmospheric CO2 concentrations seen in ice-core records are not well understood, and have to date not been successfully modelled by a realistic coupled model. FAMOUS, as used in this study, does have a simple marine biogeochemistry model, although it does not respond to the forcings in these simulations in a way that would imply an increased uptake of carbon. A further FAMOUS simulation with interactive atmospheric CO2 did not produce any significant changes in atmospheric CO2 during the early glacial when forced with orbital variations and a growing northern hemisphere ice-sheet.

Accurately modelling a glacial cycle with interactive carbon chemistry requires a significant increase in our understanding of the processes involved, not simply the inclusion of a little extra complexity to the current model.


This is a very interesting paper, highlighting some successes, computational limitations, poorly understood feedbacks and missing feedbacks in climate models.

The fact that 120 kyrs of climate history has been simulated with a full GCM is great to see.

The lack of abrupt climate change in the simulation, the failure to track the fast rate of temperature fall at the start of ice age inception and the lack of ability to model key feedbacks all indicate that climate models – at least as far as the ice ages are concerned – are at a rudimentary stage.

(This doesn’t mean they aren’t hugely sophisticated, it just means climate is a little bit tricky).



The last glacial cycle: transient simulations with an AOGCM, Smith & Gregory, Climate Dynamics (2012)


Note 1: FAMOUS

The ocean component is based on the rigid-lid Cox-Bryan model (Pacanowski et al. 1990), and is run at a resolution of 2.5° latitude by 3.75° longitude, with 20 vertical levels. The atmosphere is based on the primitive equations, with a resolution of 5° latitude by 7.5° longitude with 11 vertical levels (see Table 1).

Version XDBUA of FAMOUS (simply FAMOUS hereafter, see Smith et al. (2008) for full details) has a preindustrial control climate that is reasonably similar to that of HadCM3, although FAMOUS has a high latitude cold bias in the northern hemisphere during winter of about 5°C with respect to HadCM3 (averaged north of 40°N), and a consequent overestimate of winter sea-ice extent in the North Atlantic.

The global climate sensitivity of FAMOUS to increases in atmospheric CO2 is, however, similar to that of HadCM3.

FAMOUS incorporates a number of differences from HadCM3 intended to improve its climate simulation—for example, Iceland has been removed (Jones 2003) to encourage more northward ocean heat transport in the Atlantic. Smith and Gregory (2009) demonstrate that the sensitivity of the Atlantic meridional overturning circulation (AMOC) to perturbations in this version of FAMOUS is in the middle of the range when compared to many other coupled climate models. The model used in this study differs from XDBUA FAMOUS in that two technical bugs in the code have been fixed. Latent and sensible heat fluxes from the ocean were mistakenly interchanged in part of the coupling routine, and snow falling on sea-ice at coastal points was lost from the model. Correction of these errors results in an additional surface cold bias of a degree or so around high latitude coastal areas with respect to XDBUA, but no major changes to the model climatology. In addition, the basic land topography used in these runs was interpolated from the modern values in the ICE-5G dataset (Peltier 2004), which differs somewhat from the US Navy-derived topography used in Smith et al. (2008) and HadCM3.


In Part Seven we looked at a couple of papers from 1989 and 1994 which attempted to use GCMs to “start an ice age”. The evolution of the “climate science in progress” has been:

  1. Finding indications that the timing of ice age inception was linked to redistribution of solar insolation via orbital changes – possibly reduced summer insolation in high latitudes (Hays et al 1976 – discussed in Part Three)
  2. Using simple energy balance models to demonstrate there was some physics behind the plausible ideas (we saw a subset of the plausible ideas in Part Six – Hypotheses Abound)
  3. Using a GCM with the starting conditions of around 115,000 years ago to see if “perennial snow cover” could be achieved at high latitudes that weren’t ice covered in the last inter-glacial – i.e., can we start a new ice age?

Why, if an energy balance model can “work”, i.e., produce perennial snow cover to start a new ice age, do we need to use a more complex model? As Rind and his colleagues said in their 1989 paper:

Various energy balance climate models have been used to assess how much cooling would be associated with changed orbital parameters.. With the proper tuning of parameters, some of which is justified on observational grounds, the models can be made to simulate the gross glacial/interglacial climate changes. However, these models do not calculate from first principles all the various influences on surface air temperature noted above, nor do they contain a hydrologic cycle which would allow snow cover to be generated or increase. The actual processes associated with allowing snow cover to remain through the summer will involve complex hydrologic and thermal influences, for which simple models can only provide gross approximations.

[Emphases added - and likewise in all following quotations, bold is emphasis added]. So interestingly, moving to a more complex model with better physics showed that there was a problem with (climate models) starting an ice age. Still, that was early GCMs with much more limited computing power. In this article we will look at the results a decade or so later.
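To see what Rind et al. mean by tuning, here is a minimal zero-dimensional energy balance model with a crude ice-albedo feedback, in the Budyko–Sellers spirit (entirely my own illustration, not any model from the papers): the snow/ice albedo, threshold temperature and OLR coefficients are all free parameters that can be adjusted to produce glacial-looking coolings, but nothing here resembles a hydrologic cycle.

```python
# Minimal 0-D energy balance model with a step ice-albedo feedback.
# All parameter values are illustrative, chosen only to show the tuning freedom.
S0 = 342.0         # global-mean insolation, W/m^2
A, B = 203.0, 2.1  # linearised outgoing longwave: OLR = A + B*T (T in deg C)
T_ICE = -10.0      # below this temperature the surface is snow/ice covered
ALB_WARM, ALB_COLD = 0.30, 0.60

def albedo(T):
    return ALB_COLD if T < T_ICE else ALB_WARM

def equilibrium_T(insolation, T=15.0):
    # Fixed-point iteration: absorbed shortwave = outgoing longwave
    for _ in range(200):
        T = ((1 - albedo(T)) * insolation - A) / B
    return T

print(round(equilibrium_T(S0), 1))         # 17.3 – a present-day-like climate
print(round(equilibrium_T(0.75 * S0), 1))  # -47.8 – a tuned drop flips to the ice-covered branch
```

The jump to the cold branch is the ice-albedo feedback doing all the work; by nudging the parameters you can place the transition wherever you like, which is exactly why Rind et al. argue that agreement from such models is only a gross approximation.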


We’ll start with a couple of papers that include excellent reviews of “the problem so far”, one in 2002 by Yoshimori and his colleagues and one in 2004 by Vettoretti & Peltier. Yoshimori et al 2002:

One of the fundamental and challenging issues in paleoclimate modelling is the failure to capture the last glacial inception (Rind et al. 1989)..

..Between 118 and 110 kaBP, the sea level records show a rapid drop of 50 – 80 m from the last interglacial, which itself had a sea level only 3 – 5 m higher than today. This sea level lowering, as a reference, is about half of the last glacial maximum. ..As the last glacial inception offers one of few valuable test fields for the validation of climate models, particularly atmospheric general circulation models (AGCMs), many studies regarding this event have been conducted.

Phillipps & Held (1994) and Gallimore & Kutzbach (1995).. conducted a series of sensitivity experiments with respect to orbital parameters by specifying several extreme orbital configurations. These included a case with less obliquity and perihelion during the NH winter, which produces a cooler summer in the NH. Both studies came to a similar conclusion that although a cool summer orbital configuration brings the most favorable conditions for the development of permanent snow and expansion of glaciers, orbital forcing alone cannot account for the permanent snow cover in North America and Europe.

This conclusion was confirmed by Mitchell (1993), Schlesinger & Verbitsky (1996), and Vavrus (1999).. ..Schlesinger & Verbitsky (1996), integrating an ice sheet-asthenosphere model with AGCM output, found that a combination of orbital forcing and greenhouse forcing by reduced CO2 and CH4 was enough to nucleate ice sheets in Europe and North America. However, the simulated global ice volume was only 31% of the estimate derived from proxy records.

..By using a higher resolution model, Dong & Valdes (1995) simulated the growth of perennial snow under combined orbital and CO2 forcing. As well as the resolution of the model, an important difference between their model and others was the use of “envelope orography” [playing around with the height of land].. found that the changes in sea surface temperature due to orbital perturbations played a very important role in initiating the Laurentide and Fennoscandian ice sheets.

And as a note on the last quote, it’s important to understand that these studies were with an Atmospheric GCM, not an Atmospheric Ocean GCM – i.e., a model of the atmosphere with some prescribed sea surface temperatures (these might be from a separate run using a simpler model, or from values determined from proxies). The authors then comment on the potential impact of vegetation:

..The role of the biosphere in glacial inception has been studied by Gallimore & Kutzbach (1996), de Noblet et al. (1996), and Pollard and Thompson (1997).

..Gallimore & Kutzbach integrated an AGCM with a mixed layer ocean model under five different forcings:  1) control; 2) orbital; 3) #2 plus CO2; 4) #3 plus 25% expansion of tundra based on the study of Harrison et al. (1995); and (5) #4 plus further 25% expansion of tundra. The effect of the expansion of tundra through a vegetation-snow masking feedback was approximated by increasing the snow cover fraction. In only the last case was perennial snow cover seen..

..Pollard and Thompson (1997) also conducted an interactive vegetation and AGCM experiment under both orbital and CO2 forcing. They further integrated a dynamic ice-sheet model for 10 ka under the surface mass balance calculated from AGCM output using a multi-layer snow/ice-sheet surface column model on the grid of the dynamical ice-sheet model including the effect of refreezing of rain and meltwater. Although their model predicted the growth of an ice sheet over Baffin Island and the Canadian Archipelago, it also predicted a much faster growth rate in north western Canada and southern Alaska, and no nucleation was seen on Keewatin or Labrador [i.e. the wrong places]. Furthermore, the rate of increase of ice volume over North America was an order of magnitude less than that estimated from proxy records.

They conclude:

It is difficult to synthesise the results of these earlier studies since each model used different parameterisations of unresolved physical processes, resolution, and had different control climates as well as experimental design.

They summarize that results to date indicate that neither orbital forcing alone nor CO2 alone can explain glacial inception, and the combined effects are not consistent across models. The difficulty appears to relate to the resolution of the model or to feedback from the biosphere (vegetation).

A couple of years later Vettoretti & Peltier (2004) provided a good review at the start of their paper.

Initial attempts to gain deeper understanding of the nature of the glacial–interglacial cycles involved studies based upon the use of simple energy balance models (EBMs), which have been directed towards the simulation of perennial snow cover under the influence of appropriately modified orbital forcing (e.g. Suarez and Held, 1979).

Analyses have since evolved such that the models of the climate system currently employed include explicit coupling of ice sheets to the EBM or to more complete AGCM models of the atmosphere.

The most recently developed models of the complete 100 kyr iceage cycle have evolved to the point where three model components have been interlinked, respectively, an EBM of the atmosphere that includes the influence of ice-albedo feedback including both land ice and sea ice, a model of global glaciology in which ice sheets are forced to grow and decay in response to meteorologically mediated changes in mass balance, and a model of glacial isostatic adjustment, through which process the surface elevation of the ice sheet may be depressed or elevated depending upon whether accumulation or ablation is dominant..

..Such models have also been employed to investigate the key role that variations in atmospheric carbon dioxide play in the 100 kyr cycle, especially in the transition out of the glacial state (Tarasov and Peltier, 1997; Shackleton, 2000). Since such models are rather efficient in terms of the computer resources required to integrate them, they are able to simulate the large number of glacial–interglacial cycles required to understand model sensitivities.

There has also been a movement within the modelling community towards the use of models that are currently referred to as earth models of intermediate complexity (EMICs) which incorporate sub-components that are of reduced levels of sophistication compared to the same components in modern Global Climate Models (GCMs). These EMICs attempt to include representations of most of the components of the real Earth system including the atmosphere, the oceans, the cryosphere and the biosphere/carbon cycle (e.g. Claussen, 2002). Such models have provided, and will continue to provide, useful insight into long-term climate variability by making it possible to perform a large number of sensitivity studies designed to investigate the role of various feedback mechanisms that result from the interaction between the components that make up the climate system (e.g. Khodri et al., 2003).

Then the authors comment on the same studies and issues covered by Yoshimori et al, and additionally on their own 2003 paper and another study. On their own research:

Vettoretti and Peltier (2003a), more recently, have demonstrated that perennial snow cover is achieved in a recalibrated version of the CCCma AGCM2 solely as a consequence of orbital forcing when the atmospheric CO2 concentration is fixed to the pre-industrial level as constrained by measurements on air bubbles contained in the Vostok ice core (Petit et al., 1999).

This AGCM simulation demonstrated that perennial snow cover develops at high northern latitudes without the necessity of including any feedbacks due to vegetation or other effects. In this work, the process of glacial inception was analysed using three models having three different control climates that were, respectively, the original CCCma cold biased model, a reconfigured model modified so as to be unbiased, and a model that was warm biased with respect to the modern set of observed AMIP2 SSTs.. ..Vettoretti and Peltier (2003b) suggested a number of novel feedback mechanisms to be important for the enhancement of perennial snow cover.

In particular, this work demonstrated that successively colder climates increased moisture transport into glacial inception sensitive regions through increased baroclinic eddy activity at mid- to high latitudes. In order to assess this phenomenon quantitatively, a detailed investigation was conducted of changes in the moisture balance equation under 116 ka BP orbital forcing for the Arctic polar cap. As well as illustrating the action of a ‘‘cryospheric moisture pump’’, the authors also proposed that the zonal asymmetry of the inception process at high latitudes, which has been inferred on the basis of geological observations, is a consequence of zonally heterogeneous increases and decreases of the northwards transport of heat and moisture.

And they go on to discuss other papers with an emphasis on moisture transport poleward. Now we’ll take a look at some work from that period.

Newer GCM work

Yoshimori et al 2002

Their model runs: an AGCM (atmospheric GCM) with 116 kyr BP orbital conditions and (a) present-day SSTs, (b) 116 kyr BP SSTs. A further run used the above conditions plus vegetation changed on the basis of temperature (if the summer temperature is less than -5ºC the vegetation type is changed to tundra). Because running a “fully coupled” GCM (atmosphere and ocean) over a long time period required too much computing resource, a compromise approach was used.

The SSTs were calculated using an intermediate complexity model, with a simple atmospheric model and a full ocean model (including sea ice), run for 2,000 years (oceans have a lot of thermal inertia). The details are described in section 2.1 of their paper. The idea is to obtain SSTs that are consistent between ocean and atmosphere.

The SSTs are then used as boundary conditions for a “proper” atmospheric GCM run over 10 years – this is described in section 2.2 of their paper. The insolation anomaly with respect to the present day is shown below:

Figure 1

They use 240 ppm CO2 for the 116 kyr condition, as “the lowest probable equivalent CO2 level” (combining the radiative forcing of CO2 and CH4). This equates to a reduction of 2.2 W/m² in radiative forcing. The SSTs calculated from the preliminary model are globally 1.1ºC colder for the 116 kyr condition than for the present-day run. This is not due to the insolation anomaly, which just “redistributes” solar energy; it is due to the lower atmospheric CO2 concentration. The 116 kyr SST in the northern North Atlantic is about 6ºC colder, due to the lower insolation value in summer plus a reduction in the MOC (note 1). The results of their work:

  • with modern SSTs, orbital and CO2 values from 116 kyrs – small extension of perennial snow cover
  • with calculated 116 kyr SST, orbital and CO2 values – a large extension in perennial snow cover into Northern Alaska, eastern Canada and some other areas
  • with vegetation changes (tundra) added – further extension of snow cover north of 60º

They comment (and provide graphs) that increased snow cover is partly from reduced snow melt but also from additional snowfall. This is the case even though colder temperatures generally favor less precipitation.
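As a rough consistency check on the 2.2 W/m² figure quoted earlier, the widely used simplified expression for CO2 radiative forcing, ΔF = 5.35 ln(C/C₀), gives about -2.2 W/m² for 240 ppm if the reference equivalent-CO2 concentration is taken as roughly 360 ppm. The reference value is my assumption, since the paper folds CO2 and CH4 into one equivalent concentration:

```python
import math

# Simplified CO2 radiative forcing expression: dF = 5.35 * ln(C / C0).
def co2_forcing(c_ppm, c0_ppm):
    """Radiative forcing change (W/m^2) for equivalent CO2 going from c0_ppm to c_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# 240 ppm equivalent CO2 at 116 kyr BP vs an assumed modern reference of 360 ppm
# (the 360 ppm reference is my illustrative choice, not a value from the paper).
print(round(co2_forcing(240, 360), 1))  # -2.2 (W/m^2)
```

With a pre-industrial reference of 280 ppm instead, the same expression gives only about -0.8 W/m², so the quoted 2.2 W/m² evidently measures the deficit relative to a near-modern equivalent concentration.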

Contrary to the earlier ice age hypothesis, our results suggest that the capturing of glacial inception at 116kaBP requires the use of “cooler” sea surface conditions than those of the present climate. Also, the large impact of vegetation change on climate suggests that the inclusion of vegetation feedback is important for model validation, at least, in this particular period of Earth history.

What we don’t find out is why their model produces perennial snow cover (even without vegetation changes) where earlier attempts failed. What appears unstated is that although the “orbital hypothesis” is “supported” by the paper, the necessary condition is colder sea surface temperatures induced by much lower atmospheric CO2. Without the lower CO2 this model cannot start an ice age. An additional point to note: Vettoretti & Peltier (2004) say this about the above paper:

The meaningfulness of these results, however, remain to be seen as the original CCCma AGCM2 model is cold biased in summer surface temperature at high latitudes and sensitive to the low value of CO2 specified in the simulations.

Vettoretti & Peltier 2003

This is the paper referred to by their 2004 paper.

This simulation demonstrates that entry into glacial conditions at 116 kyr BP requires only the introduction of post-Eemian orbital insolation and standard preindustrial CO2 concentrations

Here are the seasonal and latitudinal variations in top-of-atmosphere (TOA) solar radiation 116 kyrs ago vs today:

From Vettoretti & Peltier 2003


The essence of their model testing was they took an atmospheric GCM coupled to prescribed SSTs – for three different sets of SSTs – with orbital and GHG conditions from 116 kyrs BP and looked to see if perennial snow cover occurred (and where):

The three 116 kyr BP experiments demonstrated that glacial inception was successfully achieved in two of the three simulations performed with this model.

The warm-biased experiment delivered no perennial snow cover in the Arctic region except over central Greenland.

The cold-biased 116 kyr BP experiment had large portions of the Arctic north of 60°N latitude covered in perennial snowfall. Strong regions of accumulation occurred over the Canadian Arctic archipelago and eastern and central Siberia. The accumulation over eastern Siberia appears to be excessive since there is little evidence that eastern Siberia ever entered into a glacial state. The accumulation pattern in this region is likely a result of the excessive precipitation in the modern simulation.

They also comment:

All three simulations are characterized by excessive summer precipitation over the majority of the polar land areas. Likewise, a plot of the annual mean precipitation in this region of the globe (not shown) indicates that the CCCma model is in general wet biased in the Arctic region. It has previously been demonstrated that the CCCma GCMII model also has a hydrological cycle that is more vigorous than is observed (Vettoretti et al. 2000b).

I’m not clear how much the model bias of excessive precipitation also affects their result of snow accumulation in the “right” areas.

In Part II of their paper they dig into the details of the changes in evaporation, precipitation and transport of moisture into the Arctic region.

Crucifix & Loutre 2002

This paper (and the following paper) used an EMIC – an intermediate complexity model – a trade-off with coarser resolution and simpler parameterizations, but consequently much faster run time, allowing many different simulations over much longer time periods than can be done with a GCM. EMICs are also able to couple biosphere, ocean, ice sheets and atmosphere – whereas the GCM runs we saw above used only an atmospheric GCM with some method of prescribing sea surface temperatures.

This study addresses the mechanisms of climatic change in the northern high latitudes during the last interglacial (126–115 kyr BP) using the earth system model of intermediate complexity ‘‘MoBidiC’’.

Two series of sensitivity experiments have been performed to assess (a) the respective roles played by different feedbacks represented in the model and (b) the respective impacts of obliquity and precession..

..MoBidiC includes representations for atmosphere dynamics, ocean dynamics, sea ice and terrestrial vegetation. A total of ten transient experiments are presented here..

..The model simulates important environmental changes at northern high latitudes prior the last glacial inception, i.e.: (a) an annual mean cooling of 5 °C, mainly taking place between 122 and 120 kyr BP; (b) a southward shift of the northern treeline by 14° in latitude; (c) accumulation of perennial snow starting at about 122 kyr BP and (d) gradual appearance of perennial sea ice in the Arctic.

..The response of the boreal vegetation is a serious candidate to amplify significantly the orbital forcing and to trigger a glacial inception. The basic concept is that at a large scale, a snow field presents a much higher albedo over grass or tundra (about 0.8) than in forest (about 0.4).

..It must be noted that planetary albedo is also determined by the reflectance of the atmosphere and, in particular, cloud cover. However, clouds being prescribed in MoBidiC, surface albedo is definitely the main driver of planetary albedo changes.

In their summary:

At high latitudes, MoBidiC simulates an annual mean cooling of 5 °C over the continents and a decrease of 0.3 °C in SSTs.

This cooling is mainly related to a decrease in the shortwave balance at the top-of-the-atmosphere by 18 W/m², partly compensated for by an increase by 15 W/m² in the atmospheric meridional heat transport divergence.

These changes are primarily induced by the astronomical forcing but are almost quadrupled by sea ice, snow and vegetation albedo feedbacks. The efficiency of these feedbacks is enhanced by the synergies that take place between them. The most critical synergy involves snow and vegetation and leads to settling of perennial snow north of 60°N starting 122 kyr BP. The temperature-albedo feedback is also responsible for an acceleration of the cooling trend between 122 and 120 kyr BP. This acceleration is only simulated north of 60° and is absent at lower latitudes.

See note 2 for details on the model. This model has a cold bias of up to 5°C in the winter high latitudes.

Calov et al 2005

We study the mechanisms of glacial inception by using the Earth system model of intermediate complexity, CLIMBER-2, which encompasses dynamic modules of the atmosphere, ocean, biosphere and ice sheets. Ice-sheet dynamics are described by the three-dimensional polythermal ice-sheet model SICOPOLIS. We have performed transient experiments starting at the Eemian interglacial, at 126 ky BP (126,000 years before present). The model runs for 26 kyr with time-dependent orbital and CO2 forcings.

The model simulates a rapid expansion of the area covered by inland ice in the Northern Hemisphere, predominantly over Northern America, starting at about 117 kyr BP. During the next 7 kyr, the ice volume grows gradually in the model at a rate which corresponds to a change in sea level of 10 m per millennium.

We have shown that the simulated glacial inception represents a bifurcation transition in the climate system from an interglacial to a glacial state caused by the strong snow-albedo feedback. This transition occurs when summer insolation at high latitudes of the Northern Hemisphere drops below a threshold value, which is only slightly lower than modern summer insolation.

By performing long-term equilibrium runs, we find that for the present-day orbital parameters at least two different equilibrium states of the climate system exist—the glacial and the interglacial; however, for the low summer insolation corresponding to 115 kyr BP we find only one, glacial, equilibrium state, while for the high summer insolation corresponding to 126 kyr BP only an interglacial state exists in the model.
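For scale, the growth rate quoted above – 10 m of sea-level equivalent per millennium – can be converted into an ice-volume rate. The ocean area and density figures below are standard round numbers of my choosing, not values from the paper:

```python
# Convert the quoted ice-sheet growth rate (10 m of sea-level equivalent per
# millennium) into an ice-volume rate. Ocean area and densities are standard
# round figures, not values from the paper.
OCEAN_AREA_M2 = 3.61e14   # global ocean area, m^2
RHO_WATER = 1000.0        # kg/m^3
RHO_ICE = 917.0           # kg/m^3

sea_level_rate_m_per_kyr = 10.0
water_volume_per_kyr = sea_level_rate_m_per_kyr * OCEAN_AREA_M2  # m^3 of water
ice_volume_per_kyr = water_volume_per_kyr * RHO_WATER / RHO_ICE  # m^3 of ice

print(f"{ice_volume_per_kyr:.2e} m^3 of ice per millennium")  # ~3.9e15
```

So the model is stacking roughly four million cubic kilometres of ice onto the continents every thousand years once inception is under way.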

We can get some sense of the simplification of the EMIC from the resolution:

The atmosphere, land-surface and terrestrial vegetation models employ the same grid with latitudinal resolution of 10° and longitudinal resolution of approximately 51°

Their ice sheet model has much more detail, with about 500 “cells” of the ice sheet fitting into 1 cell of the land surface model.

They also comment on the general problems (so far) with climate models trying to produce ice ages:

We speculate that the failure of some climate models to successfully simulate a glacial inception is due to their coarse spatial resolution or climate biases, that could shift their threshold values for the summer insolation, corresponding to the transition from interglacial to glacial climate state, beyond the realistic range of orbital parameters.

Another important factor determining the threshold value of the bifurcation transition is the albedo of snow.

In our model, a reduction of averaged snow albedo by only 10% prevents the rapid onset of glaciation on the Northern Hemisphere under any orbital configuration that occurred during the Quaternary. It is worth noting that the albedo of snow is parameterised in a rather crude way in many climate models, and might be underestimated. Moreover, as the albedo of snow strongly depends on temperature, the under-representation of high elevation areas in a coarse-scale climate model may additionally weaken the snow–albedo feedback.
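The bifurcation and threshold behaviour described in these excerpts can be illustrated with a toy zero-dimensional energy balance model with a temperature-dependent albedo – a much-simplified caricature of what CLIMBER-2 does, with all parameter values chosen by me purely for illustration:

```python
# Toy zero-dimensional energy-balance model with a snow/ice-albedo feedback,
# illustrating the bifurcation idea: multiple equilibria (cold, unstable, warm)
# at high insolation, but only a cold state once insolation drops below a
# threshold. All numbers are illustrative, not taken from CLIMBER-2.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
EPS = 0.58        # effective emissivity (tuning parameter, assumed)

def albedo(t):
    """Planetary albedo: high when cold (snow/ice covered), low when warm."""
    if t <= 260.0:
        return 0.7
    if t >= 290.0:
        return 0.3
    return 0.7 - 0.4 * (t - 260.0) / 30.0   # linear ramp in between

def imbalance(t, solar):
    """Net flux: absorbed shortwave minus emitted longwave (W/m^2)."""
    return (solar / 4.0) * (1.0 - albedo(t)) - EPS * SIGMA * t**4

def count_equilibria(solar):
    """Count sign changes of the flux imbalance over a temperature scan."""
    temps = [200.0 + 0.25 * i for i in range(481)]   # 200..320 K
    vals = [imbalance(t, solar) for t in temps]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

print(count_equilibria(1368.0))  # 3 equilibria: cold, unstable, warm
print(count_equilibria(1300.0))  # 1 equilibrium: only the cold state survives
```

Lowering the solar constant here stands in for the drop in high-latitude summer insolation: once it falls past the threshold, the warm (interglacial) equilibrium simply ceases to exist, which is the essence of the bifurcation Calov et al describe.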


So in this article we have reviewed a few papers from a decade or so ago that turned the earlier problems (see Part Seven) into apparent (preliminary) successes.

We have seen two papers using models of “intermediate complexity” and coarse spatial resolution that simulated the beginnings of the last ice age. And we have seen two papers which used atmospheric GCMs linked to prescribed ocean conditions that simulated perennial snow cover in critical regions 116 kyrs ago.

Definitely some progress.

But remember that the early energy balance models had concluded that perennial snow cover could occur due to the reduction in high-latitude summer insolation – support for the “Milankovitch” hypothesis. Then the much improved – but still rudimentary – models of Rind et al 1989 and Phillipps & Held 1994 found that, with better physics and better resolution, they were unable to reproduce this result. And many later models likewise.

We’ve yet to review a fully coupled GCM (atmosphere and ocean) attempting to produce the start of an ice age. In the next article we will take a look at a number of very recent papers, including Jochum et al (2012):

So far, however, fully coupled, nonflux-corrected primitive equation general circulation models (GCMs) have failed to reproduce glacial inception, the cooling and increase in snow and ice cover that leads from the warm interglacials to the cold glacial periods..

..The GCMs’ failure to recreate glacial inception [see Otieno and Bromwich (2009) for a summary] indicates a failure of either the GCMs or of Milankovitch’s hypothesis. Of course, if the hypothesis would be the culprit, one would have to wonder if climate is sufficiently understood to assemble a GCM in the first place.

We will also see that the strength of feedback mechanisms that contribute to perennial snow cover varies significantly for different papers.

And one of the biggest problems still being run into is the computing power necessary. From Jochum (2012) again:

This experimental setup is not optimal, of course. Ideally one would like to integrate the model from the last interglacial, approximately 126 kya ago, for 10,000 years into the glacial with slowly changing orbital forcing. However, this is not affordable; a 100-yr integration of CCSM on the NCAR supercomputers takes approximately 1 month and a substantial fraction of the climate group’s computing allocation.

More on this fascinating topic very soon.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factors of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


On the causes of glacial inception at 116 kaBP, Yoshimori, Reader, Weaver & McFarlane, Climate Dynamics (2002) – paywall paper – free paper

Sensitivity of glacial inception to orbital and greenhouse gas climate forcing, Vettoretti & Peltier, Quaternary Science Reviews (2004) – paywall paper

Post-Eemian glacial inception. Part I: the impact of summer seasonal temperature bias, Vettoretti & Peltier, Journal of Climate (2003) – free paper

Post-Eemian Glacial Inception. Part II: Elements of a Cryospheric Moisture Pump, Vettoretti & Peltier, Journal of Climate (2003)

Transient simulations over the last interglacial period (126–115 kyr BP): feedback and forcing analysis, Crucifix & Loutre 2002, Climate Dynamics (2002) – paywall paper with first 2 pages viewable for free

Transient simulation of the last glacial inception. Part I: glacial inception as a bifurcation in the climate system, Calov, Ganopolski, Claussen, Petoukhov & Greve, Climate Dynamics (2005) – paywall paper with first 2 pages viewable for free

True to Milankovitch: Glacial Inception in the New Community Climate System Model, Jochum et al, Journal of Climate (2012) – free paper


1. MOC = meridional overturning circulation. The MOC is the “Atlantic heat conveyor belt”, where cold salty water in the polar region of the Atlantic sinks rapidly and forms a circulation which pulls (warmer) surface equatorial waters towards the poles.

2. Some specifics on MoBidiC from the paper to give some idea of the compromises:

MoBidiC links a zonally averaged atmosphere to a sectorial representation of the surface, i.e. each zonal band (5° in latitude) is divided into different sectors representing the main continents (Eurasia–Africa and America) and oceans (Atlantic, Pacific and Indian). Each continental sector can be partly covered by snow and similarly, each oceanic sector can be partly covered by sea ice (with possibly a covering snow layer). The atmospheric component has been described by Gallée et al. (1991), with some improvements given in Crucifix et al. (2001). It is based on a zonally averaged quasi-geostrophic formalism with two layers in the vertical and 5° resolution in latitude. The radiative transfer is computed by dividing the atmosphere into up to 15 layers.

The ocean component is based on the sectorially averaged form of the multi-level, primitive equation ocean model of Bryan (1969). This model is extensively described in Hovine and Fichefet (1994) except for some minor modifications detailed in Crucifix et al. (2001). A simple thermodynamic–dynamic sea-ice component is coupled to the ocean model. It is based on the 0-layer thermodynamic model of Semtner (1976), with modifications introduced by Harvey (1988a, 1992). A one-dimensional meridional advection scheme is used with ice velocities prescribed as in Harvey (1988a). Finally, MoBidiC includes the dynamical vegetation model VECODE developed by Brovkin et al. (1997). It is based on a continuous bioclimatic classification which describes vegetation as a composition of simple plant functional types (trees and grass). Equilibrium tree and grass fractions are parameterised as a function of climate expressed as the GDD0 index and annual precipitation. The GDD0 (growing degree days above 0) index is defined as the cumulative sum of the continental temperature for all days during which the mean temperature, expressed in degrees, is positive.
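The GDD0 index described above is straightforward to compute from a daily temperature series; a minimal sketch (the example data are invented):

```python
def gdd0(daily_mean_temps_c):
    """Growing degree days above 0: sum of daily mean temperatures (deg C)
    over all days on which the mean temperature is positive."""
    return sum(t for t in daily_mean_temps_c if t > 0)

# Toy example: a week of daily means straddling the freezing point.
week = [-3.0, -1.0, 0.5, 2.0, 4.5, 1.0, -0.5]
print(gdd0(week))  # 8.0
```

Summed over a full year, this single number (together with annual precipitation) is all the climate information VECODE uses to set its equilibrium tree and grass fractions.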

MoBidiC’s simulation of the present-day climate has been discussed at length in Crucifix et al. (2002). We recall its main features. The seasonal cycle of sea ice is reasonably reproduced with an Arctic sea-ice area ranging from 5 × 10⁶ km² (summer) to 15 × 10⁶ km² (winter), which compares favourably with present-day observations (6.2 × 10⁶ to 13.9 × 10⁶ km², respectively, Gloersen et al. 1992). Nevertheless, sea ice tends to persist too long in spring, and most of its melting occurs between June and August, which is faster than in the observations. In the Atlantic Ocean, North Atlantic Deep Water forms mainly between 45 and 60°N and is exported at a rate of 12.4 Sv to the Southern Ocean. This export rate is compatible with most estimates (e.g. Schmitz 1995). Furthermore, the main water masses of the ocean are well reproduced, with recirculation of Antarctic Bottom Water below the North Atlantic Deep Water and formation of Antarctic Intermediate Water. However no convection occurs in the Atlantic north of 60°N, contrary to the real world. As a consequence, continental high latitudes suffer from a cold bias, up to 5 °C in winter. Finally, the treeline is at around 65°N, which is roughly comparable to zonally averaged observations (e.g. MacDonald et al. 2000), but experiments made with this model to study the Holocene climate revealed its tendency to overestimate the amplitude of the treeline shift in response to the astronomical forcing (Crucifix et al. 2002).
