Archive for the ‘Climate History’ Category

In Part Three we looked at attribution in the early work on this topic by Hegerl et al 1996. I started to write Part Four as the follow up on Attribution as explained in the 5th IPCC report (AR5), but got caught up in the many volumes of AR5.

Instead, for this article I decided to focus on what might seem like an obscure point. I hope readers stay with me, because it is important.

Here is a graphic from chapter 11 of IPCC AR5:

From IPCC AR5 Chapter 11

Figure 1

And in the introduction, chapter 1:

Climate in a narrow sense is usually defined as the average weather, or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period of time ranging from months to thousands or millions of years. The relevant quantities are most often surface variables such as temperature, precipitation and wind.

Classically the period for averaging these variables is 30 years, as defined by the World Meteorological Organization.

Climate in a wider sense also includes not just the mean conditions, but also the associated statistics (frequency, magnitude, persistence, trends, etc.), often combining parameters to describe phenomena such as droughts. Climate change refers to a change in the state of the climate that can be identified (e.g., by using statistical tests) by changes in the mean and/or the variability of its properties, and that persists for an extended period, typically decades or longer.

[Emphasis added].

Weather is an Initial Value Problem, Climate is a Boundary Value Problem

The idea is fundamental, the implementation is problematic.

As explained in Natural Variability and Chaos – Two – Lorenz 1963, there are two key points about a chaotic system:

  1. With even a minute uncertainty in the initial starting condition, the predictability of future states is very limited
  2. Over a long time period the statistics of the system are well-defined

(Being technical, the statistics are well-defined in a transitive system).

So in essence, we can’t predict the exact state of the future – from the current conditions – beyond a certain timescale which might be quite small. In fact, in current weather prediction this time period is about one week.

After a week we might as well say either “the weather on that day will be the same as now” or “the weather on that day will be the climatological average” – and either of these will be better than trying to predict the weather based on the initial state.

No one disagrees on this first point.

In current climate science and meteorology the term used is the skill of the forecast. Skill means not how good the forecast is, but how much better it is than a naive approach like, “it’s July in New York City so the maximum air temperature today will be 28ºC”.

What happens in practice, as can be seen in the simple Lorenz system shown in Part Two, is that a tiny uncertainty about the starting condition gets amplified. Two almost identical starting conditions will diverge rapidly – the “butterfly effect”. Eventually the two trajectories are no more alike than either one of them and a state chosen at random from the future.

The wide divergence doesn’t mean that the future state can be anything. Here’s an example from the simple Lorenz system for three slightly different initial conditions:


Figure 2

We can see that the three conditions that looked identical for the first 20 seconds (see figure 2 in Part Two) have diverged. The values are bounded but at any given time we can’t predict what the value will be.
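As a minimal sketch of this behavior (my own toy code, not the article’s Figure 2), we can integrate the Lorenz 1963 equations with standard parameters for two initial conditions differing by one part in 10⁸, and confirm that they diverge to macroscopic separation while both trajectories stay bounded:

```python
# Lorenz 1963 system, integrated with plain RK4; two initial conditions
# differing by 1e-8 diverge to O(10) separation, yet stay on the attractor.

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    def shift(state, k, h):
        return tuple(v + h * kv for v, kv in zip(state, k))
    k1 = lorenz(s)
    k2 = lorenz(shift(s, k1, dt / 2))
    k3 = lorenz(shift(s, k2, dt / 2))
    k4 = lorenz(shift(s, k3, dt))
    return tuple(v + dt / 6 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(s, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)   # a one-part-in-a-hundred-million nudge
max_sep = 0.0
for _ in range(3000):        # 30 time units at dt = 0.01
    a, b = rk4_step(a, 0.01), rk4_step(b, 0.01)
    sep = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

bounded = all(abs(v) < 100 for v in a + b)
```

The separation grows by many orders of magnitude, but the values never leave the bounded region of the attractor – exactly the combination of unpredictability and boundedness described above.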

On the second point – the statistics of the system, there is a tiny hiccup.

But first let’s review what is agreed upon. Climate is the statistics of weather. Weather is unpredictable more than a week ahead. Climate, as the statistics of weather, might be predictable. That is, just because weather is unpredictable, it doesn’t mean (or prove) that climate is also unpredictable.

This is what we find with simple chaotic systems.

So in the endeavor of climate modeling the best we can hope for is a probabilistic forecast. We have to run “a lot” of simulations and review the statistics of the parameter we are trying to measure.

To give a concrete example, we might determine from model simulations that the mean sea surface temperature in the western Pacific (between a certain latitude and longitude) in July has a mean of 29ºC with a standard deviation of 0.5ºC, while for a certain part of the north Atlantic it is 6ºC with a standard deviation of 3ºC. In the first case the spread of results tells us – if we are confident in our predictions – that we know the western Pacific SST quite accurately, but the north Atlantic SST has a lot of uncertainty. We can’t do anything about the model spread. In the end, the statistics are knowable (in theory), but the actual value on a given day, month or year is not.

Now onto the hiccup.

With “simple” chaotic systems that we can perfectly model (note 1) we don’t know in advance the timescale of “predictable statistics”. We have to run lots of simulations over long time periods until the statistics converge on the same result. If we have parameter uncertainty (see Ensemble Forecasting) this means we also have to run simulations over the spread of parameters.
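A sketch of what “well-defined statistics” means in practice, using the same toy Lorenz 1963 system (my code, standard parameters): long-run time averages computed from two quite different initial conditions converge to essentially the same value, even though the instantaneous states are unpredictable:

```python
# Long-run time average of z in the Lorenz 1963 system, from two different
# initial conditions. The transient is discarded before averaging.

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    def shift(state, k, h):
        return tuple(v + h * kv for v, kv in zip(state, k))
    k1 = lorenz(s)
    k2 = lorenz(shift(s, k1, dt / 2))
    k3 = lorenz(shift(s, k2, dt / 2))
    k4 = lorenz(shift(s, k3, dt))
    return tuple(v + dt / 6 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(s, k1, k2, k3, k4))

def mean_z(s, steps=100_000, discard=2_000, dt=0.01):
    total = 0.0
    for i in range(steps):
        s = rk4_step(s, dt)
        if i >= discard:           # drop the transient before averaging
            total += s[2]
    return total / (steps - discard)

m1 = mean_z((1.0, 1.0, 1.0))
m2 = mean_z((-5.0, 10.0, 40.0))    # a very different starting point
```

The two means agree closely – but note the point of this section: we only know the averaging period was long enough because we ran the simulations and watched the statistics converge. We didn’t know the required timescale in advance.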

Here’s my suggested alternative of the initial value vs boundary value problem:

Suggested replacement for AR5, Box 11.1, Figure 2

Figure 3

So one body made an ad hoc definition of climate as the 30-year average of weather.

If this definition is correct and accepted then “climate” is not a “boundary value problem” at all. Climate is an initial value problem and therefore a massive problem given our ability to forecast only one week ahead.

Suppose, equally reasonably, that the statistics of weather (=climate), given constant forcing (note 2), are predictable over a 10,000 year period.

In that case, with near perfect models, we can be confident about the averages, standard deviations, skews, etc. of the temperature at various locations on the globe over a 10,000 year period.


The fact that chaotic systems exhibit certain behavior doesn’t mean that 30-year statistics of weather can be reliably predicted.

30-year statistics might be just as dependent on the initial state as the weather three weeks from today.


Note 1: The climate system is obviously imperfectly modeled by GCMs, and this will always be the case. The advantage of a simple model is we can state that the model is a perfect representation of the system – it is just a definition for convenience. It allows us to evaluate how slight changes in initial conditions or parameters affect our ability to predict the future.

The IPCC report also has continual reminders that the model is not reality, for example, chapter 11, p. 982:

For the remaining projections in this chapter the spread among the CMIP5 models is used as a simple, but crude, measure of uncertainty. The extent of agreement between the CMIP5 projections provides rough guidance about the likelihood of a particular outcome. But — as partly illustrated by the discussion above — it must be kept firmly in mind that the real world could fall outside of the range spanned by these particular models.

[Emphasis added].

Chapter 1, p.138:

Model spread is often used as a measure of climate response uncertainty, but such a measure is crude as it takes no account of factors such as model quality (Chapter 9) or model independence (e.g., Masson and Knutti, 2011; Pennell and Reichler, 2011), and not all variables of interest are adequately simulated by global climate models..

..Climate varies naturally on nearly all time and space scales, and quantifying precisely the nature of this variability is challenging, and is characterized by considerable uncertainty.

I haven’t yet been able to determine how these firmly noted and challenging uncertainties have been factored into the quantification of 95-100%, 99-100%, etc, in the various chapters of the IPCC report.

Note 2:  There are some complications with defining exactly what system is under review. For example, do we take the current solar output, current obliquity, precession and eccentricity as fixed? If so, then any statistics will be calculated for a condition that will anyway be changing. Alternatively, we can take these values as changing inputs in so far as we know the changes – which is true for obliquity, precession and eccentricity but not for solar output.

The details don’t really alter the main point of this article.


I’ve been somewhat sidetracked on this series, mostly by starting up a company and having no time, but also by the voluminous distractions of IPCC AR5. The subject of attribution could be a series by itself but as I started the series Natural Variability and Chaos it makes sense to weave it into that story.

In Part One and Part Two we had a look at chaotic systems and what that might mean for weather and climate. I was planning to develop those ideas a lot more before discussing attribution, but anyway..

AR5, Chapter 10: Attribution is 85 pages on the idea that the changes over the last 50 or 100 years in mean surface temperature – and also some other climate variables – can be attributed primarily to anthropogenic greenhouse gases.

The technical side of the discussion fascinated me, but has a large statistical component. I’m a rookie with statistics, and maybe because of this, I’m often suspicious about statistical arguments.

Digression on Statistics

The foundation of a lot of statistics is the idea of independent events. For example, spin a roulette wheel and you get a number between 0 and 36 and a color that is red, black – or if you’ve landed on a zero, neither.

The statistics are simple – each spin of the roulette wheel is an independent event – that is, it has no relationship with the last spin of the roulette wheel. So, looking ahead, what is the chance of getting 5 two times in a row? The answer (with a 0 only and no “00” as found in some roulette tables) is 1/37 x 1/37 = 0.073%.

However, after you have spun the roulette wheel and got a 5, what is the chance of a second 5? It’s now just 1/37 = 2.7%. The past has no impact on the future statistics. Most of real life doesn’t correspond particularly well to this idea, apart from playing games of chance like poker and so on.
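The roulette arithmetic above can be checked with exact fractions (a small sketch of my own, assuming the single-zero wheel described in the text):

```python
from fractions import Fraction

# Single-zero wheel: 37 pockets, numbers 0-36, each spin independent.
p_five = Fraction(1, 37)              # chance of a 5 on any one spin
p_two_fives = p_five * p_five         # independent events multiply
p_second_given_first = p_five         # the first spin doesn't change the odds

as_percent = float(p_two_fives) * 100   # ≈ 0.073%
```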

I was in the gym the other day and although I try and drown it out with music from my iPhone, the Travesty (aka “the News”) was on some of the screens in the gym – with text of the “high points” on the screen aimed at people trying to drown out the annoying travestyreaders. There was a report that a new study had found that autism was caused by “Cause X” – I have blanked it out to avoid any unpleasant feeling for parents of autistic kids – or people planning on having kids who might worry about “Cause X”.

It did get me thinking – if you have, let’s say, 10,000 potential candidates for causing autism, and you test each one at the 95% confidence level (that is, a 5% chance of a false positive when there is no real effect), what is the outcome? Well, if there is a random spread of autism among the population with no actual cause (let’s say it is caused by a random genetic mutation with no link to any parental behavior, parental genetics or the environment) then you will expect to find about 500 “statistically significant” factors for autism simply by testing at the 95% level. That’s 500, when none of them are actually the real cause. It’s just chance. Plenty of fodder for pundits though.
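The multiple-comparisons trap is easy to demonstrate (a toy simulation of my own, not from any study): test 10,000 candidate “causes” against pure noise at the 95% level and count how many come out “statistically significant”.

```python
import random

# Under the null hypothesis (no real effect) a p-value is uniform on [0, 1],
# so about 5% of candidates clear the significance bar by chance alone.
random.seed(42)   # fixed seed so the run is reproducible

n_candidates = 10_000
alpha = 0.05
false_positives = sum(1 for _ in range(n_candidates) if random.random() < alpha)
# Expect roughly 500 "significant" causes - all of them spurious.
```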

That’s one problem with statistics – the answer you get unavoidably depends on your frame of reference.

The questions I have about attribution are unrelated to this specific point about statistics, but there are statistical arguments in the attribution field that seem fatally flawed. Luckily I’m a statistical novice so no doubt readers will set me straight.

On another unrelated point about statistical independence, only slightly more relevant to the question at hand, Pirtle, Meyer & Hamilton (2010) said:

In short, we note that GCMs are commonly treated as independent from one another, when in fact there are many reasons to believe otherwise. The assumption of independence leads to increased confidence in the ‘‘robustness’’ of model results when multiple models agree. But GCM independence has not been evaluated by model builders and others in the climate science community. Until now the climate science literature has given only passing attention to this problem, and the field has not developed systematic approaches for assessing model independence.

.. end of digression

Attribution History

In my efforts to understand Chapter 10 of AR5 I followed up on a lot of references and ended up winding my way back to Hegerl et al 1996.

Gabriele Hegerl is one of the lead authors of Chapter 10 of AR5, was one of the two coordinating lead authors of the Attribution chapter of AR4, and one of four lead authors on the relevant chapter of AR3 – and of course has a lot of papers published on this subject.

As is often the case, I find that to understand a subject you have to start with a focus on the earlier papers because the later work doesn’t make a whole lot of sense without this background.

This paper by Hegerl and her colleagues uses the work of one of the co-authors, Klaus Hasselmann – his 1993 paper “Optimal fingerprints for detection of time dependent climate change”.

Fingerprints, by the way, seems like a marketing term. Fingerprints evokes the idea that you can readily demonstrate that John G. Doe of 137 Smith St, Smithsville was at least present at the crime scene and there is no possibility of confusing his fingerprints with John G. Dode who lives next door even though their mothers could barely tell them apart.

This kind of attribution is more in the realm of “was it the 6ft bald white guy or the 5’5″ black guy”?

Well, let’s set aside questions of marketing and look at the details.

Detecting GHG Climate Change with Optimal Fingerprint Methods in 1996

The essence of the method is to compare observations (measurements) with:

  • model runs with GHG forcing
  • model runs with “other anthropogenic” and natural forcings
  • model runs with internal variability only

Then based on the fit you can distinguish one from the other. The statistical basis is covered in detail in Hasselmann 1993 and more briefly in this paper: Hegerl et al 1996 – both papers are linked below in the References.
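As a heavily simplified sketch of the optimal fingerprint idea (my own toy construction, not the actual Hasselmann 1993 or Hegerl et al 1996 procedure): rotate the signal pattern by the inverse of the internal-variability covariance, so that the noisiest spatial directions are down-weighted. This can only improve the signal-to-noise ratio of the detection variable. All the numbers below are invented for illustration.

```python
import numpy as np

# A hypothetical "GHG fingerprint" pattern g over n spatial points, and
# internal variability whose noise level differs strongly from point to point.
rng = np.random.default_rng(0)
n = 50
g = rng.standard_normal(n)               # hypothetical signal pattern
noise_sd = np.linspace(0.2, 3.0, n)      # per-point noise standard deviation
C = np.diag(noise_sd ** 2)               # noise covariance (diagonal toy case)

def snr(f):
    # signal-to-noise ratio of the detection variable d = f . x
    return (f @ g) / np.sqrt(f @ C @ f)

f_opt = np.linalg.solve(C, g)   # optimal fingerprint f = C^{-1} g
snr_naive = snr(g)              # project observations straight onto g
snr_optimal = snr(f_opt)        # down-weight noisy directions first
```

The catch discussed in this article is right there in the construction: the whole method assumes you know the noise covariance C – the statistics of natural variability – which in practice comes from the climate models themselves.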

At this point I make another digression.. as regular readers know I am fully convinced that the increases in CO2, CH4 and other GHGs over the past 100 years or more can be very well quantified into “radiative forcing” and am 100% in agreement with the IPCC’s summary of the work of atmospheric physics over the last 50 years on this topic. That is, the increases in GHGs have led to something like a “radiative forcing” of 2.8 W/m² [corrected, thanks to niclewis].

And there isn’t any scientific basis for disputing this “pre-feedback” value. It’s simply the result of basic radiative transfer theory, well-established, and well-demonstrated in observations both in the lab and through the atmosphere. People confused about this topic are confused about science basics and comments to the contrary may be allowed or more likely will be capriciously removed due to the fact that there have been more than 50 posts on this topic (post your comments on those instead). See The “Greenhouse” Effect Explained in Simple Terms and On Uses of A 4 x 2: Arrhenius, The Last 15 years of Temperature History and Other Parodies.

Therefore, it’s “very likely” that the increases in GHGs over the last 100 years have contributed significantly to the temperature changes that we have seen.

To say otherwise – and still accept physics basics – means believing that the radiative forcing has been “mostly” cancelled out by feedbacks while internal variability has been amplified by feedbacks to cause a significant temperature change.

Yet this work on attribution seems to be fundamentally flawed.

Here was the conclusion:

We find that the latest observed 30-year trend pattern of near-surface temperature change can be distinguished from all estimates of natural climate variability with an estimated risk of less than 2.5% if the optimal fingerprint is applied.

With the caveats, that to me, eliminated the statistical basis of the previous statement:

The greatest uncertainty of our analysis is the estimate of the natural variability noise level..

..The shortcomings of the present estimates of natural climate variability cannot be readily overcome. However, the next generation of models should provide us with better simulations of natural variability. In the future, more observations and paleoclimatic information should yield more insight into natural variability, especially on longer timescales. This would enhance the credibility of the statistical test.

Earlier in the paper the authors said:

..However, it is generally believed that models reproduce the space-time statistics of natural variability on large space and long time scales (months to years) reasonably realistic. The verification of variability of CGMCs [coupled GCMs] on decadal to century timescales is relatively short, while paleoclimatic data are sparce and often of limited quality.

..We assume that the detection variable is Gaussian with zero mean, that is, that there is no long-term nonstationarity in the natural variability.

[Emphasis added].

The climate models used would be considered rudimentary by today’s standards. Three different coupled atmosphere-ocean GCMs were used. However, each of them required “flux corrections”.

This method was pretty much the standard until the post 2000 era. The climate models “drifted”, unless, in deity-like form, you topped up (or took out) heat and momentum from various grid boxes.

That is, the models themselves struggled (in 1996) to represent climate unless the climate modeler knew, and corrected for, the long term “drift” in the model.


In the next article we will look at more recent work in attribution and fingerprints and see whether the field has developed.

But in this article we see that the conclusion of an attribution study in 1996 was that there was only a “2.5% chance” that recent temperature changes could be attributed to natural variability. At the same time, the question of how accurate the models were in simulating natural variability was noted but never quantified. And the models were all “flux corrected”. This means that some aspects of the long term statistics of climate were considered to be known – in advance.

So I find it difficult to accept any statistical significance in the study at all.

If the finding instead was introduced with the caveat “assuming the accuracy of our estimates of long term natural variability of climate is correct..” then I would probably be quite happy with the finding. And that question is the key.

The question should be:

What is the likelihood that climate models accurately represent the long-term statistics of natural variability?

  • Virtually certain
  • Very likely
  • Likely
  • About as likely as not
  • Unlikely
  • Very unlikely
  • Exceptionally unlikely

So far I have yet to run across a study that poses this question.


Bindoff, N.L., et al, 2013: Detection and Attribution of Climate Change: from Global to Regional. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change

Detecting greenhouse gas induced climate change with an optimal fingerprint method, Hegerl, von Storch, Hasselmann, Santer, Cubasch & Jones, Journal of Climate (1996)

What does it mean when climate models agree? A case for assessing independence among general circulation models, Zachary Pirtle, Ryan Meyer & Andrew Hamilton, Environ. Sci. Policy (2010)

Optimal fingerprints for detection of time dependent climate change, Klaus Hasselmann, Journal of Climate (1993)


In Part Seven – GCM I  through Part Ten – GCM IV we looked at GCM simulations of ice ages.

These were mostly attempts at “glacial inception”, that is, starting an ice age. But we also saw a simulation of the last 120 kyrs which attempted to model a complete ice age cycle including the last termination. As we saw, there were lots of limitations..

One condition for glacial inception, “perennial snow cover at high latitudes”, could be produced with a high-resolution coupled atmosphere-ocean GCM (AOGCM), but that model did suffer from the problem of having a cold bias at high latitudes.

The (reasonably accurate) simulation of a whole cycle including inception and termination came by virtue of having the internal feedbacks (ice sheet size & height and CO2 concentration) prescribed.

Just to be clear to new readers, these comments shouldn’t indicate that I’ve uncovered some secret that climate scientists are trying to hide, these points are all out in the open and usually highlighted by the authors of the papers.

In Part Nine – GCM III, one commenter highlighted a 2013 paper by Ayako Abe-Ouchi and co-workers, where the journal in question, Nature, had quite a marketing pitch on the paper. I made a brief comment on it in a later article in response to another question, including that I had emailed the lead author asking a question about the modeling work (how was a 120 kyr cycle actually simulated?).

Most recently, in Eighteen – “Probably Nonlinearity” of Unknown Origin, another commenter highlighted it, which rekindled my enthusiasm, and I went back and read the paper again. It turns out that my understanding of the paper had been wrong. It wasn’t really a GCM paper at all. It was an ice sheet paper.

There is a whole field of papers on ice sheet models deserving attention.

GCM review

Let’s review GCMs first of all to help us understand where ice sheet models fit in the hierarchy of climate simulations.

GCMs consist of a number of different modules coupled together. The first GCMs were mostly “atmospheric GCMs” = AGCMs, and either they had a “swamp ocean” = a mixed layer of fixed depth, or had prescribed ocean boundary conditions set from an ocean model or from an ocean reconstruction.

Less commonly, unless you worked just with oceans, there were ocean GCMs with prescribed atmospheric boundary conditions (prescribed heat and momentum flux from the atmosphere).

Then coupled atmosphere-ocean GCMs came along = AOGCMs. It was a while before these two parts matched up to the point where there was no “flux drift”, that is, no disappearing heat flux from one part of the model.

Why is it so difficult to get these two models working together? One important reason comes down to the time-scales involved, which result from the difference in heat capacity and momentum of the two parts of the climate system. The heat capacity and momentum of the ocean are much, much higher than those of the atmosphere.

And when we add ice sheets models – ISMs – we have yet another time scale to consider.

  • the atmosphere changes in days, weeks and months
  • the ocean changes in years, decades and centuries
  • the ice sheets change in centuries, millennia and tens of millennia

This creates a problem for climate scientists who want to apply the fundamental equations of heat, mass & momentum conservation along with parameterizations for “stuff not well understood” and “stuff quite-well-understood but whose parameters are sub-grid”. To run a high-resolution AOGCM for a 1,000-year simulation might consume 1 year of supercomputer time, and the ice sheet has barely moved during that period.
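One common way around this timescale gap is asynchronous coupling: step the slow component with a long timestep, using a quasi-equilibrated fast component as the boundary condition rather than running the fast model continuously. The sketch below is a deliberately crude toy of my own – the growth law, lapse rate and equilibrium thickness are all invented for illustration and are not from any real ISM or GCM.

```python
# Toy asynchronous coupling: a "fast" atmosphere, assumed fully relaxed at
# each coupling step, drives a "slow" ice sheet stepped a century at a time.

def surface_temperature(h, T_sealevel=-1.0, lapse=6.0):
    # Fast component in quasi-equilibrium: surface air temperature on the
    # ice sheet, cooled by elevation (assumed lapse rate of 6 K/km).
    return T_sealevel - lapse * (h / 1000.0)

def step_ice(h, T_s, dt_years, h_max=3000.0):
    # Slow component: net accumulation while the surface is below freezing,
    # tapering as the sheet approaches a toy equilibrium thickness h_max.
    if T_s < 0.0:
        dhdt = 0.05 * (1.0 - h / h_max)   # metres of thickening per year
    else:
        dhdt = -0.5                        # crude ablation when above freezing
    return max(h + dhdt * dt_years, 0.0)

h, dt = 0.0, 100.0                 # step the ice sheet a century at a time
for _ in range(500):               # 50 kyr of "simulation"
    T_s = surface_temperature(h)   # cheap stand-in for an equilibrated GCM
    h = step_ice(h, T_s, dt)
```

The expensive “GCM” is only consulted once per century of ice sheet time, which is the whole point of the scheme.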

Ice Sheet Models

Scientists who study ice sheets have a whole bunch of different questions. They want to understand how the ice sheets developed.

What makes them grow, shrink, move, slide, melt.. What parameters are important? What parameters are well understood? What research questions are most deserving of attention? And:

Does our understanding of ice sheet dynamics allow us to model the last glacial cycle?

To answer that question we need a model of ice sheet dynamics, and to it we need to apply boundary conditions from other “less interesting” models, like GCMs. As a result, there are a few approaches to setting the boundary conditions so we can do our interesting work of modeling ice sheets.

Before we look at that, let’s look at the dynamics of ice sheets themselves.

Ice Sheet Dynamics

First, in the theme of the last paper, Eighteen – “Probably Nonlinearity” of Unknown Origin, here is Marshall & Clark 2002:

The origin of the dominant 100-kyr ice-volume cycle in the absence of substantive radiation forcing remains one of the most vexing questions in climate dynamics

We can add that to the 34 papers reviewed in that previous article. This paper by Marshall & Clark is definitely a good quick read for people who want to understand ice sheets a little more.

Ice doesn’t conduct a lot of heat – it is a very good insulator. So the important things with ice sheets happen at the top and the bottom.

At the top, ice melts, and the water refreezes, runs off or evaporates. In combination, the loss is called ablation. Then we have precipitation that adds to the ice sheet. So the net effect determines what happens at the top of the ice sheet.

At the bottom, when the ice sheet is very thin, heat can be conducted through from the atmosphere to the base and make it melt – if the atmosphere is warm enough. As the ice sheet gets thicker, very little heat is conducted through. However, there are two important sources of basal heating which result in “basal sliding”. One source is geothermal energy. This is around 0.1 W/m², which is very small unless we are dealing with an insulating material (like ice) and lots of time (like ice sheets). The other source is the shear stress in the ice sheet, which can create a lot of heat via the mechanics of deformation.
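A back-of-envelope calculation (mine, using round numbers) shows why 0.1 W/m² matters under thick ice: ice conducts heat poorly (k ≈ 2.2 W/m/K), so sustaining even that small flux through a thick sheet implies a large temperature difference between base and surface. This is an upper bound – in reality the basal temperature is capped at the pressure-melting point.

```python
# Steady-state conduction through an ice sheet: q = k * dT/dz  =>  dT/dz = q/k

k_ice = 2.2         # thermal conductivity of ice, W/(m.K), approximate
q_geo = 0.1         # geothermal heat flux, W/m^2, order of magnitude
thickness = 2000.0  # ice sheet thickness in metres (illustrative)

gradient = q_geo / k_ice        # ~ 0.045 K per metre
delta_T = gradient * thickness  # base warmer than surface by this much
```

Roughly 90 K across 2 km of ice – which is why the base of a thick ice sheet can reach the melting point even when the surface is far below freezing.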

Once the ice sheet is able to start sliding, the dynamics create a completely different result compared to an ice sheet “cold-pinned” to the rock underneath.

Some comments from Marshall and Clark:

Ice sheet deglaciation involves an amount of energy larger than that provided directly from high-latitude radiation forcing associated with orbital variations. Internal glaciologic, isostatic, and climatic feedbacks are thus essential to explain the deglaciation.

..Moreover, our results suggest that thermal enabling of basal flow does not occur in response to surface warming, which may explain why the timing of the Termination II occurred earlier than predicted by orbital forcing [Gallup et al., 2002].

Results suggest that basal temperature evolution plays an important role in setting the stage for glacial termination. To confirm this hypothesis, model studies need improved basal process physics to incorporate the glaciological mechanisms associated with ice sheet instability (surging, streaming flow).

..Our simulations suggest that a substantial fraction (60% to 80%) of the ice sheet was frozen to the bed for the first 75 kyr of the glacial cycle, thus strongly limiting basal flow. Subsequent doubling of the area of warm-based ice in response to ice sheet thickening and expansion and to the reduction in downward advection of cold ice may have enabled broad increases in geologically- and hydrologically-mediated fast ice flow during the last deglaciation.

Increased dynamical activity of the ice sheet would lead to net thinning of the ice sheet interior and the transport of large amounts of ice into regions of intense ablation both south of the ice sheet and at the marine margins (via calving). This has the potential to provide a strong positive feedback on deglaciation.

The timescale of basal temperature evolution is of the same order as the 100-kyr glacial cycle, suggesting that the establishment of warm-based ice over a large enough area of the ice sheet bed may have influenced the timing of deglaciation. Our results thus reinforce the notion that at a mature point in their life cycle, 100-kyr ice sheets become independent of orbital forcing and affect their own demise through internal feedbacks.

[Emphasis added]

In this article we will focus on a 2007 paper by Ayako Abe-Ouchi, T Segawa & Fuyuki Saito. This paper is essentially the same modeling approach used in Abe-Ouchi’s 2013 Nature paper.

The Ice Model

The ice sheet model has a time step of 2 years, with a 1° × 1° grid (latitude × longitude) from 30°N to the north pole, and 20 vertical levels.

Equations for the ice sheet include sliding velocity, ice sheet deformation, the heat transfer through the lithosphere, the bedrock elevation and the accumulation rate on the ice sheet.

Note, there is a reference that some of the model is based on work described in Sensitivity of Greenland ice sheet simulation to the numerical procedure employed for ice sheet dynamics, F Saito & A Abe-Ouchi, Ann. Glaciol., (2005) – but I don’t have access to this journal. (If anyone does, please email the paper to me at scienceofdoom – you know what goes here – gmail.com).

How did they calculate the accumulation on the ice sheet? There is an equation:


Ts is the surface temperature, dP is a measure of aridity and Aref is a reference value for accumulation. This is a highly parameterized method of calculating how much thicker or thinner the ice sheet is growing. The authors reference Marshall et al 2002 for this equation, and that paper is very instructive in how poorly understood ice sheet dynamics actually are.

Here is one part of the relevant section in Marshall et al 2002:

..For completeness here, note that we have also experimented with spatial precipitation patterns that are based on present-day distributions.

Under this treatment, local precipitation rates diminish exponentially with local atmospheric cooling, reflecting the increased aridity that can be expected under glacial conditions (Tarasov and Peltier, 1999).

Paleo-precipitation under this parameterization has the form:

P(λ,θ,t) = Pobs(λ,θ) · (1+dp)^ΔT(λ,θ,t) · exp[βp · max(hs(λ,θ,t) − ht, 0)]       (18)

The parameter dp in this equation represents the percentage of drying per °C; Tarasov and Peltier (1999) choose a value of 3% per °C; dp = 0.03.

[Emphasis added, color added to highlight the relevant part of the equation]

So dp is a parameter that attempts to account for increasing aridity in colder glacial conditions, and in their 2002 paper Marshall et al describe it as 1 of 4 “free parameters” that are investigated to see what effect they have on ice sheet development around the LGM.
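Equation (18) is straightforward to express as code. A sketch of my own follows: dp = 0.03 follows Tarasov and Peltier (1999) as quoted above, but βp and ht are free parameters in Marshall et al 2002 whose values I have not taken from the paper – the numbers below are placeholders for illustration.

```python
import math

def paleo_precip(P_obs, dT, h_s, d_p=0.03, beta_p=-0.001, h_t=2000.0):
    """Equation (18): P = P_obs * (1+d_p)**dT * exp(beta_p * max(h_s - h_t, 0)).

    dT is the local temperature change relative to present (negative when
    colder), so each degree of cooling dries the climate by ~d_p; the
    exponential term adds extra drying above the elevation threshold h_t
    (beta_p negative). beta_p and h_t here are illustrative placeholders."""
    return P_obs * (1.0 + d_p) ** dT * math.exp(beta_p * max(h_s - h_t, 0.0))

# 10 degrees of local cooling, below the elevation threshold:
drier = paleo_precip(1.0, -10.0, 1000.0)   # about 26% drier than present
```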

Abe-Ouchi and co-authors took a slightly different approach that certainly seems like an improvement over Marshall et al 2002:


So their value of aridity is just a linear function of ice sheet area – from zero to a fixed value, rather than a fixed value no matter the ice sheet size.
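As I read it, the refinement could be sketched like this – the reference area and maximum dp below are hypothetical numbers for illustration, not values from Abe-Ouchi et al 2007:

```python
def aridity_dp(ice_area_m2, area_ref_m2=2.0e13, d_p_max=0.03):
    """d_p rises linearly with ice sheet area, capped at d_p_max.

    area_ref_m2 and d_p_max are illustrative placeholders, not the
    paper's values."""
    return d_p_max * min(ice_area_m2 / area_ref_m2, 1.0)
```

So a small ice sheet produces little drying, and the full glacial ice extent produces the maximum drying, instead of a single fixed value applying throughout.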

How is Ts calculated? That comes, in a way, from the atmospheric GCM, but probably not in a way that readers might expect. So let’s have a look at the GCM then come back to this calculation of Ts.

Atmospheric GCM Simulations

There were three groups of atmospheric GCM simulations, with parameters selected to try and tease out which factors have the most impact.

Group One: high resolution GCM – 1.1º latitude and longitude and 20 atmospheric vertical levels with fixed sea surface temperatures. So there is no ocean model; the ocean temperatures are prescribed. Within this group, four experiments:

  • A control experiment – modern day values
  • LGM (last glacial maximum) conditions for CO2 (note 1) and orbital parameters with
    • no ice
    • LGM ice extent but zero thickness
    • LGM ice extent and LGM thickness

So the idea is to compare results with and without the actual ice sheet to see how much impact orbital and CO2 values have vs the effect of the ice sheet itself – and then for the ice sheet to see whether the albedo or the elevation has the most impact. Why the elevation? Well, if an ice sheet is 1 km thick then the surface temperature will be something like 6ºC colder. (Exactly how much colder is an interesting question, because we don’t know what the lapse rate actually was). There will also be an effect on atmospheric circulation – you’ve stuck a “mountain range” in the path of the wind, so this changes the circulation.
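The arithmetic behind “1 km thick means ~6ºC colder” is just lapse rate × height, but the choice of lapse rate is exactly the uncertainty discussed later, so it is worth tabulating (my sketch, using the lapse rate values that appear in the paper’s discussion):

```python
# Elevation-induced surface cooling for an ice sheet of given height,
# under different assumed lapse rates (K/km).

def elevation_cooling(height_km, lapse_K_per_km):
    return lapse_K_per_km * height_km

coolings = {lapse: elevation_cooling(1.0, lapse) for lapse in (5.0, 6.5, 8.0)}
# A 1 km sheet is 5-8 degrees colder at the surface depending on the lapse rate.
```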

Each of the four simulations was run for 11 or 13 years and the last 10 years’ results used:

From Abe-Ouchi et al 2007

Figure 1

It’s clear from this simulation that the full result (left graphic) is mostly caused by the ice sheet (right graphic) rather than CO2, orbital parameters and the SSTs (middle graphic). And the next figure in the paper shows the breakdown between the albedo effect and the height of the ice sheet:

From Abe-Ouchi et al 2007

Figure 2 – same color legend as figure 1

Now, a lapse rate of 5 K/km was used. What would happen if a lapse rate of 9 K/km were used instead? No simulations were run with different lapse rates. The authors comment:

..Other lapse rates could be used which vary depending on the altitude or location, while a lapse rate larger than 7 K/km or smaller than 4 K/km is inconsistent with the overall feature. This is consistent with the finding of Krinner and Genthon (1999), who suggest a lapse rate of 5.5 K/km, but is in contrast with other studies which have conventionally used lapse rates of 8 K/km or 6.5 K/km to drive the ice sheet models..

Group Two – medium resolution GCM 2.8º latitude and longitude and 11 atmospheric vertical levels, with a “slab ocean” – this means the ocean is treated as one temperature through the depth of some fixed layer, like 50m. So it is allowing the ocean to be there as a heat sink/source responding to climate, but no heat transfer through to a deeper ocean.

There were five simulations in this group, one control (modern day everything) and four with CO2 & orbital parameters at the LGM:

  • no ice sheet
  • LGM ice extent, but flat
  • 12 kyrs ago ice extent, but flat
  • 12 kyrs ago ice extent and height

So this group takes a slightly more detailed look at ice sheet impact. Not surprisingly the simulation results give intermediate values for the ice sheet extent at 12 kyrs ago.

Group Three – medium resolution GCM as in group two, and ice sheets either at present day or LGM extent, with nine simulations covering different orbital values and different CO2 values: present day, 280 ppm or 200 ppm.

There was also some discussion of the impact of different climate models. I found this fascinating because the difference between CCSM and the other models appears to be as great as the difference in figure 2 (above) which identifies the albedo effect as more significant than the lapse rate effect:

From Abe-Ouchi et al 2007

Figure 3

And this naturally has me wondering about how much significance to put on the GCM simulation results shown in the paper. The authors also comment:

Based on these GCM results we conclude there remains considerable uncertainty over the actual size of the albedo effect.

Given there is also uncertainty over the lapse rate that actually occurred, it seems there is considerable uncertainty over everything.

Now let’s return to the ice sheet model, because so far we haven’t seen any output from the ice sheet model.

GCM Inputs into the Ice Sheet Model

The equation which calculates the change in accumulation on the ice sheet uses a fairly arbitrary parameter dp, with (1+dp) raised to the power of Ts.
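A sketch of that functional form (the reference precipitation and sign conventions here are my assumptions, not the paper’s exact equation):

```python
def accumulation(p_ref, dp, t_s):
    """Accumulation as a reference precipitation scaled by (1 + dp)**Ts.
    With dp > 0, a surface temperature below the reference (Ts < 0)
    suppresses accumulation -- the 'colder means drier' effect."""
    return p_ref * (1.0 + dp) ** t_s
```

With dp = 0.05, a surface 10ºC below reference gets roughly 60% of the reference accumulation.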

The ice sheet model has a 2 year time step. The GCM results don’t provide Ts across the surface grid every 2 years; they are snapshots for certain conditions. The ice sheet model uses this calculation for Ts:

Ts = Tref + ΔTice + ΔTCO2 + ΔTinsol + ΔTnonlinear

Tref is the reference temperature, which is present day climatology. The other ΔT (change in temperature) values are basically linear interpolations between two of the GCM snapshot simulations. Here is the ΔTCO2 value:



So think of it like this – we have found Ts at one value of CO2 higher and one value of CO2 lower from some snapshot GCM simulations. We plot a graph with CO2 on the x-axis and Ts on the y-axis, with just two points on the graph from these two experiments, and we draw a straight line between the two points.

To calculate Ts at, say, 50 kyrs ago we look up the CO2 value at 50 kyrs from ice core data, and read the value of ΔTCO2 from the straight line on the graph.
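As a sketch of the two-point interpolation (the snapshot values here are hypothetical, just for illustration):

```python
def delta_t_co2(co2, co2_lo, dt_lo, co2_hi, dt_hi):
    """Linearly interpolate the temperature anomaly between two GCM
    snapshots run at a low and a high CO2 value."""
    frac = (co2 - co2_lo) / (co2_hi - co2_lo)
    return dt_lo + frac * (dt_hi - dt_lo)

# e.g. with snapshots at 200 ppm (-5 K) and 280 ppm (0 K):
anomaly = delta_t_co2(240.0, 200.0, -5.0, 280.0, 0.0)  # -2.5 K
```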

Likewise for the other parameters. Here is ΔTinsol:



So the method is extremely basic. Of course the model needs something..

Now, given that we have inputs for accumulation on the ice sheet, the ice sheet model can run. Here are the results. The third graph (3) is the sea level from proxy results so is our best estimate of reality, with (4) providing model outputs for different parameters of d0 (“desertification” or aridity) and lapse rate, and (5) providing outputs for different parameters of albedo and lapse rate:

From Abe-Ouchi et al 2007

Figure 4

There are three main points of interest.

Firstly, small changes in the parameters cause huge changes in the final results. The idea of aridity over ice sheets as just a linear function of ice sheet size is itself very questionable. The idea of a constant lapse rate is extremely questionable. Together, using values that appear realistic, we can model much less ice sheet growth (sea level drop) or many times greater ice sheet growth than actually occurred.

Secondly, notice the time of maximum ice sheet (lowest sea level): the realistic results show sea level starting to rise around 12 kyrs ago, rather than the actual 18 kyrs ago. This might be due to the impact of orbital factors, which were at quite a low level (i.e., high latitude summer insolation was quite low) when the last ice age finished, but which have quite an impact in the model. Of course, we have covered this “problem” in a few previous articles in this series. In the context of this model it might be that the impact of the southern hemisphere leading the globe out of the last ice age is completely missing.

Thirdly – this might be clear to some people, but for many new to this kind of model it won’t be obvious – the inputs for the model are drawn from the actual history. The model doesn’t simulate the start and end of the last ice age “by itself”. We feed into the GCM a few CO2 values. We feed into the GCM a few ice sheet extents and heights that (as best as can be reconstructed) actually occurred. The GCM gives us some temperature values for these snapshot conditions.

In the case of this ice sheet model, every 2 years (each time step of the ice sheet model) we “look up” the actual value of ice sheet extent and atmospheric CO2 and we linearly interpolate the GCM output temperatures for the current year. And then we crudely parameterize these values into some accumulation rate on the ice sheet.
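Putting the pieces together, the driving of the ice sheet model can be sketched as a loop over the proxy record (the records and snapshot values below are hypothetical placeholders):

```python
def interp(x, x0, y0, x1, y1):
    """Two-point linear interpolation."""
    return y0 + (x - x0) * (y1 - y0) / (x1 - x0)

def surface_temp_series(co2_record, t_ref, co2_lo, dt_lo, co2_hi, dt_hi):
    """For each time step, look up CO2 from a (hypothetical) ice-core
    record and add the interpolated anomaly to the present-day reference
    climatology. Only the CO2 term is shown; the ice extent and
    insolation terms follow the same look-up-and-interpolate pattern."""
    return [t_ref + interp(c, co2_lo, dt_lo, co2_hi, dt_hi)
            for c in co2_record]

temps = surface_temp_series([200.0, 240.0, 280.0], 10.0,
                            200.0, -5.0, 280.0, 0.0)  # [5.0, 7.5, 10.0]
```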


This is our first foray into ice sheet models. It should be clear that the results are interesting but we are at a very early stage in modeling ice sheets.

The problems are:

  • the computational load required to run a GCM coupled with an ice sheet model over 120 kyrs is much too high, so it can’t be done
  • the resulting tradeoff uses a few GCM snapshot values to feed linearly interpolated temperatures into a parameterized accumulation equation
  • the effect of lapse rate on the results is extremely large and the actual value for lapse rate over ice sheets is very unlikely to be a constant and is also not known
  • our understanding of the fundamental equations of ice sheets is still at an early stage, as readers can see by reviewing the first two papers below, especially the second one

 Articles in this Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factors of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes


Basal temperature evolution of North American ice sheets and implications for the 100-kyr cycle, SJ Marshall & PU Clark, GRL (2002) – free paper

North American Ice Sheet reconstructions at the Last Glacial Maximum, SJ Marshall, TS James, GKC Clarke, Quaternary Science Reviews (2002) – free paper

Climatic Conditions for modelling the Northern Hemisphere ice sheets throughout the ice age cycle, A Abe-Ouchi, T Segawa, and F Saito, Climate of the Past (2007) – free paper

Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume, Ayako Abe-Ouchi, Fuyuki Saito, Kenji Kawamura, Maureen E. Raymo, Jun’ichi Okuno, Kunio Takahashi & Heinz Blatter, Nature (2013) – paywall paper


Note 1 – the value of CO2 used in these simulations was 200 ppm, while CO2 at the LGM was actually 180 ppm. Apparently this value of 200 ppm was used in a major inter-comparison project (the PMIP), but I don’t know the reason why. PMIP = Paleoclimate Modelling Intercomparison Project, Joussaume and Taylor, 1995.


A while ago, in Part Three – Hays, Imbrie & Shackleton we looked at a seminal paper from 1976.

In that paper, the data now stretched back far enough in time for the authors to demonstrate something of great importance. They showed that changes in ice volume recorded by isotopes in deep ocean cores (see Seventeen – Proxies under Water I) had significant signals at the frequencies of obliquity, precession and one of the frequencies of eccentricity.

Obliquity is the change in the tilt of the earth’s axis, on a period of around 40 kyrs. Precession is the change in where the closest approach to the sun falls within the year (right now the closest approach is in NH winter), on a period of around 20 kyrs (see Four – Understanding Orbits, Seasons and Stuff).

Both of these involve significant redistributions of solar energy. Obliquity changes the amount of solar insolation received by the poles versus the tropics. Precession changes the amount of solar insolation at high latitudes in summer versus winter. (Neither changes total solar insolation). This was nicely in line with Milankovitch’s theory – for a recap see Part Three.
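The kind of spectral analysis Hays et al performed can be illustrated with a synthetic record (this is made-up data, not a real core; it simply shows how orbital-band peaks appear in a power spectrum):

```python
import numpy as np

# Synthetic "ice volume" record: 600 kyr sampled every 1 kyr, built from
# an obliquity-like (41 kyr) and a precession-like (23 kyr) component
# plus noise.
rng = np.random.default_rng(0)
t = np.arange(600.0)  # kyr
series = (np.sin(2 * np.pi * t / 41.0)
          + 0.5 * np.sin(2 * np.pi * t / 23.0)
          + 0.2 * rng.standard_normal(t.size))

# Power spectrum; the largest non-zero-frequency peak sits near 1/41 kyr
power = np.abs(np.fft.rfft(series)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0)  # cycles per kyr
peak_period = 1.0 / freqs[1:][np.argmax(power[1:])]
```

With real cores the signal is much noisier and the dating is uncertain, which is why demonstrating these peaks in 1976 was such an achievement.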

I’m going to call this part Theory A, and paraphrase it like this:

The waxing and waning of the ice sheets has 40 kyr and 20 kyr periods, which are caused by the changing distribution of solar insolation due to obliquity and precession.

The largest signal in ocean cores over the last 800 kyrs has a component of about 100 kyrs (with some variability). That is, the ice ages start and end with a period of about 100 kyrs. Eccentricity varies on time periods of 100 kyrs and 400 kyrs, but with a very small change in total insolation (see Part Four).

Hays et al produced a completely separate theory, which I’m going to call Theory B, and paraphrase it like this:

The start and end of the ice ages have a 100 kyr period, which is caused by the changing eccentricity of the earth’s orbit.

Theory A and Theory B are both in the same paper and are both theories that “link ice ages to orbital changes”. In their paper they demonstrated Theory A but did not prove or demonstrate Theory B. Unfortunately, Theory B is the much more important one.

Here is what they said:

The dominant 100,000 year climatic component has an average period close to, and is in phase with, orbital eccentricity. Unlike the correlations between climate and the higher frequency orbital variations (which can be explained on the assumption that the climate system responds linearly to orbital forcing) an explanation of the correlations between climate and eccentricity probably requires an assumption of non-linearity.

[Emphasis added].

The only quibble I have with the above paragraph is the word “probably”. This word should have been removed. There is no doubt. An assumption of non-linearity is required as a minimum.

Now why does it “probably” or “definitely” require an assumption of non-linearity? And what does that mean?

A linearity assumption is one where the output is proportional to the input. For example: double the weight of a vehicle (for the same driving force) and the acceleration halves. Most things in the real world, and most things in climate, are non-linear. So for example, double the absolute temperature of a body and the emitted radiation goes up by a factor of 16.
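The radiation example is the Stefan–Boltzmann law; as a quick check:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_radiation(t_kelvin):
    """Blackbody emission per unit area: proportional to T^4,
    a strongly non-linear response to temperature."""
    return SIGMA * t_kelvin ** 4

# doubling the absolute temperature multiplies emission by 2**4
ratio = emitted_radiation(600.0) / emitted_radiation(300.0)  # ~16
```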

However, there isn’t a principle, an energy balance equation or even a climate model that can take this tiny change in incoming solar insolation over a 100 kyr period and cause the end of an ice age.

In fact, their statement wasn’t so much “an assumption of non-linearity” as “some non-linear relationship that we are not currently able to model or demonstrate, some non-linear relationship we have yet to discover”.

There is nothing wrong with their original statement as such (apart from “probably”), but an alternative way of writing from the available evidence could be:

The dominant 100,000 year climatic component has an average period close to, and is in phase with, orbital eccentricity. Unlike the correlations between climate and the higher frequency orbital variations.. an explanation of the correlations between climate and eccentricity is as yet unknown, remains to be demonstrated and there may in fact be no relationship at all.

Unfortunately, because Theory A and Theory B were in the same paper, because Theory A is well demonstrated, and because there is no accepted alternative on the cause of the start and end of ice ages (there are alternative hypotheses around natural resonance), Theory B has become “well accepted”.

And because everyone familiar with climate science knows that Theory A is almost certainly true, when you point out that Theory B doesn’t have any evidence, many people are confused and wonder why you are rejecting well-proven theories.

In the series so far, except in occasional comments, I haven’t properly explained the separation between the two theories and this article is an attempt to clear that up.

Now I will produce a sufficient quantity of papers and quote their “summary of the situation so far” to demonstrate that there isn’t any support for Theory B. The only support is the fact that one component frequency of eccentricity is “similar” to the frequency of the ice age terminations/inceptions, plus the safety in numbers support of everyone else believing it.

One other comment on paleoclimate papers attempting to explain the 100 kyr period: it is the norm for published papers to introduce a new hypothesis. That doesn’t make the new hypothesis correct.

So if I produce a paper, and quote the author’s summary of “the state of work up to now” and that paper then introduces their new hypothesis which claims to perhaps solve the mystery, I haven’t quoted the author’s summary out of context.

Let’s take it as read that lots of climate scientists think they have come up with something new. What we are interested in is their review of the current state of the field and the evidence they cite in support of Theory B.

Before producing the papers I also want to explain why I think the idea behind Theory B is so obviously flawed, and not just because 38 years after Hays, Imbrie & Shackleton the mechanism is still a mystery.

Why Theory B is Unsupportable

If a non-linear relationship is to be established between a 0.1% change in insolation over a long period and the end of an ice age, it must also explain why significant temperature fluctuations in high latitude regions during glacials do not cause a termination.

Here are two high resolution examples from a Greenland ice core (NGRIP) during the last glaciation:

From Wolff et al 2010

The “non-linearity” hypothesis has more than one hill to climb. This second challenge is even more difficult than the first.

A tiny change in total insolation causes, via a yet to be determined non-linear effect, the end of each ice age, but this same effect does not amplify frequent large temperature changes of long duration to end an ice age (note 1).

Food for thought.

Theory C Family

Many papers which propose orbital reasons for ice age terminations do not propose eccentricity variations as the cause. Instead, they attribute terminations to specific insolation changes at specific latitudes, or various combinations of orbital factors completely unrelated to eccentricity variations. See Part Six – “Hypotheses Abound”.

Of course, one of these might be right. For now I will call them the Theory C Family, so we remember that Theory C is not one theory, but a whole range of mostly incompatible theories.

But remember where the orbital hypothesis for ice age termination came from – the 100,000 year period of eccentricity variation “matching” (kind of matching) the 100,000 year period of the ice ages.

The Theory C Family does not have that starting point.


So let’s move onto papers. I started by picking off papers from the right category in my mind map that might have something to say, then I opened up every one of about 300 papers in my ice ages folder (alphabetical by author) and checked to see whether they had something to say on the cause of ice ages in the abstract or introduction. Most papers don’t have a comment because they are about details like d18O proxies, or the CO2 concentration in the Vostok ice core, etc. That’s why there aren’t 300 citations here.

And bold text within a citation is added by me for emphasis.

I looked for their citations (evidence) to back up any claim that orbital variations caused ice age terminations. In some cases I pull up what the citations said.


Last Interglacial Climates, Kukla et al (2002), by a cast of many including the famous Wallace S. Broecker, John Imbrie and Nicholas J. Shackleton:

At the end of the last interglacial period, over 100,000 yr ago, the Earth’s environments, similar to those of today, switched into a profoundly colder glacial mode. Glaciers grew, sea level dropped, and deserts expanded. The same transition occurred many times earlier, linked to periodic shifts of the Earth’s orbit around the Sun. The mechanism of this change, the most important puzzle of climatology, remains unsolved.

Note that “linked to periodic shifts of the Earth’s orbit” is followed by an “unknown mechanism”. Two of the authors were the coauthors of the classic 1976 paper that is most commonly cited as evidence for Theory B.


Millennial-scale variability during the last glacial: The ice core record, Wolff, Chappellaz, Blunier, Rasmussen & Svensson (2010)

The most significant climate variability in the Quaternary record is the alternation between glacial and interglacial, occurring at approximately 100 ka periodicity in the most recent 800 ka. This signal is of global scale, and observed in all climate records, including the long Antarctic ice cores (Jouzel et al., 2007a) and marine sediments (Lisiecki and Raymo, 2005). There is a strong consensus that the underlying cause of these changes is orbital (i.e. due to external forcing from changes in the seasonal and latitudinal pattern of insolation), but amplified by a whole range of internal factors (such as changes in greenhouse gas concentration and in ice extent).

Note the lack of citation for the underlying cause being orbital. However, as we will see, there is “strong consensus”. In this specific paper, from the words used, I believe the authors are supporting the Theory C Family, not Theory B.


The last glacial cycle: transient simulations with an AOGCM, Robin Smith & Jonathan Gregory (2012)

It is generally accepted that the timing of glacials is linked to variations in solar insolation that result from the Earth’s orbit around the sun (Hays et al. 1976; Huybers and Wunsch 2005). These solar radiative anomalies must have been amplified by feedback processes within the climate system, including changes in atmospheric greenhouse gas (GHG) concentrations (Archer et al. 2000) and ice-sheet growth (Clark et al. 1999), and whilst hypotheses abound as to the details of these feedbacks, none is without its detractors and we cannot yet claim to know how the Earth system produced the climate we see recorded in numerous proxy records.

I think I will classify this one as “Still a mystery”.

Note that support for “linkage to variations in solar insolation” consists of Hays et al 1976 – Theory B – and Huybers and Wunsch 2005 who propose a contradictory theory (obliquity) – Theory C Family. In this case they absolve themselves by pointing out that all the theories have flaws.


The timing of major climate terminations, ME Raymo (1997)

For the past 20 years, the Milankovitch hypothesis, which holds that the Earth’s climate is controlled by variations in incoming solar radiation tied to subtle yet predictable changes in the Earth’s orbit around the Sun [Hays et al., 1976], has been widely accepted by the scientific community. However, the degree to which and the mechanisms by which insolation variations control regional and global climate are poorly understood. In particular, the “100-kyr” climate cycle, the dominant feature of nearly all climate records of the last 900,000 years, has always posed a problem to the Milankovitch hypothesis..

..time interval between terminations is not constant; it varies from 84 kyr between Terminations IV and V to 120 kyr between Terminations III and II.

“Still a mystery”. (Maureen Raymo has written many papers on ice ages, is the coauthor of the LR04 ocean core database and cannot be considered an outlier). Her paper claims she solves the problem:

In conclusion, it is proposed that the interaction between obliquity and the eccentricity-modulation of precession as it controls northern hemisphere summer radiation is responsible for the pattern of ice volume growth and decay observed in the late Quaternary.

Solution was unknown, but new proposed solution is from the Theory C Family.


Glacial termination: sensitivity to orbital and CO2 forcing in a coupled climate system model, Yoshimori, Weaver, Marshall & Clarke (2001)

Glaciation (deglaciation) is one of the most extreme and fundamental climatic events in Earth’s history.. As a result, fluctuations in orbital forcing (e.g. Berger 1978; Berger and Loutre 1991) have been widely recognised as the primary triggers responsible for the glacial-interglacial cycles (Berger 1988; Bradley 1999; Broecker and Denton 1990; Crowley and North 1991; Imbrie and Imbrie 1979). At the same time, these studies revealed the complexity of the climate system, and produced several paradoxes which cannot be explained by a simple linear response of the climate system to orbital forcing.

At this point I was interested to find out how well these 4 papers cited (Berger 1988; Bradley 1999; Broecker and Denton 1990; Crowley and North 1991; Imbrie and Imbrie 1979) backed up the evidence for orbital forcing being the primary triggers for glacial cycles.

Broecker & Denton (1990) is in Scientific American which I don’t think counts as a peer-reviewed journal (even though a long time ago I subscribed to it and thought it was a great magazine). I was able to find the abstract only, which coincides with their peer-reviewed paper The Role of Ocean-Atmosphere Reorganization in Glacial Cycles the same year in Quaternary Science Reviews, so I’ll assume they are media hounds promoting their peer-reviewed paper for a wider audience and look at the peer-reviewed paper. After commenting on the problems:

Such a linkage cannot explain synchronous climate changes of similar severity in both polar hemispheres. Also, it cannot account for the rapidity of the transition from full glacial toward full interglacial conditions. If glacial climates are driven by changes in seasonality, then another linkage must exist.

they state:

We propose that Quaternary glacial cycles were dominated by abrupt reorganizations of the ocean- atmosphere system driven by orbitally induced changes in fresh water transports which impact salt structure in the sea. These reorganizations mark switches between stable modes of operation of the ocean-atmosphere system. Although we think that glacial cycles were driven by orbital change, we see no basis for rejecting the possibility that the mode changes are part of a self- sustained internal oscillation that would operate even in the absence of changes in the Earth’s orbital parameters. If so, as pointed out by Saltzman et al. (1984), orbital cycles can merely modulate and pace a self-oscillating climate system.

So is this paper evidence for Theory B or the Theory C Family? “..we think that..” “..we see no basis for rejecting the possibility.. self-sustained internal oscillation”. Is this evidence for the astronomical theory?

I can’t access Milankovitch theory and climate, Berger 1988 (thanks, Reviews of Geophysics!). If someone has it, please email it to me at scienceofdoom – you know what goes here – gmail.com. The other two references are books, so I can’t access them. Crowley & North 1991 is Paleoclimatology. Vol 16 of Oxford Monograph on Geology and Geophysics, OUP. Imbrie & Imbrie 1979 is Ice Ages: solving the mystery.


Glacial terminations as southern warmings without northern control, E. W. Wolff, H. Fischer and R. Röthlisberger (2009)

However, the reason for the spacing and timing of interglacials, and the sequence of events at major warmings, remains obscure.

“Still a mystery”. This is a little different from Wolff’s comment in the paper above. Elsewhere (see his comments cited in Eleven – End of the Last Ice age) he has stated that ice age terminations are not understood:

Between about 19,000 and 10,000 years ago, Earth emerged from the last glacial period. The whole globe warmed, ice sheets retreated from Northern Hemisphere continents and atmospheric composition changed significantly. Many theories try to explain what triggered and sustained this transformation (known as the glacial termination), but crucial evidence to validate them is lacking.


The Last Glacial Termination, Denton, Anderson, Toggweiler, Edwards, Schaefer & Putnam (2009)

A major puzzle of paleoclimatology is why, after a long interval of cooling climate, each late Quaternary ice age ended with a relatively short warming leg called a termination. We here offer a comprehensive hypothesis of how Earth emerged from the last global ice age..

“Still a mystery”


Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation, Shakun, Clark, He, Marcott, Mix, Zhengyu Liu, Otto-Bliesner,  Schmittner & Bard (2012)

Understanding the causes of the Pleistocene ice ages has been a significant question in climate dynamics since they were discovered in the mid-nineteenth century. The identification of orbital frequencies in the marine 18O/16O record, a proxy for global ice volume, in the 1970s demonstrated that glacial cycles are ultimately paced by astronomical forcing.

The citation is Hays, Imbrie & Shackleton 1976. Theory B with no support.


Northern Hemisphere forcing of Southern Hemisphere climate during the last deglaciation, He, Shakun, Clark, Carlson, Liu, Otto-Bliesner & Kutzbach (2013)

According to the Milankovitch theory, changes in summer insolation in the high-latitude Northern Hemisphere caused glacial cycles through their impact on ice-sheet mass balance. Statistical analyses of long climate records supported this theory, but they also posed a substantial challenge by showing that changes in Southern Hemisphere climate were in phase with or led those in the north.

The citation is Hays, Imbrie & Shackleton 1976. (Many of the same authors in this and the paper above).


Eight glacial cycles from an Antarctic ice core, EPICA Community Members (2004)

The climate of the last 500,000 years (500 kyr) was characterized by extremely strong 100-kyr cyclicity, as seen particularly in ice-core and marine-sediment records. During the earlier part of the Quaternary (before 1 million years ago; 1 Myr BP), cycles of 41 kyr dominated. The period in between shows intermediate behaviour, with marine records showing both frequencies and a lower amplitude of the climate signal. However, the reasons for the dominance of the 100-kyr (eccentricity) over the 41-kyr (obliquity) band in the later part of the record, and the amplifiers that allow small changes in radiation to cause large changes in global climate, are not well understood.

Is this accepting Theory B or not?


Now onto the alphabetical order..

Climatic Conditions for modelling the Northern Hemisphere ice sheets throughout the ice age cycle, Abe-Ouchi, Segawa & Saito (2007)

To explain why the ice sheets in the Northern Hemisphere grew to the size and extent that has been observed, and why they retreated quickly at the termination of each 100 kyr cycle is still a challenge (Tarasov and Peltier, 1997a; Berger et al., 1998; Paillard, 1998; Paillard and Parrenin, 2004). Although it is now broadly accepted that the orbital variations of the Earth influence climate changes (Milankovitch, 1930; Hays et al., 1976; Berger, 1978), the large amplitude of the ice volume changes and the geographical extent need to be reproduced by comprehensive models which include nonlinear mechanisms of ice sheet dynamics (Raymo, 1997; Tarasov and Peltier, 1997b; Paillard, 2001; Raymo et al., 2006).

The papers cited for this broad agreement are Hays et al 1976 once again. And Berger 1978 who says:

It is not the aim of this paper to draw definitive conclusions about the astronomical theory of paleoclimates but simply to provide geologists with accurate theoretical values of the earth’s orbital elements and insolation..

Berger does go on to comment on eccentricity:

Berger 1978

And this again simply notes that the period for eccentricity is “similar” to the period for the ice age terminations.

Theory B with no support.


Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume, Abe-Ouchi, Saito, Kawamura, Raymo, Okuno, Takahashi & Blatter (2013)

Milankovitch theory proposes that summer insolation at high northern latitudes drives the glacial cycles, and statistical tests have demonstrated that the glacial cycles are indeed linked to eccentricity, obliquity and precession cycles. Yet insolation alone cannot explain the strong 100,000-year cycle, suggesting that internal climatic feedbacks may also be at work. Earlier conceptual models, for example, showed that glacial terminations are associated with the build-up of Northern Hemisphere ‘excess ice’, but the physical mechanisms underpinning the 100,000-year cycle remain unclear.

The citations for the statistical tests are Lisiecki 2010 and Huybers 2011.

Huybers 2011 claims that obliquity and precession (not eccentricity) are linked to deglaciations. This is a development of his earlier, very interesting 2007 hypothesis (Glacial variability over the last two million years: an extended depth-derived agemodel, continuous obliquity pacing, and the Pleistocene progression – to which we will return) that obliquity is the prime factor (not necessarily the cause) in deglaciations.

Here is what Huybers says in his 2011 paper, Combined obliquity and precession pacing of late Pleistocene deglaciations:

The cause of these massive shifts in climate remains unclear not for lack of models, of which there are now over thirty, but for want of means to choose among them. Previous statistical tests have demonstrated that obliquity paces the 100-kyr glacial cycles [citations are his 2005 paper with Carl Wunsch and his 2007 paper], helping narrow the list of viable mechanisms, but have been inconclusive with respect to precession (that is, P > 0.05) because of small sample sizes and uncertain timing..

In Links between eccentricity forcing and the 100,000-year glacial cycle, (2010), Lisiecki says:

Variations in the eccentricity (100,000 yr), obliquity (41,000 yr) and precession (23,000 yr) of Earth’s orbit have been linked to glacial–interglacial climate cycles. It is generally thought that the 100,000-yr glacial cycles of the past 800,000 yr are a result of orbital eccentricity [1–4]. However, the eccentricity cycle produces negligible 100-kyr power in seasonal or mean annual insolation, although it does modulate the amplitude of the precession cycle.

Alternatively, it has been suggested that the recent glacial cycles are driven purely by the obliquity cycle [5–7]. Here I use statistical analyses of insolation and the climate of the past five million years to characterize the link between eccentricity and the 100,000-yr glacial cycles. Using cross-wavelet phase analysis, I show that the relative phase of eccentricity and glacial cycles has been stable since 1.2 Myr ago, supporting the hypothesis that 100,000-yr glacial cycles are paced [8–10] by eccentricity [4,11]. However, I find that the time-dependent 100,000-yr power of eccentricity has been anticorrelated with that of climate since 5 Myr ago, with strong eccentricity forcing associated with weaker power in the 100,000-yr glacial cycle.

I propose that the anticorrelation arises from the strong precession forcing associated with strong eccentricity forcing, which disrupts the internal climate feedbacks that drive the 100,000-yr glacial cycle. This supports the hypothesis that internally driven climate feedbacks are the source of the 100,000-yr climate variations.

So she accepts that Theory B is generally accepted, acknowledges that some Theory C Family advocates are out there, and provides a new hybrid solution of her own.

References for the orbital eccentricity hypothesis [1–4] include Hays et al 1976 and Raymo 1997, cited above. However, Raymo didn’t think it had been demonstrated prior to her 1997 paper, in which she introduces the hypothesis that the driver is primarily ice sheet size, obliquity and precession, modulated by eccentricity.

References for the obliquity hypothesis [5-7] include the Huybers & Wunsch 2005 and Huybers 2007 covered just before this reference.

So in summary – going back to how we dragged up these references – Abe-Ouchi and co-authors provide two citations in support of the statistical link between orbital variations and deglaciation. One citation claims primarily obliquity with maybe a place for precession – no link to eccentricity. Another citation claims a new theory for eccentricity as a phase-locking mechanism to an internal climate process.

These are two mutually exclusive ideas. But at least both papers attempted to prove their (exclusive) ideas.


Equatorial insolation: from precession harmonics to eccentricity frequencies, Berger, Loutre, & Mélice (2006):

Since the paper by Hays et al. (1976), spectral analyses of climate proxy records provide substantial evidence that a fraction of the climatic variance is driven by insolation changes in the frequency ranges of obliquity and precession variations. However, it is the variance components centered near 100 kyr which dominate most Upper Pleistocene climatic records, although the amount of insolation perturbation at the eccentricity driven periods close to 100-kyr (mainly the 95 kyr- and 123 kyr-periods) is much too small to cause directly a climate change of ice-age amplitude. Many attempts to find an explanation to this 100-kyr cycle in climatic records have been made over the last decades.

“Still a mystery”.


Multistability and hysteresis in the climate-cryosphere system under orbital forcing, Calov & Ganopolski (2005)

In spite of considerable progress in studies of past climate changes, the nature of vigorous climate variations observed during the past several million years remains elusive. A variety of different astronomical theories, among which the Milankovitch theory [Milankovitch, 1941] is the best known, suggest changes in Earth’s orbital parameters as a driver or, at least, a pacemaker of glacial-interglacial climate transitions. However, the mechanisms which translate seasonal and strongly latitude-dependent variations in the insolation into the global-scale climate shifts between glacial and interglacial climate states are the subject of debate.

“Still a mystery”


Ice Age Terminations, Cheng, Edwards, Broecker, Denton, Kong, Wang, Zhang, Wang (2009)

The ice-age cycles have been linked to changes in Earth’s orbital geometry (the Milankovitch or Astronomical theory) through spectral analysis of marine oxygen-isotope records (3), which demonstrate power in the ice-age record at the same three spectral periods as orbitally driven changes in insolation. However, explaining the 100 thousand-year (ky) recurrence period of ice ages has proved to be problematic because although the 100-ky cycle dominates the ice-volume power spectrum, it is small in the insolation spectrum. In order to understand what factors control ice age cycles, we must know the extent to which terminations are systematically linked to insolation and how any such linkage can produce a nonlinear response by the climate system at the end of ice ages.

“Still a mystery”. This paper claims (their new work) that terminations are all about high latitude NH insolation. They state, for the hypothesis of the paper:

In all four cases, observations are consistent with a classic Northern Hemisphere summer insolation intensity trigger for an initial retreat of northern ice sheets.

This is similar to Northern Hemisphere forcing of climatic cycles in Antarctica over the past 360,000 years, Kawamura et al (2007) – not cited here because they didn’t make a statement about “the problem so far”.


Orbital forcing and role of the latitudinal insolation/temperature gradient, Basil Davis & Simon Brewer (2009)

Orbital forcing of the climate system is clearly shown in the Earth’s record of glacial–interglacial cycles, but the mechanism underlying this forcing is poorly understood.

Not sure whether this is classified as “Still a mystery” or Theory B or Theory C Family.


Evidence for Obliquity Forcing of Glacial Termination II, Drysdale, Hellstrom, Zanchetta, Fallick, Sánchez Goñi, Couchoud, McDonald, Maas, Lohmann & Isola (2009)

During the Late Pleistocene, the period of glacial-to-interglacial transitions (or terminations) has increased relative to the Early Pleistocene [~100 thousand years (ky) versus 40 ky]. A coherent explanation for this shift still eludes paleoclimatologists (3). Although many different models have been proposed (4), the most widely accepted one invokes changes in the intensity of high-latitude Northern Hemisphere summer insolation (NHSI). These changes are driven largely by the precession of the equinoxes (5), which produces relatively large seasonal and hemispheric insolation intensity anomalies as the month of perihelion shifts through its ~23-ky cycle.

Their “widely accepted” theory is from the Theory C Family. This is a different theory from the “widely accepted” theory B. Perhaps both are “widely accepted”, hopefully by different groups of scientists.


The role of orbital forcing, carbon dioxide and regolith in 100 kyr glacial cycles, Ganopolski & Calov (2011)

The origin of the 100 kyr cyclicity, which dominates ice volume variations and other climate records over the past million years, remains debatable..

..One of the major challenges to the classical Milankovitch theory is the presence of 100 kyr cycles that dominate global ice volume and climate variability over the past million years (Hays et al., 1976; Imbrie et al., 1993; Paillard, 2001).

This periodicity is practically absent in the principal “Milankovitch forcing” – variations of summer insolation at high latitudes of the Northern Hemisphere (NH).

The eccentricity of Earth’s orbit does contain periodicities close to 100 kyr and the robust phase relationship between glacial cycles and 100-kyr eccentricity cycles has been found in the paleoclimate records (Hays et al., 1976; Berger et al., 2005; Lisiecki, 2010). However, the direct effect of the eccentricity on Earth’s global energy balance is very small.

Moreover, eccentricity variations are dominated by a 400 kyr cycle which is also seen in some older geological records (e.g. Zachos et al., 1997), but is practically absent in the frequency spectrum of the ice volume variations for the last million years.

In view of this long-standing problem, it was proposed that the 100 kyr cycles do not originate directly from the orbital forcing but rather represent internal oscillations in the climate-cryosphere (Gildor and Tziperman, 2001) or climate-cryosphere-carbonosphere system (e.g. Saltzman and Maasch, 1988; Paillard and Parrenin, 2004), which can be synchronized (phase locked) to the orbital forcing (Tziperman et al., 2006).

Alternatively, it was proposed that the 100 kyr cycles result from the terminations of ice sheet buildup by each second or third obliquity cycle (Huybers and Wunsch, 2005) or each fourth or fifth precessional cycle (Ridgwell et al., 1999) or they originate directly from a strong, nonlinear, climate-cryosphere system response to a combination of precessional and obliquity components of the orbital forcing (Paillard, 1998).

“Still a mystery”.


Modeling the Climatic Response to Orbital Variations, Imbrie & Imbrie (1980)

This is not to say that all important questions have been answered. In fact, one purpose of this article is to contribute to the solution of one of the remaining major problems: the origin and history of the 100,000-year climatic cycle.

At least over the past 600,000 years, almost all climatic records are dominated by variance components in a narrow frequency band centered near a 100,000-year cycle (5-8, 12, 21, 38). Yet a climatic response at these frequencies is not predicted by the Milankovitch version of the astronomical theory – or any other version that involves a linear response (5, 6).

This paper was worth citing because the first author is a co-author of Hays et al 1976. For interest, let’s look at what they attempt to demonstrate in their paper. They take the approach of producing different (simple) models with orbital forcing, to try to reproduce the geological record:

The goal of our modeling effort has been to simulate the climatic response to orbital variations over the past 500 kyrs. The resulting model fails to simulate four important aspects of this record. It fails to produce sufficient 100k power; it produces too much 23k and 19k power; it produces too much 413k power and it loses its match with the record around the time of the last 413k eccentricity minimum..

All of these failures are related to a fundamental shortcoming in the generation of 100k power.. Indeed it is possible that no function will yield a good simulation of the entire 500 kyr record under consideration here, because nonorbitally forced high-frequency fluctuations may have caused the system to flip or flop in an unpredictable fashion. This would be an example of Lorenz’s concept of an almost intransitive system..

..Progress in this direction will indicate what long-term variations need to be explained within the framework of a stochastic model and provide a basis for estimating the degree of unpredictability in climate.


On the structure and origin of major glaciation cycles, Imbrie, Boyle, Clemens, Duffy, Howard, Kukla, Kutzbach, Martinson, McIntyre, Mix, Molfino, Morley, Peterson, Pisias, Prell, Raymo, Shackleton & Toggweiler (1992)

It is now widely believed that these astronomical influences, through their control of the seasonal and latitudinal distribution of incident solar radiation, either drive the major climate cycles externally or set the phase of oscillations that are driven internally..

..In this paper we concentrate on the 23-kyr and 41- kyr cycles of glaciation. These prove to be so strongly correlated with large changes in seasonal radiation that we regard them as continuous, essentially linear responses to the Milankovitch forcing. In a subsequent paper we will remove these linearly forced components from each time series and examine the residual response. The residual response is dominated by a 100-kyr cycle, which has twice the amplitude of the 23- and 41-kyr cycles combined. In the band of periods near 100 kyr, variations in radiation correlated with climate are so small, compared with variations correlated with the two shorter climatic cycles, that the strength of the 100-kyr climate cycle must result from the channeling of energy into this band by mechanisms operating within the climate system itself.

In Part 2, Imbrie et al 1993 (same authors), they highlight in more detail the problem of explaining the 100 kyr period:

1. One difficulty in finding a simple Milankovitch explanation is that the amplitudes of all 100-kyr radiation signals are very small [Hays et al., 1976]. As an example, the amplitude of the 100-kyr radiation cycle at June 65N (a signal often used as a forcing in Milankovitch theories) is only 2W/m² (Figure 1). This is 1 order of magnitude smaller than the same insolation signal in the 23- and 41- kyr bands, yet the system’s response in these two bands combined has about half the amplitude observed at 100 kyr.

2. Another fundamental difficulty is that variations in eccentricity are not confined to periods near 100 kyr. In fact, during the late Pleistocene, eccentricity variations at periods near 100 kyr are of the same order of magnitude as those at 413 kyr.. yet the d18O record for this time interval has no corresponding spectral peak near 400 kyr..

3. The high coherency observed between 100 kyr eccentricity and d18O signals is an average that hides significant mismatches, notably about 400 kyrs ago.

Their proposed solution:

In our model, the coupled system acts as a nonlinear amplifier that is particularly sensitive to eccentricity-driven modulations in the 23,000-year sea level cycle. During an interval when sea level is forced upward from a major low stand by a Milankovitch response acting either alone or in combination with an internally driven, higher-frequency process, ice sheets grounded on continental shelves become unstable, mass wasting accelerates, and the resulting deglaciation sets the phase of one wave in the train of 100 kyr oscillations.

This doesn’t really appear to be Theory B.


Orbital forcing of Arctic climate: mechanisms of climate response and implications for continental glaciation, Jackson & Broccoli (2003)

The growth and decay of terrestrial ice sheets during the Quaternary ultimately result from the effects of changes in Earth’s orbital geometry on climate system processes. This link is convincingly established by Hays et al. (1976) who find a correlation between variations of terrestrial ice volume and variations in Earth’s orbital eccentricity, obliquity, and longitude of the perihelion.

Hays et al 1976. Theory B with no support.


A causality problem for Milankovitch, Karner & Muller (2000)

We can conclude that the standard Milankovitch insolation theory does not account for the terminations of the ice ages. That is a serious and disturbing conclusion by itself. We can conclude that models that attribute the terminations to large insolation peaks (or, equivalently, to peaks in the precession parameter), such as the recent one by Raymo (23), are incompatible with the observations.

I’ll take this as “Still a mystery”.


Linear and non-linear response of late Neogene glacial cycles to obliquity forcing and implications for the Milankovitch theory, Lourens, Becker, Bintanja, Hilgen, Tuenter, van de Wal & Ziegler (2010)

Through the spectral analyses of marine oxygen isotope (d18O) records it has been shown that ice-sheets respond both linearly and non-linearly to astronomical forcing.

References in support of this statement include Imbrie et al 1992 & Imbrie et al 1993 that we reviewed above, and Pacemaking the Ice Ages by Frequency Modulation of Earth’s Orbital Eccentricity, JA Rial (1999):

The theory finds support in the fact that the spectra of the d18O records contain some of the same frequencies as the astronomical variations (2– 4), but a satisfactory explanation of how the changes in orbital eccentricity are transformed into the 100-ky quasi-periodic fluctuations in global ice volume indicated by the data has not yet been found (5).

For interest, the claim for the new work in this paper:

Evidence from power spectra of deep-sea oxygen isotope time series suggests that the climate system of Earth responds nonlinearly to astronomical forcing by frequency modulating eccentricity-related variations in insolation. With the help of a simple model, it is shown that frequency modulation of the approximate 100,000-year eccentricity cycles by the 413,000-year component accounts for the variable duration of the ice ages, the multiple-peak character of the time series spectra, and the notorious absence of significant spectral amplitude at the 413,000-year period. The observed spectra are consistent with the classic Milankovitch theories of insolation..

So if we consider the 3 references they provide in support of the “astronomical hypothesis”, the latest one says that a solution to the 100 kyr problem has not yet been found – of course, the 1999 paper gives it its own best shot. Rial (1999) clearly doesn’t think that Imbrie et al 1992 / 1993 solved the problem.

And, of course, Rial (1999) proposes a different solution to Imbrie et al 1992/1993.


Dynamics between order and chaos in conceptual models of glacial cycles, Takahito Mitsui & Kazuyuki Aihara, Climate Dynamics (2013)

Hays et al. (1976) presented strong evidence for astronomical theories of ice ages. They found the primary frequencies of astronomical forcing in the geological spectra of marine sediment cores. However, the dominant frequency in geological spectra is approximately 1/100 kyr⁻¹, although this frequency component is negligible in the astronomical forcing. This is referred to as the ‘100 kyr problem.’

However, the linear response cannot appropriately account for the 100 kyr periodicity (Hays et al. 1976).

Ghil (1994) explained the appearance of the 100 kyr periodicity as a nonlinear resonance to the combination tone 1/109 kyr⁻¹ between precessional frequencies 1/19 and 1/23 kyr⁻¹. Contrary to the linear resonance, the nonlinear resonance can occur even if the forcing frequencies are far from the internal frequency of the response system.

Benzi et al. (1982) proposed stochastic resonance as a mechanism of the 100 kyr periodicity, where the response to small external forcing is amplified by the effect of noise.

Tziperman et al. (2006) proposed that the timing of deglaciations is set by the astronomical forcing via the phase-locking mechanism.. De Saedeleer et al. (2013) suggested generalized synchronization (GS) to describe the relation between the glacial cycles and the astronomical forcing. GS means that there is a functional relation between the climate state and the state of the astronomical forcing. They also showed that the functional relation may not be unique for a certain model.

However, the nature of the relation remains to be elucidated.

“Still a mystery”.
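As an aside, the arithmetic behind the combination tone that Ghil invokes (quoted above) is easy to check. This is my own sketch, not anything from the papers: the difference of the two precessional frequencies lands close to a 1/109 kyr⁻¹ tone.

```python
# Difference ("combination") tone of the two main precessional frequencies.
# Ghil (1994) invokes a tone near 1/109 kyr^-1; this just checks the arithmetic.
f_19 = 1.0 / 19.0   # precessional frequency, cycles per kyr
f_23 = 1.0 / 23.0   # precessional frequency, cycles per kyr
combination_period = 1.0 / (f_19 - f_23)   # period of the difference tone, kyr
print(f"{combination_period:.2f} kyr")     # 109.25 kyr
```

So a nonlinear system forced at 19 kyr and 23 kyr can, in principle, respond at their ~109 kyr difference tone even though no ~100 kyr forcing exists.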


Glacial cycles and orbital inclination, Richard Muller & Gordon MacDonald, Nature (1995)

According to the Milankovitch theory, the 100 kyr glacial cycle is caused by changes in insolation (solar heating) brought about by variations in the eccentricity of the Earth’s orbit. There are serious difficulties with this theory: the insolation variations appear to be too small to drive the cycles and a strong 400 kyr modulation predicted by the theory is not present..

We suggest that a radical solution is necessary to solve these problems, and we propose that the 100 kyr glacial cycle is caused, not by eccentricity, but by a previously ignored parameter: the orbital inclination, the tilt of the Earth’s orbital plane..

“Still a mystery”, with the new solution of a member of the Theory C Family.


Terminations VI and VIII (∼ 530 and ∼ 720 kyr BP) tell us the importance of obliquity and precession in the triggering of deglaciations, F. Parrenin & D. Paillard (2012)

The main variations of ice volume of the last million years can be explained from orbital parameters by assuming climate oscillates between two states: glaciations and deglaciations (Parrenin and Paillard, 2003; Imbrie et al., 2011) (or terminations). An additional combination of ice volume and orbital parameters seems to form the trigger of a deglaciation, while only orbital parameters seem to play a role in the triggering of glaciations. Here we present an optimized conceptual model which realistically reproduces ice volume variations during the past million years and in particular the timing of the 11 canonical terminations. We show that our model loses sensitivity to initial conditions only after ∼ 200 kyr at maximum: the ice volume observations form a strong attractor. Both obliquity and precession seem necessary to reproduce all 11 terminations and both seem to play approximately the same role.

Note that eccentricity variations are not cited as the cause.

The support for orbital parameters explaining the ice age glaciation/deglaciation are two papers. First, Parrenin & Paillard: Amplitude and phase of glacial cycles from a conceptual model (2003):

Although we find astronomical frequencies in almost all paleoclimatic records [1,2], it is clear that the climatic system does not respond linearly to insolation variations [3]. The first well-known paradox of the astronomical theory of climate is the ‘100 kyr problem’: the largest variations over the past million years occurred approximately every 100 kyr, but the amplitude of the insolation signal at this frequency is not significant. Although this problem remains puzzling in many respects, multiple equilibria and thresholds in the climate system seem to be key notions to explain this paradoxical frequency.

Their solution:

To explain these paradoxical amplitude and phase modulations, we suggest here that deglaciations started when a combination of insolation and ice volume was large enough. To illustrate this new idea, we present a simple conceptual model that simulates the sea level curve of the past million years with very realistic amplitude modulations, and with good phase modulations.

The other paper cited in support of an astronomical solution is A phase-space model for Pleistocene ice volume, Imbrie, Imbrie-Moore & Lisiecki, Earth and Planetary Science Letters (2011)

Numerous studies have demonstrated that Pleistocene glacial cycles are linked to cyclic changes in Earth’s orbital parameters (Hays et al., 1976; Imbrie et al., 1992; Lisiecki and Raymo, 2007); however, many questions remain about how orbital cycles in insolation produce the observed climate response. The most contentious problem is why late Pleistocene climate records are dominated by 100-kyr cyclicity.

Insolation changes are dominated by 41-kyr obliquity and 23-kyr precession cycles whereas the 100-kyr eccentricity cycle produces negligible 100-kyr power in seasonal or mean annual insolation. Thus, various studies have proposed that 100-kyr glacial cycles are a response to the eccentricity-driven modulation of precession (Raymo, 1997; Lisiecki, 2010b), bundling of obliquity cycles (Huybers and Wunsch, 2005; Liu et al., 2008), and/or internal oscillations (Saltzman et al., 1984; Gildor and Tziperman, 2000; Toggweiler, 2008).

Their new solution:

We present a new, phase-space model of Pleistocene ice volume that generates 100-kyr cycles in the Late Pleistocene as a response to obliquity and precession forcing. Like Parrenin and Paillard, (2003), we use a threshold for glacial terminations. However, ours is a phase-space threshold: a function of ice volume and its rate of change. Our model is the first to produce an orbitally driven increase in 100-kyr power during the mid-Pleistocene transition without any change in model parameters.

Theory C Family – two (relatively) new papers (2003 & 2011) with similar theories are presented as support of the astronomical theory causing the ice ages. Note that the theory in Imbrie et al 2011 is not the 100 kyr eccentricity variation proposed by Hays, Imbrie and Shackleton 1976.


Coherence resonance and ice ages, Jon D. Pelletier, JGR (2003)

The processes and feedbacks responsible for the 100-kyr cycle of Late Pleistocene global climate change are still being debated. This paper presents a numerical model that integrates (1) long-wavelength outgoing radiation, (2) the ice-albedo feedback, and (3) lithospheric deflection within the simple conceptual framework of coherence resonance. Coherence resonance is a dynamical process that results in the amplification of internally generated variability at particular periods in a system with bistability and delay feedback..

..The 100-kyr cycle is a free oscillation in the model, present even in the absence of external forcing.

“Still a mystery” – with the new solution that is not astronomical forcing.


The 41 kyr world: Milankovitch’s other unsolved mystery, Maureen E. Raymo & Kerim Nisancioglu (2003)

All serious students of Earth’s climate history have heard of the ‘‘100 kyr problem’’ of Milankovitch orbital theory, namely the lack of an obvious explanation of the dominant 100 kyr periodicity in climate records of the last 800,000 years.

“Still a mystery” – except that Raymo thinks she has found the solution (see earlier)


Is the spectral signature of the 100 kyr glacial cycle consistent with a Milankovitch origin, Ridgwell, Watson & Raymo (1999)

Global ice volume proxy records obtained from deep-sea sediment cores, when analyzed in this way produce a narrow peak corresponding to a period of ~100 kyr that dominates the low frequency part of the spectrum. This contrasts with the spectrum of orbital eccentricity variation, often assumed to be the main candidate to pace the glaciations [Hays et al 1980], which shows two distinct peaks near 100 kyr and substantial power near the 413 kyr period.

Then their solution:

Milankovitch theory seeks to explain the Quaternary glaciations via changes in seasonal insolation caused by periodic changes in the Earth’s obliquity, orbital precession and eccentricity. However, recent high-resolution spectral analysis of d18O proxy climate records have cast doubt on the theory.. Here we show that the spectral signature of d18O records are entirely consistent with Milankovitch mechanisms in which deglaciations are triggered every fourth or fifth precessional cycle. Such mechanisms may involve the buildup of excess ice due to low summertime insolation at the previous precessional high.

So they don’t accept Theory B. They don’t claim the theory has been previously solved and they introduce a Theory C Family.


In defense of Milankovitch, Gerard Roe (2006) – we reviewed this paper in Fifteen – Roe vs Huybers:

The Milankovitch hypothesis is widely held to be one of the cornerstones of climate science. Surprisingly, the hypothesis remains not clearly defined despite an extensive body of research on the link between global ice volume and insolation changes arising from variations in the Earth’s orbit.

And despite his interesting efforts at solving the problem he states towards the end of his paper:

The Milankovitch hypothesis as formulated here does not explain the large rapid deglaciations that occurred at the end of some of the ice age cycles.

Was it still a mystery, or just not well defined? And from his new work, I’m not sure whether he thinks he has solved the reason for some ice age terminations, or whether terminations are still a mystery.


The 100,000-Year Ice-Age Cycle Identified and Found to Lag Temperature, Carbon Dioxide, and Orbital Eccentricity, Nicholas J. Shackleton (the Shackleton from Hays et al 1976), (2000)

It is generally accepted that this 100-ky cycle represents a major component of the record of changes in total Northern Hemisphere ice volume (3). It is difficult to explain this predominant cycle in terms of orbital eccentricity because “the 100,000-year radiation cycle (arising from eccentricity variations) is much too small in amplitude and too late in phase to produce the corresponding climatic cycle by direct forcing”

So the Hays, Imbrie & Shackleton 1976 Theory B is not correct.

He does state:

Hence, the 100,000-year cycle does not arise from ice sheet dynamics; instead, it is probably the response of the global carbon cycle that generates the eccentricity signal by causing changes in atmospheric carbon dioxide concentration.

Note that this is in opposition to the papers by Imbrie et al (2011) and Parrenin & Paillard (2003) that were cited by Parrenin & Paillard (2012) in support of the astronomical theory of the ice ages.


Consequences of pacing the Pleistocene 100 kyr ice ages by nonlinear phase locking to Milankovitch forcing, Tziperman, Raymo, Huybers & Wunsch (2006)

Hays et al. [1976] established that Milankovitch forcing (i.e., variations in orbital parameters and their effect on the insolation at the top of the atmosphere) plays a role in glacial cycle dynamics. However, precisely what that role is, and what is meant by ‘‘Milankovitch theories’’ remains unclear despite decades of work on the subject [e.g., Wunsch, 2004; Rial and Anaclerio, 2000]. Current views vary from the inference that Milankovitch variations in insolation drives the glacial cycle (i.e., the cycles would not exist without Milankovitch variations), to the Milankovitch forcing causing only weak climate perturbations superimposed on the glacial cycles. A further possibility is that the primary influence of the Milankovitch forcing is to set the frequency and phase of the cycles (e.g., controlling the timing of glacial terminations or of glacial inceptions). In the latter case, glacial cycles would exist even in the absence of the insolation changes, but with different timing.

“Still a mystery” – but now solved with a Theory C Family (in their paper).


Quantitative estimate of the Milankovitch-forced contribution to observed Quaternary climate change, Carl Wunsch (2004)

The so-called Milankovitch hypothesis, that much of inferred past climate change is a response to near- periodic variations in the earth’s position and orientation relative to the sun, has attracted a great deal of attention. Numerous textbooks (e.g., Bradley, 1999; Wilson et al., 2000; Ruddiman, 2001) of varying levels and sophistication all tell the reader that the insolation changes are a major element controlling climate on time scales beyond about 10,000 years.

A recent paper begins ‘‘It is widely accepted that climate variability on timescales of 10 kyrs to 10 kyrs is driven primarily by orbital, or so-called Milankovitch, forcing.’’ (McDermott et al., 2001). To a large extent, embrace of the Milankovitch hypothesis can be traced to the pioneering work of Hays et al. (1976), who showed, convincingly, that the expected astronomical periods were visible in deep-sea core records..

..The long-standing question of how the slight Milankovitch forcing could possibly force such an enormous glacial–interglacial change is then answered by concluding that it does not do so.

“Still a mystery” – Wunsch does not accept Theory B and at that time did not accept the Theory C Family (he later co-authored a Theory C Family paper with Huybers). I cited this before in Part Six – “Hypotheses Abound”.


Individual contribution of insolation and CO2 to the interglacial climates of the past 800,000 years, Qiu Zhen Yin & André Berger (2012)

Climate variations of the last 3 million years are characterized by glacial-interglacial cycles which are generally believed to be driven by astronomically induced insolation changes.

No citation is given for the claim. Of course I agree that it is “generally believed”. But is this theory B? Or theory C? Or are they not sure?


Summary of the Papers

Out of about 300 papers checked, I found 34 papers (I might have missed a few) with a statement on the major cause of the ice ages separate from what they attempted to prove in their paper. These 34 papers were reviewed, with a further handful of cited papers examined to see what support they offered for the claim of the paper in question.

In respect of “What has been demonstrated up until our paper” – I count:

  • 19 “still a mystery”
  • 9 propose theory B
  • 6 supporting theory C

I have question marks over my own classification of about 10 of these because they lack clarity on what they believe is the situation to date.

Of course, from the point of view of the papers reviewed each believes they have some solution for the mystery. That’s not primarily what I was interested in.

I wanted to see what all papers accept as the story so far, and what evidence they bring for this belief.

I found only one paper claiming theory B that attempted to produce any significant evidence in support.


Hays, Imbrie & Shackleton (1976) did not prove Theory B. They suggested it. Invoking “probably non-linearity” does not constitute proof for an apparent frequency correlation. Specifically, half an apparent frequency correlation – given that eccentricity has a 413 kyr component as well as a 100 kyr component.

Some physical mechanism is necessary. Of course, I’m certain Hays, Imbrie & Shackleton understood this (I’ve read many of their later papers).

Of the papers we reviewed, over half indicate that the solution is still a mystery. That is fine. I agree it is a mystery.

Some papers indicate that the theory is widely believed but not necessarily that they do. That’s probably fine. Although it is confusing for non-specialist readers of their paper.

Some papers cite Hays et al 1976 as support for theory B. This is amazing.

Some papers claim “astronomical forcing” and in support cite Hays et al 1976 plus a paper with a different theory from the Theory C Family. This is also amazing.

Some papers cite support for Theory C Family – an astronomical theory to explain the ice ages with a different theory than Hays et al 1976. Sometimes their cited papers align. However, between papers that accept something in the Theory C Family there is no consensus on which version of Theory C Family, and obviously therefore, on the papers which support it.

How can papers cite Hays et al for support of the astronomical theory of ice age inception/termination?

It seems to be a journal convention/requirement to put forward citations for just about every claim in a paper, even one the entire world has known from childhood:

The sun rises each day [see Kepler 1596; Newton 1687, Plato 370 BC]

Really? Newton didn’t actually prove it in his paper? Oh, you know what, I just had a quick look at the last few papers in my field and copied their citations so I could get on with putting forward my theory. Come on, we all know the sun rises every day, look out the window (unless you live in England). Anyway, so glad you called, let me explain my new theory, it solves all those other problems, I’ve really got something here..

Well, that might be part of the answer. It isn’t excusable, but introductions don’t have the focus they should have.

Why the Belief in Theory B?

This part I can’t answer. Lots of people have put forward theories; none is generally accepted. The reason for the ice age terminations is unknown – or known by a few people and not yet accepted by the climate science community.

Is it ok to accept something that everyone else seems to believe, even though they all actually hold different theories? Is it ok to accept something as proven, when it is not really proven, because it comes from a famous paper with 2500 citations?

Finally, the fact that most papers have some vague words at the start about the “orbital” or “astronomical” theory for the ice ages doesn’t mean that this theory has any support. Being scientific, being skeptical, means asking for evidence and definitely not accepting an idea just because “everyone else” appears to accept it.

I am sure people will take issue with me. In another blog I was told that scientists were just “dotting the i’s and crossing the t’s” and none of this was seriously in doubt. Apparently, I was following creationist tactics of selective and out-of-context quoting..

Well, I will be delighted and no doubt entertained to read these comments, but don’t forget to provide evidence for the astronomical theory of the ice ages.

Articles in this Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


Note 1: The temperature fluctuations measured in Antarctica are a lot smaller than Greenland but still significant and still present for similar periods. There are also some technical challenges with calculating the temperature change in Antarctica (the relationship between d18O and local temperature) that have been better resolved in Greenland.


In Thirteen – Terminator II we had a cursory look at the different “proxies” for temperature and ice volume/sea level. And we’ve considered some issues around dating of proxies.

There are two main proxies we have used so far to take a look back into the ice ages:

  • δ18O in deep ocean cores in the shells of foraminifera – to measure ice volume
  • δ18O in the ice in ice cores (Greenland and Antarctica) – to measure temperature

Now we want to take a closer look at the proxies themselves. It’s a necessary subject if we want to understand ice ages, because the proxies don’t actually measure what they might be assumed to measure. This is a separate issue from the dating: of ice; of gas trapped in ice; and of sediments in deep ocean cores.

If we take samples of ocean water, H2O, and measure the proportion of the oxygen isotopes, we find (Ferronsky & Polyakov 2012):

  • 16O – 99.757 %
  • 17O –   0.038%
  • 18O –   0.205%

There is another significant water isotope, Deuterium – aka, “heavy hydrogen” – where the water molecule is HDO, also written as ¹H²HO – instead of H2O.

The processes that affect ratios of HDO are similar to the processes that affect the ratios of H218O, and consequently either isotope ratio can provide a temperature proxy for ice cores. A value of δD equates, very roughly, to 10x a value of δ18O, so mentally you can use this ratio to convert from δ18O to δD (see note 1).

In Note 2 I’ve included some comments on the Dole effect, which is the relationship between the ocean isotopic composition and the atmospheric oxygen isotopic composition. It isn’t directly relevant to the discussion of proxies here, because the ocean is the massive reservoir of 18O and the amount in the atmosphere is very small in comparison (1/1000). However, it might be of interest to some readers and we will return to the atmospheric value later when looking at dating of Antarctic ice cores.

Terminology and Definitions

The 18O/16O isotope ratio of ocean water is about 2.005 ‰ (from the abundances above: 0.205 / 99.757 ≈ 0.002005). This ratio is adopted as a reference standard, known as Vienna Standard Mean Ocean Water (VSMOW). So with respect to VSMOW, the δ18O of ocean water = 0. It’s just a definition. Changes from the standard are shown as δ – delta, the Greek letter very commonly used in maths and physics to mean “change”.

The values of isotopes are usually expressed in terms of changes from the norm, that is, from the absolute standard. And because the changes are quite small they are expressed as parts per thousand = per mil = ‰, instead of percent, %.

So as δ18O changes from 0 (ocean water) to -50‰ (typically the lowest value of ice in Antarctica), the proportion of 18O goes from 0.20% (2.0‰) to 0.19% (1.9‰).

If the terminology is confusing think of the above example as a 5% change. What is 5% of 20? Answer is 1; and 20 – 1 = 19. So the above example just says if we reduce the small amount, 2 parts per thousand of 18O by 5% we end up with 1.9 parts per thousand.
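The delta notation can be captured in a couple of lines. A minimal sketch, using the VSMOW 18O/16O ratio as the reference value:

```python
# Delta notation: per-mil deviation of a sample's 18O/16O ratio
# from the VSMOW standard ratio.
R_VSMOW = 0.0020052  # 18O/16O ratio of Vienna Standard Mean Ocean Water

def delta18O(r_sample):
    """delta-18O in per mil relative to VSMOW."""
    return (r_sample / R_VSMOW - 1.0) * 1000.0

def ratio_from_delta(d):
    """Invert: recover the 18O/16O ratio from a delta value in per mil."""
    return R_VSMOW * (1.0 + d / 1000.0)

# Ocean water is the standard, so its delta is 0 by definition:
print(delta18O(R_VSMOW))        # 0.0
# Antarctic ice at -50 per mil has a ratio 5% lower than the ocean,
# i.e. roughly 1.9 parts per thousand instead of 2.0:
print(ratio_from_delta(-50.0))
```

This is just the worked "5% of 20" example above written out explicitly.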

Here is a graph that links the values together:

From Hoefs 2009

Figure 1

Fractionation, or Why Ice Sheets are So Light

We’ve seen this graph before – the δ18O (of ice) in Greenland (NGRIP) and Antarctica (EDML) ice sheets against time:

From EPICA 2006

Figure 2

Note that the values of δ18O from Antarctica (EDML – top line) through the last 150 kyrs are from about -40 to -52 ‰. And the values from Greenland (NGRIP – black line in middle section) are from about -32 to -44 ‰.

There are some standard explanations around – like this link – but I’m not sure the graphic alone quite explains it, unless you understand the subject already..

If we measure the 18O concentration of a body of water, then we measure the 18O concentration of the water vapor above it, we find that the water vapor value has 18O at about -10 ‰ compared with the body of water. We write this as δ18O = -10 ‰. That is, the water vapor is a little lighter, isotopically speaking, than the ocean water.

The processes (fractionation) that cause this are easy to reproduce in the lab:

  • during evaporation, the lighter isotopes evaporate preferentially
  • during precipitation, the heavier isotopes precipitate preferentially

(See note 3).

So let’s consider the journey of a parcel of water vapor evaporated somewhere near the equator. The water vapor is a little reduced in 18O (compared with the ocean) due to the evaporation process. As the parcel of air travels away from the equator it rises and cools and some of the water vapor condenses. The initial rain takes proportionately more 18O than is in the parcel – so the parcel of air gets depleted in 18O. It keeps moving away from the equator, the air gets progressively colder, it keeps raining out, and the further it goes the less the proportion of 18O remains in the parcel of air. By the time precipitation forms in polar regions the water or ice is very light isotopically, that is, δ18O is the most negative it can get.

As a very simplistic idea of water vapor transport, this explains why the ice sheets in Greenland and Antarctica have isotopic values that are very low in 18O. Let’s take a look at some data to see how well such a simplistic idea holds up..
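The parcel's journey described above is classic Rayleigh distillation. A minimal sketch, assuming a constant liquid–vapor fractionation factor α = 1.0099 (roughly the equilibrium value for 18O at moderate temperatures; in reality α increases as the air cools):

```python
def rayleigh_delta(delta0, f, alpha=1.0099):
    """delta-18O (per mil) of the vapor remaining in an air parcel
    after fraction f of the original vapor is left. The heavier
    isotope preferentially rains out, so the remaining vapor gets
    progressively lighter as f shrinks."""
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

# Vapor evaporated from the ocean starts around -10 per mil.
# As it rains out on the way poleward it becomes more depleted:
for f in (1.0, 0.5, 0.1, 0.01):
    print(f, round(rayleigh_delta(-10.0, f), 1))
```

Even with this crude constant-α assumption, by the time only ~1% of the vapor remains, its δ18O is down in the -50s ‰ – the right ballpark for Antarctic precipitation.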

The isotopic composition of precipitation:

From Gat 2010

Figure 3 – Click to Enlarge

We can see the broad result represented quite well – the further we are in the direction of the poles the lower the isotopic composition of precipitation.

In contrast, when we look at local results in some detail we don’t see such a tidy picture. Here are some results from Rindsberger et al (1990) from central and northern Israel:

From Rindsberger et al 1990

Figure 4

From Rindsberger et al 1990

Figure 5

The authors comment:

It is quite surprising that the seasonally averaged isotopic composition of precipitation converges to a rather well-defined value, in spite of the large differences in the δ value of the individual precipitation events which show a range of 12‰ in δ18O.. At Bet-Dagan.. from which we have a long history.. the amount weighted annual average is δ18O = –5.07 ‰ ± 0.62 ‰ for the 19 year period of 1965–86. Indeed the scatter of ± 0.6‰ in the 19-year long series is to a significant degree the result of a 4-year period with lower δ values, namely the years 1971–75 when the averaged values were δ18O = –5.7 ‰ ± 0.2 ‰. That period was one of worldwide climate anomalies. Evidently the synoptic pattern associated with the precipitation events controls both the mean isotopic values of the precipitation and its variability.

The seminal 1964 paper by Willi Dansgaard is well worth a read for a good overview of the subject:

As pointed out.. one cannot use the composition of the individual rain as a direct measure of the condensation temperature. Nevertheless, it has been possible to show a simple linear correlation between the annual mean values of the surface temperature and the δ18O content in high latitude, non-continental precipitation. The main reason is that the scattering of the individual precipitation compositions, caused by the influence of numerous meteorological parameters, is smoothed out when comparing average compositions at various locations over a sufficiently long period of time (a whole number of years).

The somewhat revised and extended correlation is shown in fig. 3..

From Dansgaard 1964

Figure 6

So we appear to have a nice tidy picture when looking at annual means, a little bit like the (article) figure 3 from Gat’s 2010 textbook.

Before “muddying the waters” a little, let’s have a quick look at ocean values.

Ocean δ18O

We can see that the ocean, as we might expect, is much more homogeneous, especially the deep ocean. Note that these results are δD (think: about 10x the value of δ18O):

From Ferronsky & Polyakov (2012)

Figure 7 – Click to enlarge

And some surface water values of δD (and also salinity), where we see a lot more variation, again as we might expect:

From Ferronsky & Polyakov 2012

Figure 8

If we do a quick back-of-the-envelope calculation, using the fact that the sea level change between the last glacial maximum (LGM) and the current interglacial was about 120 m, and that the average ocean depth is 3680 m, we expect a glacial–interglacial change in ocean δ18O of about 1.5 ‰.
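The back-of-the-envelope calculation is a simple isotopic mass balance. A sketch, assuming a mean ice-sheet δ18O of about –45 ‰ (the exact value is debated, as the Mix & Ruddiman discussion later in this article shows):

```python
def ocean_delta_change(sea_level_drop, ocean_depth, delta_ice):
    """Change in mean ocean delta-18O (per mil) when a layer of water
    'sea_level_drop' metres thick, with isotopic value delta_ice,
    is removed from an ocean of mean depth 'ocean_depth' and locked
    up in ice sheets. Removing isotopically light water makes the
    remaining ocean isotopically heavier (delta goes up)."""
    remaining = ocean_depth - sea_level_drop
    return -sea_level_drop * delta_ice / remaining

print(round(ocean_delta_change(120.0, 3680.0, -45.0), 2))  # ~1.5 per mil
```

Note the sign: the glacial ocean ends up *enriched* in 18O, consistent with the point made just below about ocean δ18O being higher during glacials.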

This is why the foraminifera near the bottom of the ocean, capturing 18O from the ocean, are recording ice volume, whereas the ice cores are recording atmospheric temperatures.

Note as well that during the glacial, with more ice locked up in ice sheets, the value of ocean δ18O will be higher. So colder atmospheric temperatures relate to lower values of δ18O in precipitation, but – due to the increase in ice, depleted in 18O – higher values of ocean δ18O.

Muddying the Waters

Hoefs 2009 gives a good summary of the different factors determining the isotopic composition of precipitation:

The first detailed evaluation of the equilibrium and nonequilibrium factors that determine the isotopic composition of precipitation was published by Dansgaard (1964). He demonstrated that the observed geographic distribution in isotope composition is related to a number of environmental parameters that characterize a given sampling site, such as latitude, altitude, distance to the coast, amount of precipitation, and surface air temperature.

Out of these, two factors are of special significance: temperature and the amount of precipitation. The best temperature correlation is observed in continental regions nearer to the poles, whereas the correlation with amount of rainfall is most pronounced in tropical regions as shown in Fig. 3.15.

The apparent link between local surface air temperature and the isotope composition of precipitation is of special interest mainly because of the potential importance of stable isotopes as palaeoclimatic indicators. The amount effect is ascribed to gradual saturation of air below the cloud, which diminishes any shift to higher δ18O-values caused by evaporation during precipitation.

[Emphasis added]

From Hoefs 2009

Figure 9

The points that Hoefs makes indicate some of the problems relating to using δ18O as the temperature proxy. We have competing influences that depend on the source and journey of the air parcel responsible for the precipitation. What if circulation changes?

For readers who have followed the past discussions here on water vapor (e.g., see Clouds & Water Vapor – Part Two) this is a similar kind of story. With water vapor, there is a very clear relationship between ocean temperature and absolute humidity, so long as we consider the boundary layer. But what happens when the air rises high above that layer? Then the amount of water vapor at any location in the atmosphere depends on the past journey of that air – and therefore on the large-scale circulation, and on changes in that circulation.

The same question arises with isotopes and precipitation.

The ubiquitous Jean Jouzel and his colleagues (including Willi Dansgaard) from their 1997 paper:

In Greenland there are significant differences between temperature records from the East coast and the West coast which are still evident in 30 yr smoothed records. The isotopic records from the interior of Greenland do not appear to follow consistently the temperature variations recorded at either the east coast or the west coast..

This behavior may reflect the alternating modes of the North Atlantic Oscillation..

They [simple models] are, however, limited to the study of idealized clouds and cannot account for the complexity of large convective systems, such as those occurring in tropical and equatorial regions. Despite such limitations, simple isotopic models are appropriate to explain the main characteristics of δD and δ18O in precipitation, at least in middle and high latitudes where the precipitation is not predominantly produced by large convective systems.

Indeed, their ability to correctly simulate the present-day temperature-isotope relationships in those regions has been the main justification of the standard practice of using the present day spatial slope to interpret the isotopic data in terms of records of past temperature changes.

Notice that, at least for Antarctica, data and simple models agree only with respect to the temperature of formation of the precipitation, estimated by the temperature just above the inversion layer, and not with respect to the surface temperature, which owing to a strong inversion is much lower..

Thus one can easily see that using the spatial slope as a surrogate of the temporal slope strictly holds true only if the characteristics of the source have remained constant through time.

[Emphases added]

If all the precipitation occurs during warm summer months, for example, the “annual δ18O” will naturally reflect a temperature warmer than Ts [annual mean]..

If major changes in seasonality occur between climates, such as a shift from summer-dominated to winter- dominated precipitation, the impact on the isotope signal could be large..it is the temperature during the precipitation events that is imprinted in the isotopic signal.

Second, the formation of an inversion layer of cold air up to several hundred meters thick over polar ice sheets makes the temperature of formation of precipitation warmer than the temperature at the surface of the ice sheet. Inversion forms under a clear sky.. but even in winter it is destroyed rapidly if thick cloud moves over a site..

As a consequence of precipitation intermittency and of the existence of an inversion layer, the isotope record is only a discrete and biased sampling of the surface temperature and even of the temperature at the atmospheric level where the precipitation forms. Current interpretation of paleodata implicitly assumes that this bias is not affected by climate change itself.

Now onto the oceans, surely much simpler, given the massive well-mixed reservoir of 18O?

Mix & Ruddiman (1984):

The oxygen-isotopic composition of calcite is dependent on both the temperature and the isotopic composition of the water in which it is precipitated

..Because he [Shackleton] analyzed benthonic, instead of planktonic, species he could assume minimal temperature change (limited by the freezing point of deep-ocean water). Using this constraint, he inferred that most of the oxygen-isotope signal in foraminifera must be caused by changes in the isotopic composition of seawater related to changing ice volume, that temperature changes are a secondary effect, and that the isotopic composition of mean glacier ice must have been about -30 ‰.

This estimate has generally been accepted, although other estimates of the isotopic composition have been made by Craig (–17‰), Eriksson (–25‰), Weyl (–40‰) and Dansgaard & Tauber (≤ –30‰).

..Although Shackleton’s interpretation of the benthonic isotope record as an ice-volume/sea-level proxy is widely quoted, there is considerable disagreement between ice-volume and sea-level estimates based on δ18O and those based on direct indicators of local sea level. A change in δ18O of 1.6‰ at δ(ice) = –35‰ suggests a sea-level change of 165 m.

..In addition, the effect of deep-ocean temperature changes on benthonic isotope records is not well constrained. Benthonic δ18O curves with amplitudes up to 2.2 ‰ exist (Shackleton, 1977; Duplessy et al., 1980; Ruddiman and McIntyre, 1981) which seem to require both large ice-volume and temperature effects for their explanation.
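The numbers Mix & Ruddiman quote can be checked with the same mass balance run in reverse. A rough sketch (ignoring the small correction for the thinner glacial ocean, and using the 3680 m mean ocean depth from earlier in this article):

```python
def implied_sea_level_change(delta_change, ocean_depth, delta_ice):
    """Rough sea-level change (metres) implied by a change in mean
    ocean delta-18O, given the isotopic value of the water removed
    into ice sheets. Ignores the second-order correction for the
    reduced ocean volume."""
    return delta_change * ocean_depth / abs(delta_ice)

# 1.6 per mil change with ice at -35 per mil:
print(round(implied_sea_level_change(1.6, 3680.0, -35.0)))  # ~168 m
```

This lands close to the 165 m they quote, and shows why that figure sits uncomfortably against direct sea-level indicators of ~120 m.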

Many other heavyweights in the field have explained similar problems.

We will return to both of these questions in the next article.


Understanding the basics of isotopic changes in water and water vapor is essential to understand the main proxies for past temperatures and past ice volumes. Previously we have looked at problems relating to dating of the proxies, in this article we have looked at the proxies themselves.

There is good evidence that current values of isotopes in precipitation and ocean values give us a consistent picture that we can largely understand. The question about the past is more problematic.

I started looking seriously at proxies as a means to perhaps understand the discrepancies for key dates of ice age terminations between radiometric dating and ocean cores (see Thirteen – Terminator II). Sometimes the more you know, the less you understand..

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – comparing the results if we take the Huybers dataset and tie the last termination to the date implied by various radiometric dating

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


Isotopes of the Earth’s Hydrosphere, VI Ferronsky & VA Polyakov, Springer (2012)

Isotope Hydrology – A Study of the Water Cycle, Joel R Gat, Imperial College Press (2010)

Stable Isotope Geochemistry, Jochen Hoefs, Springer (2009)

Patterns of the isotopic composition of precipitation in time and space: data from the Israeli storm water collection program, M Rindsberger, Sh Jaffe, Sh Rahamim and JR Gat, Tellus (1990) – free paper

Stable isotopes in precipitation, Willi Dansgaard, Tellus (1964) – free paper

Validity of the temperature reconstruction from water isotopes in ice cores, J Jouzel, RB Alley, KM Cuffey, W Dansgaard, P Grootes, G Hoffmann, SJ Johnsen, RD Koster, D Peel, CA Shuman, M Stievenard, M Stuiver, J White, Journal of Geophysical Research (1997) – free paper

Oxygen Isotope Analyses and Pleistocene Ice Volumes, Mix & Ruddiman, Quaternary Research (1984)  – free paper

- and on the Dole effect, only covered in Note 2:

The Dole effect and its variations during the last 130,000 years as measured in the Vostok ice core, Michael Bender, Todd Sowers, Laurent Labeyrie, Global Biogeochemical Cycles (1994) – free paper

A model of the Earth’s Dole effect, Georg Hoffmann, Matthias Cuntz, Christine Weber, Philippe Ciais, Pierre Friedlingstein, Martin Heimann, Jean Jouzel, Jörg Kaduk, Ernst Maier-Reimer, Ulrike Seibt & Katharina Six, Global Biogeochemical Cycles (2004) – free paper

The isotopic composition of atmospheric oxygen Boaz Luz & Eugeni Barkan, Global Biogeochemical Cycles (2011) – free paper


Note 1: There is a relationship between δ18O and δD which is linked to the difference in vapor pressures between H2O and HDO in one case and H216O and H218O in the other case.

δD = 8 δ18O + 10 – known as the Global Meteoric Water Line.

The equation is more of a guide and real values vary sufficiently that I’m not really clear about its value. There are lengthy discussions of it and the variations from it in Ferronsky & Polyakov.
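As a sketch, here is how the Global Meteoric Water Line compares with the rough "10x" rule used in the main text (illustrative only, given the caveats above about how much real values scatter around the line):

```python
def deltaD_gmwl(delta18O):
    """delta-D (per mil) predicted by the Global Meteoric Water Line."""
    return 8.0 * delta18O + 10.0

def deltaD_rough(delta18O):
    """The mental-arithmetic version: delta-D is roughly 10x delta-18O."""
    return 10.0 * delta18O

# Compare the two over the range of values seen in ice cores:
for d18 in (-10.0, -30.0, -50.0):
    print(d18, deltaD_gmwl(d18), deltaD_rough(d18))
```

The two diverge noticeably at the light end (e.g. –390 ‰ vs –500 ‰ at δ18O = –50 ‰), which is why the 10x rule is only for getting your bearings, not for calculation.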

Note 2: The Dole effect

When we measure atmospheric oxygen, we find that its δ18O = 23.5 ‰ with respect to the oceans (VSMOW) – this is the Dole effect.

So oxygen in the atmosphere has a greater proportion of 18O than the ocean.


How do the atmosphere and ocean exchange oxygen? In essence, photosynthesis turns sunlight + water (H2O) + carbon dioxide (CO2) –> sugar + oxygen (O2).

Respiration turns sugar + oxygen –> water + carbon dioxide + energy

The isotopic composition of the water in photosynthesis affects the resulting isotopic composition in the atmospheric oxygen.

The reason the Dole effect exists is well understood, but the reason why the value comes out at 23.5‰ is still under investigation. This is because the result is the global aggregate of lots of different processes. So we might understand the individual processes quite well, but that doesn’t mean the global value can be calculated accurately.

It is also the case that δ18O of atmospheric O2 has varied in the past – as revealed first of all in the Vostok ice core from Antarctica.

Michael Bender and his colleagues had a go at calculating the value from first principles in 1994. As they explain (see below), although their result might seem quite close to the actual number, it is not a very successful result at all. Respiratory fractionation alone accounts for about 20‰, so the remaining processes should take the value from 20‰ to the observed 23.5‰ – but their calculation only gets to 20.8‰.

Bender et al 1994:

The δ18O of O2.. reflects the global responses of the land and marine biospheres to climate change, albeit in a complex manner.. The magnitude of the Dole effect mainly reflects the isotopic composition of O2 produced by marine and terrestrial photosynthesis, as well as the extent to which the heavy isotope is discriminated against during respiration..

..Over the time period of interest here, photosynthesis and respiration are the most important reactions producing and consuming O2. The isotopic composition of O2 in air must therefore be understood in terms of isotope fractionation associated with these reactions.

The δ18O of O2 produced by photosynthesis is similar to that of the source water. The δ18O of O2 produced by marine plants is thus 0‰. The δ18O of O2 produced on the continents has been estimated to lie between +4 and +8‰. These elevated δ18O values are the result of elevated leaf water δ18O values resulting from evapotranspiration.

..The calculated value for the Dole effect is then the productivity-weighted values of the terrestrial and marine Dole effects minus the stratospheric diminution: +20.8‰. This value is considerably less than observed (23.5‰). The difference between the expected value and the observed value reflects errors in our estimates and, conceivably, unrecognized processes.
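Bender et al's bookkeeping amounts to a productivity-weighted average of terrestrial and marine contributions, minus a stratospheric correction. A sketch of the arithmetic – the component values below are placeholders chosen only to land on the paper's headline +20.8‰, NOT the paper's actual component numbers:

```python
def dole_effect(de_terrestrial, de_marine, frac_terrestrial,
                stratospheric_diminution):
    """Productivity-weighted Dole effect (per mil): weight the
    terrestrial and marine contributions by their share of global
    O2 production, then subtract the stratospheric diminution."""
    weighted = (frac_terrestrial * de_terrestrial
                + (1.0 - frac_terrestrial) * de_marine)
    return weighted - stratospheric_diminution

# Placeholder inputs (illustrative only) reproducing the calculated
# +20.8 per mil, versus the observed 23.5 per mil:
print(dole_effect(de_terrestrial=22.0, de_marine=20.0,
                  frac_terrestrial=0.6, stratospheric_diminution=0.4))
```

The structure of the calculation makes the problem clear: the weighted average of the component effects falls well short of 23.5‰, whatever reasonable weights are used – hence "errors in our estimates and, conceivably, unrecognized processes".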

Then they assess the Vostok record, where the main question is less why the Dole effect apparently varies with precession (period of about 20 kyrs) than why the variation is so small. After all, if marine and terrestrial biosphere changes from interglacial to glacial are significant, then surely those changes would show up more strongly in the Dole effect:

Why has the Dole effect been so constant? Answering this question is impossible at the present time, but we can probably recognize the key influences..

They conclude:

Our ability to explain the magnitude of the contemporary Dole effect is a measure of our understanding of the global cycles of oxygen and water. A variety of recent studies have improved our understanding of many of the principles governing oxygen isotope fractionation during photosynthesis and respiration.. However, our attempt to quantitatively account for the Dole effect in terms of these principles was not very successful.. The agreement is considerably worse than it might appear given the fact that respiratory isotope fractionation alone must account for ~20‰ of the stationary enrichment of the 18O of O2 compared with seawater..

..[On the Vostok record] Our results show that variations in the Dole effect have been relatively small during most of the last glacial-interglacial cycle. These small changes are not consistent with large glacial increases in global oceanic productivity.

[Emphasis added]

Georg Hoffmann and his colleagues had another bash 10 years later and did a fair bit better:

The Earth’s Dole effect describes the isotopic 18O/16O-enrichment of atmospheric oxygen with respect to ocean water, amounting under today’s conditions to 23.5‰. We have developed a model of the Earth’s Dole effect by combining the results of three-dimensional models of the oceanic and terrestrial carbon and oxygen cycles with results of atmospheric general circulation models (AGCMs) with built-in water isotope diagnostics.

We obtain a range from 22.4‰ to 23.3‰ for the isotopic enrichment of atmospheric oxygen. We estimate a stronger contribution to the global Dole effect by the terrestrial relative to the marine biosphere in contrast to previous studies. This is primarily caused by a modeled high leaf water enrichment of 5–6‰. Leaf water enrichment rises by ~1‰ to 6–7‰ when we use it to fit the observed 23.5‰ of the global Dole effect.

Very recently, Luz & Barkan (2011), backed up by lots of new experimental work, produced a slightly closer estimate, with some revisions of the Hoffmann et al results:

Based on the new information on the biogeochemical mechanisms involved in the global oxygen cycle, as well as new and more precise experimental data on oxygen isotopic fractionation in various processes obtained over the last 15 years, we have reevaluated the components of the Dole effect. Our new observations on marine oxygen isotope effects, as well as new findings on photosynthetic fractionation by marine organisms, lead to the important conclusion that the marine, terrestrial and the global Dole effects are of similar magnitudes.

This result allows answering a long‐standing unresolved question on why the magnitude of the Dole effect of the last glacial maximum is so similar to the present value despite enormous environmental differences between the two periods. The answer is simple: if DEmar [marine Dole effect] and DEterr [terrestrial Dole effect] are similar, there is no reason to expect considerable variations in the magnitude of the Dole effect as the result of variations in the ratio terrestrial to marine O2 production.

Finally, the widely accepted view that the magnitude of the Dole effect is controlled by the ratio of land‐to‐sea productivity must be changed. Instead of the land‐sea control, past variations in the Dole effect are more likely the result of changes in low‐latitude hydrology and, perhaps, in structure of marine phytoplankton communities.

[Emphasis added]

Note 3:

Jochen Hoefs (2009):

Under equilibrium conditions at 25ºC, the fractionation factors for evaporating water are 1.0092 for 18O and 1.074 for D. However under natural conditions, the actual isotopic composition of water is more negative than the predicted equilibrium values due to kinetic effects.

The discussion of kinetic effects gets a little involved and I don’t think it is really necessary to understand – the values of isotopic fractionation during evaporation and condensation are well understood. The confounding factors around what the proxies really measure relate to the journey (i.e. temperature history) and mixing of the various air parcels, as well as the temperature of air relating to the precipitation event – is it the surface temperature, the inversion temperature, or both?


In the last article – Fifteen – Roe vs Huybers – we had a look at the 2006 paper by Gerard Roe, In defense of Milankovitch.

We compared the rate of change of ice volume – as measured in the Huybers 2007 dataset – with summer insolation at 65°N. The results were interesting: the two series correlated very well for the first 200 kyrs, then drifted out of phase. As a result the (Pearson) correlation over 500 kyrs was very low, but quite decent for the first 200 kyrs.

Without any further data we might assume that the results demonstrated that the dataset without “orbital tuning” – and a lack of objective radiometric dating – was drifting away from reality as time went on, and an “orbitally tuned” dataset was the best approach. We would definitely expect that older dates have more uncertainty, as errors accumulate when we use any kind of model for time vs depth.

However, in an earlier article we looked at more objective dates for Termination II (and also in the comments, at some earlier terminations). These dates were obtained via radiometric dating from a variety of locations and methods.

So I wondered:

What happens if we take a dataset like Huybers 2007 and “remap” it using agemarkers?

This is basically how most of the ice core datasets are constructed, although the methods are more sophisticated (see note 1).

For my rough and ready approach I simply provided a set of termination dates (halfway point of ice volume from peak glacial to peak interglacial) from both Huybers and from Winograd et al 1992. Then I remapped the timebase for the existing Huybers proxy data between each set of agemarkers.

It’s probably easier to show the before and after comparison than to explain the method further. Note the low point between 100 and 150 kyrs BP. This corresponds to less ice – it is the interglacial:


Figure 1

The method is basically a linear remapping. I’m sure there are better ways, but I don’t expect they would have a material impact on the outcome.
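For anyone wanting to reproduce the remapping, it can be sketched with numpy’s `interp`: ages between consecutive agemarkers are stretched or compressed so that each old marker lands on its new counterpart. The marker values below are hypothetical stand-ins, not the actual Huybers and Winograd termination dates:

```python
import numpy as np

def remap_timebase(t, old_markers, new_markers):
    """Piecewise-linear remapping: each old agemarker maps onto its new
    counterpart, and ages in between are linearly interpolated."""
    return np.interp(t, old_markers, new_markers)

# Hypothetical agemarkers in kyr BP (illustrative values only)
old_markers = np.array([0.0, 13.0, 128.0, 251.0])  # e.g. terminations in the proxy timebase
new_markers = np.array([0.0, 14.0, 136.0, 253.0])  # e.g. radiometric dates for the same events

t_proxy = np.arange(0.0, 252.0, 1.0)  # proxy samples at 1 kyr intervals
t_remapped = remap_timebase(t_proxy, old_markers, new_markers)
```

The proxy values themselves are untouched; only their assigned ages move.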

One important point (with my very simple method): the oldest agemarker we use can cause an inconsistency, because nothing constrains the dates between the last agemarker and the end of the record – which is why the first set below uses 270 kyrs.

T-III is dated by Winograd 1992 at 253 kyrs. So I picked a date shortly after that.

Here is the comparison of rate of change of ice volume with insolation, with the same conventions as in the last article. We can see that everything is nicely anti-correlated:


Figure 2 – Click to Expand

For comparison, the result (in the last article) from 0-200 kyrs BP without remapping the proxy dataset. We can see that everything is nicely correlated:


Figure 3 – Click to Expand

For the remapped data: correlation = -0.30. This is as negatively correlated to the insolation value as LR04 (an “orbitally-tuned” dataset) is positively correlated.

For interest I did the same exercise with a 0 – 200 kyr BP timebase. This means everything from 140 – 200 kyrs was not constrained by a revised T-III date. The result: correlation = 0. The interpretation is simple – the older data is not pulled out of alignment by a later objective T-III date, so there is a better match of insolation with rate of change of ice volume for this older data.


Is there a conclusion? It’s surely staring us in the face so is left as an exercise for the interested student.

I have a headache.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


In defense of Milankovitch, Gerard Roe, Geophysical Research Letters (2006) – free paper

Glacial variability over the last two million years: an extended depth-derived agemodel, continuous obliquity pacing, and the Pleistocene progression, Peter Huybers, Quaternary Science Reviews (2007) – free paper

Datasets for Huybers 2007 are here:

Continuous 500,000-Year Climate Record from Vein Calcite in Devils Hole, Nevada, Winograd, Coplen, Landwehr, Riggs, Ludwig, Szabo, Kolesar & Revesz, Science (1992) – paywall, but might be available with a free Science registration

Insolation data calculated from Jonathan Levine’s MATLAB program


Note 1:

Here is an extract from Parrenin et al 2007, The EDC3 chronology for the EPICA Dome C ice core:

In this article, we present EDC3, the new 800 kyr age scale of the EPICA Dome C ice core, which is generated using a combination of various age markers and a glaciological model. It is constructed in three steps.

First, an age scale is created by applying an ice flow model at Dome C. Independent age markers are used to control several poorly known parameters of this model (such as the conditions at the base of the glacier), through an inverse method.

Second, the age scale is synchronised onto the new Greenlandic GICC05 age scale over three time periods: the last 6 kyr, the last deglaciation, and the Laschamp event (around 41 kyr BP).

Third, the age scale is corrected in the bottom ∼500 m (corresponding to the time period 400–800 kyr BP), where the model is unable to capture the complex ice flow pattern..

From Parrenin et al 2007



A few people have asked about the fascinating 2006 paper by Gerard Roe, In defense of Milankovitch.

Roe’s paper appears to show an excellent match between the rate of change of ice volume and insolation at 65°N in June. I’ve been puzzled by the paper for a while, because if this value of insolation does successfully predict changes in ice volume then case closed. Except we struggle to match glacial terminations with insolation (see earlier posts like Part Thirteen, Twelve, Eleven – End of the Last Ice age).

And we should also expect to find a 100 kyr period in the 65°N insolation spectrum. But we don’t.

To be fair to Roe, he does state:

The Milankovitch hypothesis as formulated here does not explain the large rapid deglaciations that occurred at the end of some of the ice age cycles

[Emphasis added].

To be critical, no one seems to be disputing that ice sheets wax and wane with at least some attachment to 40 kyr (obliquity) and 20 kyr (precession) cycles – so what exactly does the paper demonstrate that is new? The missing bit of the puzzle is why ice ages start and end.

On the plus side, Roe points out:

Surprisingly, the [Milankovitch] hypothesis remains not clearly defined..

Which is the same point I made in Ghosts of Climates Past – Part Six – “Hypotheses Abound”.

One of the reasons I’ve spent quite a bit of time collecting and understanding datasets – see Part Fourteen – Concepts & HD Data – was for this kind of problem. Roe’s figure 2 spans half a page but covers 800,000 years. With the thick lines used I can’t actually tell if there is a match, and being poor at real statistics I want to see the data rather than just accept a correlation.

There’s not much point comparing SPECMAP (or LR04) with insolation because both of these datasets are “tuned” to summer 65°N insolation. If we find success then we accept that the producers of the dataset were competent in their objective. If we find lack of success we have to write to them with bad news. No one wants to do that.

Fortunately we have an interesting dataset from Peter Huybers (2007). This is an update of HW04 (Huybers & Wunsch 2004) which created a proxy for global ice volume from deep ocean cores without “orbital tuning”. It’s based on an autocorrelated sedimentation model, requiring that key turning points from many different cores all occur at the same time, and a key dateable event at around 800,000 years ago that shows up in most cores.

Some readers are wondering:

Why not use the ice cores you have been writing about?

Good question. The oxygen isotope (δ18O), or deuterium isotope (δD), in the ice is more a measure of local temperature than anything else (and it’s complicated). So Greenland and Antarctic ice cores provide lots of useful data, but not global ice volume. For that, we need to capture the δ18O stored in deep ocean sediments. The δ18O in deep ocean cores, to a first order, appears to be a measure of the amount of water locked up in global ice sheets. However, we have no easy way to objectively date the ocean cores, so some assumptions are needed.

Fortunately, Roe compared his theory with two datasets, the famous SPECMAP (warning, orbital tuning was used in the creation of this dataset) and HW04:


Figure 1

I downloaded the updated Huybers 2007 dataset, which is in 1 kyr intervals. I calculated the insolation at all latitudes and all days for the last 500 kyrs using Jonathan Levine’s MATLAB program, also in 1 kyr intervals. I used the values at 65°N on June 21st (day 172 – thanks Climateer, for helping me with the basics of calendar days!).

I calculated change in ice volume in a very simple way – (value at time t+1 – value at time t) divided by time change. I scaled the resulting dataset to the same range as the insolation anomalies – so that they plot nicely. And I plotted insolation anomaly = mean(insolation) – insolation:
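Here is a sketch of that calculation, using synthetic stand-in series (the real Huybers and insolation data are not reproduced here):

```python
import numpy as np

# Synthetic stand-ins at 1 kyr intervals (placeholders for the real series)
t = np.linspace(0, 10 * np.pi, 500)
ice_volume = np.sin(t)                 # placeholder for the Huybers 2007 proxy
insolation = 500.0 + 25.0 * np.cos(t)  # placeholder insolation, W/m^2

dt = 1.0  # kyr
d_ice = np.diff(ice_volume) / dt       # (value at t+1 - value at t) / time change

# Inverted anomaly, as plotted in the article: mean(insolation) - insolation
insol_anomaly = insolation.mean() - insolation

# Scale the derivative to the same range as the anomaly so the two plot together
scale = (insol_anomaly.max() - insol_anomaly.min()) / (d_ice.max() - d_ice.min())
d_ice_scaled = d_ice * scale
```

The scaling is purely cosmetic – it changes nothing about correlation, which is invariant to linear rescaling.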


Figure 2 – Click to Expand

The two sets of data look very similar over the last 500 kyrs. I assume that some minor changes, e.g., at about 370 kyrs, are due to dataset updates. Note that insolation anomaly is effectively inverted to help match trends by eye – high insolation should lead to negative change in ice volume and vice-versa.

For reference, here is my calculation on its own (click to get the large version):


Figure 3 – Click to Expand

I did a Pearson correlation between the two datasets and obtained 0.08. That is, very little correlation. This just tells us what we can see from looking at the graph – the two key values are in phase to begin with then move out of phase and back into phase by the end.

Correlation between 0-100 kyrs: 0.66 (great)
Correlation between 101-200 kyrs: 0.51 (great)
Correlation between 201-300 kyrs: -0.72 (wrong direction)
Correlation between 301-400 kyrs: -0.27 (wrong direction)
Correlation between 401-500 kyrs: 0.18 (wavering)

I also did a Spearman rank correlation (which correlates the ranks of the two datasets, making it resistant to outliers) = 0.09, and, just because I could, a Kendall correlation as well = 0.07.
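These correlations are easy to reproduce with numpy alone. A sketch with synthetic stand-in data (for real work, scipy.stats provides `pearsonr`, `spearmanr` and `kendalltau` with proper tie handling):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient via the correlation matrix."""
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    # Rank correlation = Pearson correlation of the ranks.
    # (No tie handling - adequate for continuous proxy data.)
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    return pearson(rank(x), rank(y))

# Synthetic stand-ins for the two series being compared
rng = np.random.default_rng(42)
x = rng.standard_normal(500)            # e.g. insolation anomaly
y = 0.5 * x + rng.standard_normal(500)  # e.g. scaled rate of change of ice volume

r = pearson(x, y)
rho = spearman(x, y)
```

A useful sanity check: Spearman is unchanged by any monotonic transform of either series, which is exactly why it resists outliers.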

I’m a bit of a statistics amateur so comparing datasets except by looking is not my forte. Perhaps a rookie mistake somewhere.

Then I checked lag correlations. The physical reasoning is that the deep ocean concentration of 18O will take at least a few thousand years to respond to ice volume changes, simply due to the slow circulation of the major ocean currents. The results show a better correlation at a lag of 35,000 years, but there is no physical reason for this – it is probably just a better fit to a dataset with an apparent slow phase drift across the period of record. At a physically meaningful ocean-current lag of a few thousand years the correlation is worse (anti-correlated):


Figure 4
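The lag scan behind a figure like this can be sketched as follows, with synthetic data in place of the real series:

```python
import numpy as np

def lag_correlation(x, y, lag):
    """Pearson correlation of x(t) with y(t + lag); lag in samples (1 kyr each)."""
    if lag > 0:
        return float(np.corrcoef(x[:-lag], y[lag:])[0, 1])
    if lag < 0:
        return float(np.corrcoef(x[-lag:], y[:lag])[0, 1])
    return float(np.corrcoef(x, y)[0, 1])

# Synthetic example: y lags x by 3 samples, plus a little noise
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.roll(x, 3) + 0.1 * rng.standard_normal(500)

# Scan a range of lags and pick the best
best_lag = max(range(0, 50), key=lambda k: lag_correlation(x, y, k))
```

Here `best_lag` recovers the built-in 3-sample shift; with the real datasets this same kind of scan is what shows the (physically implausible) better fit at a 35 kyr lag.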

On the plus side, the first 200 kyrs look quite impressive, including terminations:


Figure 5


Figure 6

This has got me wondering.

What do we notice from the data for the first 200 kyrs (figure 6)? Well, the last two terminations (check out the last few posts) are easily identified because the rate of change of ice volume in proportion to insolation is about four times its value when no termination takes place.

Forgetting about the small problem of the Southern Hemisphere lead in the last deglaciation (Part Eleven – End of the Last Ice age), there is something interesting going on here. Almost like a theory that is just missing one easily identified link, one piece of the jigsaw puzzle that just needs to be fitted in, and the new Nature paper is waiting..

Onto some details.. it seems that T-II, if marked by the various radiometric dating values we saw in Part Thirteen – Terminator II, would cause the 100–200 kyr values to move out of phase (the big black dip at about 125 kyrs would move about 15 kyrs to the left). So my next objective (see Sixteen – Roe vs Huybers II) is to set an age marker for Termination II from the radiometric dating values and “slide” the Huybers 2007 dataset to this and the current T-I dating. Also, the ice volume proxies recorded in deep ocean cores must lag real ice volume changes by some period, say 1 – 3 kyrs (see note 1). This helps the Roe hypothesis because the black curves move to the left.

Let’s see what happens with these changes.

And hopefully, sharp-eyed readers are going to identify opportunities for improvement in this article, as well as the missing piece of the puzzle that will lead to the coveted Nature paper..

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


In defense of Milankovitch, Gerard Roe, Geophysical Research Letters (2006) – free paper

Glacial variability over the last two million years: an extended depth-derived agemodel, continuous obliquity pacing, and the Pleistocene progression, Peter Huybers, Quaternary Science Reviews (2007) – free paper

How long to oceanic tracer and proxy equilibrium?, C Wunsch & P Heimbach, Quaternary Science Reviews (2008) – free paper

Datasets for Huybers 2007 are here:

Insolation data calculated from Jonathan Levine’s MATLAB program (just ask for this data in Excel or MATLAB format)


Note 1: See, for example, Wunsch & Heimbach 2008:

The various time scales for distribution of tracers and proxies in the global ocean are critical to the interpretation of data from deep-sea cores. To obtain some basic physical insight into their behavior, a global ocean circulation model, forced to least-square consistency with modern data, is used to find lower bounds for the time taken by surface-injected passive tracers to reach equilibrium. Depending upon the geographical scope of the injection, major gradients exist, laterally, between the abyssal North Atlantic and North Pacific, and vertically over much of the ocean, persisting for periods longer than 2000 years and with magnitudes bearing little or no relation to radiocarbon ages. The relative vigor of the North Atlantic convective process means that tracer events originating far from that location at the sea surface will tend to display abyssal signatures there first, possibly leading to misinterpretation of the event location. Ice volume (glacio-eustatic) corrections to deep-sea δ18O values, involving fresh water addition or subtraction, regionally at the sea surface, cannot be assumed to be close to instantaneous in the global ocean, and must be determined quantitatively by modelling the flow and by including numerous more complex dynamical interactions.



