
In previous posts we have seen – and critiqued – ideas about the causes of ice age inception and ice age termination being due to high latitude insolation. These ideas are known under the banner of “Milankovitch forcing”. Mostly I’ve presented the concept by plotting insolation data at particular latitudes, in one form or another. The insolation at different latitudes depends on obliquity and precession (as well as eccentricity).

Obliquity is the tilt of the earth’s axis – which varies over about 40,000 year cycles. Precession is the slow movement of the point of closest approach to the sun (perihelion) relative to the seasons – in particular, whether perihelion coincides with northern hemisphere summer – which varies over about a 20,000 year cycle. The effect of precession is modulated by the eccentricity of the earth’s orbit – which varies over about a 100,000 year cycle.

If the earth’s orbit were a perfect circle (eccentricity = 0) then “precession” would have no effect, because the earth would stay at a constant distance from the sun. As eccentricity increases, the impact of precession grows.
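To put a rough number on that: the earth–sun distance is a(1−e) at perihelion and a(1+e) at aphelion, and insolation goes as 1/r², so the perihelion-to-aphelion insolation ratio is ((1+e)/(1−e))². A minimal sketch (my own illustration, with present-day e ≈ 0.0167 and a high-eccentricity value for comparison):

```python
# Insolation ratio between perihelion and aphelion vs orbital eccentricity:
# flux ~ 1/r^2, with r_peri = a(1-e) and r_aph = a(1+e).
def peri_aph_insolation_ratio(e):
    return ((1 + e) / (1 - e)) ** 2

for e in [0.0, 0.0167, 0.05]:  # circle, present day, high eccentricity
    print(f"e = {e:.4f}: perihelion/aphelion insolation ratio = "
          f"{peri_aph_insolation_ratio(e):.3f}")
```

At e = 0 there is nothing for precession to work with; at present-day eccentricity the distance effect is about 7%; at e = 0.05 it is over 20%.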

How to understand these ideas better?

Peter Huybers has a nice explanation and presentation of obliquity and precession in his 2007 paper, along with some very interesting ideas that we will follow up in a later article.

The top graph shows the average insolation value by latitude and day of the year (over 2M years). The second graph shows the anomaly compared with the average at times of maximum obliquity. The third graph shows the anomaly compared with the average at times of maximum precession. The graphs to the right show the annual average of these values:

From Huybers (2007)


Figure 1

We can see immediately that times of maximum precession (bottom graph) have very little impact on annual averages (the right side graph). This is because the increase in, say, summer/autumn is cancelled out by the corresponding decrease in spring.

But we can also see that times of maximum obliquity (middle graph) DO impact annual averages (right side graph). Total energy is shifted from the tropics to the poles.
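This cancellation isn’t just visual. Averaged over a full orbit, the 1/r² distance effect drops out exactly, because the earth moves faster when it is closer to the sun (Kepler’s second law), so the annual-mean insolation at a latitude depends on obliquity and eccentricity but not on precession. Here is a minimal numerical sketch of that result – my own back-of-envelope calculation, not Huybers’ code:

```python
import numpy as np

S0 = 1361.0  # solar constant, W/m^2

def annual_mean_insolation(lat_deg, obliquity_deg, e, omega_deg):
    """Annual (time-)mean top-of-atmosphere insolation at a latitude.

    Integrates over true anomaly nu. By Kepler's second law dt ~ r^2 dnu,
    while instantaneous flux ~ (a/r)^2, so the distance factors cancel
    in the numerator and only the denominator depends on eccentricity.
    """
    lat = np.radians(lat_deg)
    eps = np.radians(obliquity_deg)
    omega = np.radians(omega_deg)
    nu = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)

    # solar longitude (season) = true anomaly + perihelion longitude
    lam = nu + omega
    decl = np.arcsin(np.sin(eps) * np.sin(lam))   # solar declination

    # daily-mean insolation factor for unit flux at r = a
    cosH0 = np.clip(-np.tan(lat) * np.tan(decl), -1.0, 1.0)
    H0 = np.arccos(cosH0)                          # sunset hour angle
    q = (H0 * np.sin(lat) * np.sin(decl)
         + np.cos(lat) * np.cos(decl) * np.sin(H0)) / np.pi

    # time-mean = S0 * mean(q dnu) / mean((r/a)^2 dnu) on a uniform grid
    r_over_a = (1.0 - e**2) / (1.0 + e * np.cos(nu))
    return S0 * q.mean() / (r_over_a**2).mean()

# Precession (omega) has no effect on the annual mean...
for w in (0.0, 90.0, 180.0):
    print(f"65N, obliquity 23.4, omega = {w:5.1f}: "
          f"{annual_mean_insolation(65.0, 23.4, 0.0167, w):.4f} W/m^2")

# ...but obliquity does: higher obliquity shifts energy poleward
print(annual_mean_insolation(65.0, 22.1, 0.0167, 0.0))
print(annual_mean_insolation(65.0, 24.5, 0.0167, 0.0))
```

Running this shows identical annual means for any perihelion position, while raising obliquity from 22.1° to 24.5° increases the 65°N annual mean (and slightly decreases the equatorial one) – exactly the pattern in the right-hand panels of Figure 1.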

I was trying, not very effectively, to explain some of this (with too many graphs) in Part Five – Obliquity & Precession Changes.

Here is another way to look at this concept. For the last 500 kyrs, I plotted obliquity, and precession modified by eccentricity (e sin ω), in the top graph, and in the bottom graph the annual insolation anomaly by latitude through time. WordPress kind of forces everything into 500-pixel-wide graphs, which doesn’t help too much. So click on it to get the HD version:


Figure 2 – Click to Expand

It is easy to see that the 40,000 year obliquity cycles correspond to high latitude (north & south) anomalies, which last for considerable periods. When obliquity is high, the northern and southern high latitude regions have an increase in annual average insolation. When obliquity is low, there is a decrease. If we look at the precession we don’t see a corresponding change in the annual average (because one season’s increase mostly cancels out the other season’s decrease).

Huybers’ paper has a lot more to it than that, and I recommend reading it. He has a 2M yr global proxy database that isn’t dependent on “orbital tuning” (note 1), and an interesting explanation and demonstration of obliquity as the dominant factor in “pacing” the ice ages. We will come back to his ideas.

In the meantime, I’ve been collecting various data sources. One big challenge in understanding ice ages is that the graphs in the various papers don’t allow you to zoom in on the period of interest. I thought I could help fix that by providing the data – and comparing the data – in high definition, instead of snapshots of 800,000 years squeezed into half the width of a standard pdf. It’s a work in progress…

The top graph (below) has two versions of temperature proxy. One is Huybers’ global proxy for ice volume (δ18O) from deep ocean cores, while the other is the local proxy for temperature (δD) from the Dome C core in Antarctica (75°S). This location is generally known as EDC, i.e., EPICA Dome C. The two datasets are laid out on their own timescales (more on timescales below):


Figure 3 – Click to Expand

The middle graph has CO2 and CH4 from Dome C. It’s amazing how tightly CO2 and CH4 are linked to the temperature proxies and to each other. (The CO2 data comes from Lüthi et al 2008, and the CH4 data from Loulergue et al 2008).

The bottom graph has obliquity and the annual insolation anomaly area-averaged over 70°S–90°S. Because we are looking at the annual insolation anomaly, this value is completely in phase with obliquity.

Why are the two datasets on the top graph out of alignment? I don’t know the full answer to this yet. Obviously the lag from the atmosphere to the deep ocean is part of the explanation.

Here is a 500 kyr comparison of LR04 (Lisiecki & Raymo 2005) and Huybers’ dataset – both deep ocean cores – but LR04 uses ‘orbital tuning’. The second graph has obliquity & precession (modified by eccentricity). The third graph has EDC from Antarctica:


Figure 4 – Click to Expand

Now we zoom in on the last 150 kyrs with two Antarctic cores on the top graph and NGRIP (North Greenland) on the bottom graph:


Figure 5 – Click to Expand

Here we see EDML (high resolution Antarctic core) compared with NGRIP (Greenland) over the last 150 kyrs (NGRIP only goes back to 123 kyrs) plus CO2 & CH4 from EDC – once again, the tight correspondence of CO2 and CH4 with the temperature records from both polar regions is amazing:


Figure 6 – Click to Expand

The comparison and linking of “abrupt climate change” in Greenland and Antarctica was covered in EPICA 2006 (note the timescale runs in the opposite direction to the graphs above):

from EPICA 2006


Figure 7 – Click to Expand


As most papers acknowledge, providing data on an accurate “assumption free” timescale is the Holy Grail of ice age analysis. No timescale is truly assumption-free, but a lot of progress has been made.

Huybers’ timescale is based primarily on: a) a sedimentation model; b) tying together the various identified inception & termination points for each of the proxies; c) the independently dated Brunhes–Matuyama reversal at 780,000 years ago.

The EDC (EPICA Dome ‘C’) timescale is based on a variety of age markers:

  • for the first 50 kyrs, by tying the data to Greenland (via high-resolution CH4 in both records), whose cores can be layer-counted thanks to much higher precipitation
  • volcanic eruptions
  • 10Be events which can be independently dated
  • ice flow models – how ice flows and compresses under pressure
  • finally, “orbital tuning”

EDC2 was the timescale on which the data was presented in the seminal 2004 EPICA paper. This 2004 paper presented the EDC core going back over 800 kyrs (previously the Vostok core was the longest, going back 400 kyrs). The EPICA 2006 paper presented the Dronning Maud Land core (EDML), which covered a shorter time (150 kyrs) but at higher resolution, allowing a better matchup between Antarctica and Greenland. This introduced the improved EDC3 timescale.

In a technical paper on dating, Parrenin et al 2007 show the differences between EDC3 and EDC2, and also between EDC3 and LR04.


Figure 8 – Click to Expand

So if you have data, you need to understand the timescale it is plotted on.

I have the EDC3 timescale in terms of depth, so next I’ll convert the EDC temperature proxy (δD) from EDC2 to EDC3 time. I also have dust vs depth for the EDC core – another fascinating variable, about 25 times stronger during peak glacials than during interglacials – which also needs converting to the EDC3 timescale. Other data includes some other atmospheric chemical components. Then I have NGRIP data (North Greenland) going back 123,000 years, but on the original 2004 timescale; it has since been re-laid onto the GICC05 timescale (still to find).

Very recently (mid 2013) a new Antarctic timescale was proposed – AICC2012 – which brings all of the Antarctic ice cores onto one common timescale.  See references below.


Calling Matlab gurus – plotting many items onto one graph has some benefits. Matlab is an excellent tool, but I haven’t yet figured out how to plot lots of data onto the same graph. If multiple data sources have the same x-series data and a similar y-range there is no problem. If I have two data sources with similar x values (but different x-series data) and completely different y values, I can use plotyy. But how about if I have five data sources, each with different-but-similar x-series and different y-values? How do I plot them on one graph, and display the multiple y-axes (easily)?
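For what it’s worth, here is how the same problem is usually handled in Python/matplotlib (a sketch with made-up stand-in data, not the real proxy series): each dataset gets its own y-axis via twinx, and the extra right-hand spines are offset so they don’t overlap.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, render to file
import matplotlib.pyplot as plt

# Five made-up series with different (but overlapping) x values
# and completely different y ranges - stand-ins for proxy datasets.
rng = np.random.default_rng(0)
series = []
for i in range(5):
    x = np.sort(rng.uniform(0, 500, 200))            # e.g. age in kyr
    y = np.cumsum(rng.normal(size=200)) * (10 ** i)  # wildly different scales
    series.append((x, y))

fig, host = plt.subplots(figsize=(10, 5))
axes = [host]
for i in range(1, 5):
    ax = host.twinx()  # shares x-axis, independent y-axis
    # push each additional y-axis further to the right of the plot area
    ax.spines["right"].set_position(("axes", 1.0 + 0.08 * (i - 1)))
    axes.append(ax)

colors = plt.cm.tab10.colors
for ax, (x, y), c in zip(axes, series, colors):
    ax.plot(x, y, color=c)
    ax.tick_params(axis="y", colors=c)  # color-match ticks to the line

host.set_xlabel("age (kyr BP)")
fig.savefig("five_axes.png", bbox_inches="tight")
```

The different x-series are no problem here – each axis plots its own (x, y) pair and they all share the host’s x limits.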


This article was intended to highlight obliquity and precession in a different and hopefully more useful way. And to begin to present some data in high resolution.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factors of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


Glacial variability over the last two million years: an extended depth-derived agemodel, continuous obliquity pacing, and the Pleistocene progression, Peter Huybers, Quaternary Science Reviews (2007) – free paper

Eight glacial cycles from an Antarctic ice core, EPICA community members, Nature (2004) – free paper

One-to-one coupling of glacial climate variability in Greenland and Antarctica,  EPICA Community Members, Nature (2006) – free paper

High-resolution carbon dioxide concentration record 650,000–800,000 years before present, Lüthi et al, Nature (2008)

Orbital and millennial-scale features of atmospheric CH4 over the past 800,000 years, Loulergue et al, Nature (2008)

A Pliocene-Pleistocene stack of 57 globally distributed benthic δ18O records, Lorraine Lisiecki & Maureen E. Raymo, Paleoceanography (2005) – free paper

The EDC3 chronology for the EPICA Dome C ice core, Parrenin et al, Climate of the Past (2007) – free paper

An optimized multi-proxy, multi-site Antarctic ice and gas orbital chronology (AICC2012): 120–800 ka, L. Bazin et al, Climate of the Past (2013) – free paper

The Antarctic ice core chronology (AICC2012): an optimized multi-parameter and multi-site dating approach for the last 120 thousand years, D. Veres et al, Climate of the Past (2013) – free paper


Note 1 – See for example Thirteen – Terminator II, under the heading What is the basis for the SPECMAP dating? 

It is important to understand the assumptions built into every ice age database.

Huybers 2007 continues the work of HW04 (Huybers & Wunsch 2004), which attempts to produce a global proxy database (a proxy for global ice volume) without any assumptions relating to the “Milankovitch theory”.


In Eleven – End of the Last Ice age we saw the sequence of events that led to the termination of the most recent ice age – Termination I.

The date of this event, the time when the termination began, was about 17.5–18.0 kyrs ago (note 1). We also saw that “rising solar insolation” couldn’t explain this. By way of illustration, I produced some plots in Pop Quiz: End of An Ice Age – all from the last 100 kyrs – and asked readers to identify which one coincided with Termination I.

But this simple graph of insolation at 65ºN on July 1st summarizes the problem for the “classic version” (see Part Six – “Hypotheses Abound”) of the “Milankovitch theory” – in simple terms, if solar insolation 18 kyrs ago caused the ice age to end, why didn’t the same or higher insolation at 100 kyrs, 80 kyrs, or from 60–30 kyrs cause the last ice age to end earlier:


Figure 1 

And for a more visual demonstration of solar insolation changes in time, take a look at the Hovmöller plots in the comments of Part Eleven.

The other problem for the Milankovitch theory of ice age termination is the fact that southern hemisphere temperatures rose in advance of global temperatures. So the South led the globe out of the ice age. This is hard to explain if the cause of the last termination was melting northern hemisphere ice sheets. Take a look at Eleven – End of the Last Ice age.

Now we’ve quickly reviewed Termination I, let’s take a look at Termination II. This is the end of the penultimate ice age.

The traditional Milankovitch theory says that peak high latitude solar insolation around 127 kyrs BP was the trigger for the massive deglaciation that ended that earlier ice age.

The well-known SPECMAP dating of sea-level/ice-volume vs time has Termination II at 128 ± 3 kyrs BP. All is good.

Or is it?

What is the basis for the SPECMAP dating?

The ice age records that have been most used and best known come from ocean sediments. These were the first long-dated climate records that went back hundreds of thousands of years.

How do they work and what do they measure?

Oxygen exists in the form of a few different stable isotopes. The most common is 16O, with 18O the next most common but much smaller in proportion. Water, aka H2O, also exists with both these isotopes, and has the handy behavior of evaporating and precipitating H218O at a different rate to H216O. The measurement is expressed as the deviation of the 18O/16O ratio from a standard ratio (the delta, or δ18O), in parts per thousand.
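The δ notation itself is simple – the fractional deviation of a sample’s 18O/16O ratio from a standard, multiplied by 1000. A minimal sketch (the VSMOW ocean-water standard ratio used here, ~0.0020052, is an approximate value from memory, not from any of the papers above):

```python
R_STANDARD = 0.0020052  # approx. 18O/16O ratio of the VSMOW standard

def delta_18O(r_sample, r_standard=R_STANDARD):
    """delta-18O in permil (parts per thousand) relative to the standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample whose 18O/16O ratio is 1% below the standard
print(delta_18O(R_STANDARD * 0.99))  # ~ -10 permil
```

Positive δ18O means enriched in the heavy isotope relative to the standard; negative means depleted – which is why ice-sheet growth (locking up light 16O-rich precipitation on land) pushes ocean δ18O up.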

The journey of water vapor – evaporating from the ocean, then precipitating – fractionates these isotopes, so the ratio in precipitation reflects both the starting ocean ratio and the local precipitation temperature.

The complex end result of these different process rates is that deep-sea benthic foraminifera record the δ18O of the deep ocean in their shells, and this δ18O ratio is mostly in proportion to global ice volume. The resulting deep-sea sediments are therefore a time-series record of ice volume. However, estimating sedimentation rates is not an exact science, and the rates are not necessarily constant in time.

As a result of lots of careful work by innumerable people over many decades, out popped a paper by Hays, Imbrie & Shackleton in 1976. This demonstrated that a lot of the recorded changes in ice volume happened at orbital frequencies of precession and obliquity (approximately every 20 kyrs and every 40 kyrs). But there was an even stronger signal – the start and end of ice ages – at approximately every 100 kyrs. This coincides roughly with changes in eccentricity of the earth’s orbit, not that anyone has a (viable) theory that links this change to the start and end of ice ages.

Now the clear signal of obliquity and precession in the record gives us the option of “tuning up” the record so that peaks in the record match orbital frequencies of precession and obliquity. We discussed the method of tuning in some past comments on a similar, but much later, dataset – the LR04 stack (thanks to Clive Best for highlighting it).

The method isn’t wrong, but we can’t confirm the timing of key events with a dataset where dates have been tuned to a specific theory.

Luckily, some new methods have come along.

Ice Core Dating

It’s been exciting times for the last twenty plus years in climate science for people who want to wear thick warm clothing and “get away from it all”.

Greenland and Antarctica have yielded a number of ice cores. Greenland now has a record that goes back 123,000 years (NGRIP). Antarctica now has a record that goes back 800,000 years (EDC, aka “Dome C”). Antarctica also has the Vostok ice core going back about 400,000 years, Dome Fuji going back 340,000 years, and Dronning Maud Land (aka DML or EDML), which is higher resolution but only goes back 150,000 years.

What do these ice cores measure and how is the dating done?

The ice cores measure temperature at time of snow deposition via the δ18O measurement discussed above (note 2), which in this case is not a measure of global ice volume but of air temperature. The gas trapped in bubbles in the ice cores gives us CO2 and CH4 concentrations. We also can measure dust deposition and all kinds of other goodies.

The first problem is that the gas is “younger” than the ice because it moves around until the snow compacts enough. So we need a model to calculate this, and there is some uncertainty about the difference in age between the ice and the gas.

The second problem is how to work out the age of the ice itself. Near the top of the core we can count annual layers. After sufficient time (a few tens of thousands of years) these layers can’t be distinguished any more, and instead we use models of ice flow physics. Then a few handy constraints arrive, like 10Be events that occurred about 50 kyrs BP. After ice flow physics and external events are exhausted, the data is constrained by “orbital tuning”, as with deep ocean cores.

Caves, Corals and Accurate Radiometric Dating

Neither deep sea cores, nor ice cores, give us much possibility of radiometric dating. But caves and corals do.

For newcomers to dating methods: if you have substance A that decays into substance B with a “half-life” that is accurately known, and you know exactly how much of substance A and B was there at the start (e.g. no possibility of additional amounts of A or B getting into the thing we want to measure), then you can very accurately calculate the age at which the substance was formed.
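In equation form: with no daughter present at formation, the parent decays as A(t) = A₀e^(−λt) and the daughter grows as B(t) = A₀ − A(t), so today’s measured B/A ratio gives t = (t½/ln 2)·ln(1 + B/A). A hedged sketch of the simplest textbook case (real U-Th dating is more involved, because 230Th itself decays; the numbers are illustrative, not from any paper discussed here):

```python
import math

def age_from_daughter_parent_ratio(ratio_BA, half_life):
    """Age of a closed system from the measured daughter/parent ratio.

    Assumes no daughter at formation and no gain or loss of either
    isotope since - the 'closed system' requirement in the text.
    """
    decay_const = math.log(2) / half_life
    return math.log(1.0 + ratio_BA) / decay_const

# When B/A = 1, exactly one half-life has elapsed (230Th half-life ~75 kyr)
print(age_from_daughter_parent_ratio(1.0, 75_000))
# When B/A = 3, two half-lives have elapsed
print(age_from_daughter_parent_ratio(3.0, 75_000))
```

The “closed system” assumption is the whole game: if extra parent or daughter leaks in or out, the computed age is wrong, which is why the deposition process matters so much.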

Basically it all comes down to how the deposition process works. Uranium–thorium dating has been successfully used to date calcite depositions in rock.

So, take a long section that has been continuously deposited, measure the δ18O (and 13C) at lots of points along the section, and take a number of samples and calculate the age along the section with radiometric dating. The subject of what exactly is being measured in the cores is complicated, but I will greatly over-simplify and say it revolves around two points:

  1. The actual amount of deposition, as not much water is available to create these depositions during extreme glaciation
  2. The variation of δ18O (and 13C), which to a first order depends on local air temperature

For people interested in more detail, I recommend McDermott 2004; some relevant extracts are in note 3.

Corals offer the possibility, via radiometric dating, of getting accurate dates for past sea level. The most important complicating variable is any depression or uplift of the earth’s crust.

Accurate dating of caves and coral has been a growth industry in the last twenty years with some interesting results.

Termination II

Winograd et al 1992 analyzed Devils Hole in Nevada (DH-11):

The Devils Hole δ18O time curve (fig 2) clearly displays the sawtooth pattern characteristic of marine δ18O records that have been interpreted to be the result of the waxing and waning of Northern Hemisphere ice sheets.. But what caused the δ18O variations in DH-11 shown on fig. 2? ..The δ18O variations in atmospheric precipitation are – to a first approximation – believed to reflect changes in average winter-spring surface temperature..

From Winograd et al 1992


Figure 2

Termination II occurs at 140±3 (2σ) ka in the DH-11 record, at 140± 15 ka in the Vostok record (14), and at 128 ± 3 ka in the SPECMAP record (13). (The uncertainty in the DH-11 record is in the 2σ uncertainties on the MS uranium-series dates; other dates and uncertainties are from the sources cited.) Termination III occurred at about 253 +/- 3 (2σ) ka in the DH11 record and at about 244 +/- 3 ka in the SPECMAP record. These differences.. are minimum values..

They compare summer insolation at 65ºN with SPECMAP, Devils Hole and the Vostok ice core on a handy graph:

Winograd et al 1992


Figure 3

Of course, not everyone was happy with this new information, and who knows what the isotope measurement really was a proxy for?

Slowey, Henderson & Curry 1996

A few years later, in 1996, Slowey, Henderson & Curry (not the famous Judith) made this statement from their research:

Our dates imply a timing and duration for substage 5e in substantial agreement with the orbitally tuned marine chronology. Initial direct U-Th dating of the marine δ18O record supports the theory that orbital variations are a fundamental cause of Pleistocene climate change.

[Emphasis added, likewise with all quotes].

Henderson & Slowey 2000

Then in 2000, the same Henderson & Slowey (sans Curry):

Milankovitch proposed that summer insolation at mid-latitudes in the Northern Hemisphere directly causes the ice-age climate cycles. This would imply that times of ice-sheet collapse should correspond to peaks in Northern Hemisphere June insolation.

But the penultimate deglaciation has proved controversial because June insolation peaks 127 kyr ago whereas several records of past climate suggest that change may have occurred up to 15kyr earlier.

There is a clear signature of the penultimate deglaciation in marine oxygen-isotope records. But dating this event, which is significantly before the 14C age range, has not been possible.

Here we date the penultimate deglaciation in a record from the Bahamas using a new U-Th isochron technique. After the necessary corrections for α-recoil mobility of 234U and 230Th and a small age correction for sediment mixing, the midpoint age for the penultimate deglaciation is determined to be 135 ± 2.5 kyr ago. This age is consistent with some coral-based sea-level estimates, but it is difficult to reconcile with June Northern Hemisphere insolation as the trigger for the ice-age cycles.

Zhao, Xia & Collerson (2001)

High-precision 230Th-238U ages for a stalagmite from Newdegate Cave in southern Tasmania, Australia.. The fastest stalagmite growth occurred between 129.2 ± 1.6 and 122.1 ± 2.0 ka (~61.5 mm/ka), coinciding with a time of prolific coral growth from Western Australia (128-122 ka). This is the first high-resolution continental record in the Southern Hemisphere that can be compared and correlated with the marine record. Such correlation shows that in southern Australia the onset of full interglacial sea level and the initiation of highest precipitation on land were synchronous. The stalagmite growth rate between 129.2 and 142.2 ka (~5.9 mm/ka) was lower than that between 142.2 and 154.5 ka (~18.7 mm/ka), implying drier conditions during the Penultimate Deglaciation, despite rising temperature and sea level.

This asymmetrical precipitation pattern is caused by latitudinal movement of subtropical highs and an associated Westerly circulation, in response to a changing Equator-to-Pole temperature gradient.

Both marine and continental records in Australia strongly suggest that the insolation maximum between 126 and 128 ka at 65°N was directly responsible for the maintenance of full Last Interglacial conditions, although the triggers that initiated Penultimate Deglaciation (at ~142 ka) remain unsolved.

From Zhao et al 2001


Figure 4

Gallup, Cheng, Taylor & Edwards (2002)

An outcrop within the last interglacial terrace on Barbados contains corals that grew during the penultimate deglaciation, or Termination II. We used combined 230Th and 231Pa dating to determine that they grew 135.8 ± 0.8 thousand years ago, indicating that sea level was 18 ± 3 meters below present sea level at the time. This suggests that sea level had risen to within 20% of its peak last-interglacial value by 136 thousand years ago, in conflict with Milankovitch theory predictions..

Figure 2B summarizes the sea level record suggested by the new data. Most significantly our record includes corals that document sea level directly during Termination II, suggesting that the majority (~80%) of the Termination II sea level rise occurred before 135 ka. This is broadly consistent with early shifts in δ18O recorded in the Bahamas and Devils Hole and with early dates (134 ka) of last interglacial corals from Hawaii, which call into question the timing of Termination II in the SPECMAP record..

From Gallup et al 2002


Figure 5 – Click to expand

 Of course, all is not lost for the many-headed Hydra (and see note 4):

..The Milankovitch theory in its simplest form cannot explain Termination II, as it does Termination I. However, it is still plausible that insolation forcing played a role in the timing of Termination II. As deglaciations must begin while Earth is in a glacial state, it is useful to look at factors that could trigger deglaciation during a glacial maximum. These include – (i) sea ice cutting off a moisture source for the ice sheets; – (ii) isostatic depression of continental crust; and – (iii) high Southern Hemisphere summer insolation through effects on the atmospheric CO2 concentration.

Yuan et al 2004 provide evidence in opposition:

Thorium-230 ages and oxygen isotope ratios of stalagmites from Dongge Cave, China, characterize the Asian Monsoon and low-latitude precipitation over the past 160,000 years. Numerous abrupt changes in 18O/16O values result from changes in tropical and subtropical precipitation driven by insolation and millennial-scale circulation shifts.

The Last Interglacial Monsoon lasted 9.7 +/- 1.1 thousand years, beginning with an abrupt (less than 200 years) drop in 18O/16O values 129.3 ± 0.9 thousand years ago and ending with an abrupt (less than 300 years) rise in 18O/16O values 119.6 ± 0.6 thousand years ago. The start coincides with insolation rise and measures of full interglacial conditions, indicating that insolation triggered the final rise to full interglacial conditions.

But they also comment:

Although the timing of Monsoon Termination II is consistent with Northern Hemisphere insolation forcing, not all evidence of climate change at about this time is consistent with such a mechanism (Fig. 3).

From Yuan et al 2004


Figure 6 – Click to expand

Sea level apparently rose to levels as high as –21 m as early as 135 ky before the present (27 & Gallup et al 2002), preceding most of the insolation rise. The half-height of marine oxygen isotope Termination II has been dated at 135 ± 2.5 ky (Henderson & Slowey 2000).

Speleothem evidence from the Alps indicates temperatures near present values at 135 ± 1.2 ky (31). The half-height of the δ18O rise at Devils Hole (142 ± 3 ky) also precedes most of the insolation rise (20). Increases in Antarctic temperature and atmospheric CO2 (32) at about the time of Termination II appear to have started at times ranging from a few to several millennia before most of the insolation rise (4, 32, 33).

[Their reference numbers amended to papers where cited in this article]

Drysdale et al 2009

Variations in the intensity of high-latitude Northern Hemisphere summer insolation, driven largely by precession of the equinoxes, are widely thought to control the timing of Late Pleistocene glacial terminations. However, recently it has been suggested that changes in Earth’s obliquity may be a more important mechanism. We present a new speleothem-based North Atlantic marine chronology that shows that the penultimate glacial termination (Termination II) commenced 141,000 ± 2,500 years before the present, too early to be explained by Northern Hemisphere summer insolation but consistent with changes in Earth’s obliquity. Our record reveals that Terminations I and II are separated by three obliquity cycles and that they started at near-identical obliquity phases.

Standard stuff by now, for readers who have made it this far.

But the Drysdale paper is interesting on two fronts – their dating method and their “one result in a row” matching a theory with evidence (I extracted more text from the paper in note 5  for interested readers). Let’s look at the dating method first.

Basically what they did was match up the deep ocean cores that record global ice volume (but have no independent dating) with accurately radiometrically-dated speleothems (cave depositions). How did they do the match up? It’s complicated but relies on the match between the δ18O in both records. The approach of providing absolute dating for existing deep ocean cores will give very interesting results if it proves itself.

From Drysdale et al 2009


Figure 7 – Click to expand

The correspondence between Corchia δ18O and Iberian-margin sea-surface temperatures (SSTs) through T-II (Fig. 2) is remarkable. Although the mechanisms that force speleothem δ18O variations are complex, we believe that Corchia δ18O is driven largely by variations in rainfall amount in response to changes in regional SSTs. Previous studies from Corchia show that speleothem δ18O is sensitive to past changes in North Atlantic circulation at both orbital and millennial time scales, with δ18O increasing during colder (glacial or stadial) phases and the reverse occurring during warmer (interglacial or interstadial) phases.

From Drysdale et al 2009


Figure 8 – Click to expand

Now to the hypothesis:

From Drysdale et al 2009


Figure 9

We find that NHSI [NH summer insolation] intensity is unlikely to be the driving force for T-II: Intensity values are close to minimum at the time of the start of T-II, and a lagged response to the previous insolation peak at ~148 ka is unlikely because of its low amplitude (Fig. 3A). This argues against the SPECMAP curve being a reliable age template through T-II, given the age offset of ~8 ky for the T-II midpoint (8) with respect to our record. A much stronger case can be made for obliquity as a forcing mechanism.

On the basis of our results (Fig. 3B), both T-I and T-II commence at the same phase of obliquity, and the period between them is exactly equivalent to three obliquity cycles (~123 ky).

(More of their explanation in note 5).

EPICA 2006

Here is my plot of the Dronning Maud Land ice core (DML) on EDC3 timescale from EPICA 2006 (data downloaded from the Nature website):

Data from EPICA

Figure 10

The many Antarctic and Greenland ice cores are still undergoing revisions of dating, and so I haven’t attempted to get the latest work. I just thought it would be good to throw in an ice core.

The value of δ18O here is a proxy for local temperature. On this timescale local temperatures began rising about 138 kyrs BP.
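For reference, δ18O is just the per-mil deviation of a sample's 18O/16O ratio from a reference standard; a minimal sketch (the sample ratio below is made up for illustration):

```python
# Delta notation used for isotope proxies: the per-mil (‰) deviation of a
# sample's heavy/light isotope ratio from a reference standard.
# The sample ratio below is invented for illustration.

def delta_18O(r_sample, r_standard):
    """δ18O in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

R_VSMOW = 2005.2e-6            # approximate 18O/16O of the VSMOW standard
sample_ratio = 1996.0e-6       # invented sample, isotopically "lighter"

print(delta_18O(sample_ratio, R_VSMOW))   # negative: depleted in 18O
```

More negative values indicate isotopically lighter water, which for ice cores maps (via fractionation during evaporation and condensation) onto colder local temperatures.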


Conclusion

New radiometric data on Termination II, gathered over the last 20 years from a number of different sites using different approaches, demonstrate that T-II started about 140 kyrs BP.

Here is the solar insolation curve at 65ºN over the last 180 kyrs, with the best dates of the two ice age terminations, separated by about 121 – 125 kyrs:


Figure – Click to expand

It’s proven that ice ages are terminated by low solar insolation in the high latitudes of the northern hemisphere. The basis for this is that the low solar insolation allows a much quicker build up of the northern hemisphere ice sheets, causing dynamic instability, leading to ice sheet calving, which disrupts ocean currents, outgassing CO2 in large concentrations and thereby creating a positive feedback for temperature rise. The local ice sheet collapses also create positive feedback due to higher solar insolation now absorbed. As the ice sheets continue to melt, the (finally) rising solar insolation in high northern latitudes strengthens the pre-existing conditions and helps to establish the termination. Thus the orbital theory is given strong support from the evidence of the timing of the last two terminations.

I just made that up in a few minutes for fun. It’s not true.

We saw one paper with different evidence for the start of T-II – Yuan et al (and see note 6). However, it seems that most lines of evidence, including absolute dating of sea level rise, put T-II starting around 140 kyrs BP.

This also means, for maths wizards, that the time between the last two ice age terminations, taking one dated result for each, is about 122 kyrs.

As an aside, Winograd et al 1992 calculated their age for T-III "at about 253 kyrs":

This puts the time between T-III and T-II at about 113 kyrs, which is exactly the time of five precessional cycles!

I haven’t yet dug into any other dates for TIII so this theory is quite preliminary.
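For the maths wizards, the arithmetic behind the two spacings above is just subtraction of the quoted dates (taking T-I at the conventional ~18 kyrs BP):

```python
# Approximate termination dates quoted in the text, in kyrs before present.
T1 = 18    # T-I midpoint (conventional)
T2 = 140   # T-II, from the radiometric dating discussed above
T3 = 253   # T-III, from Winograd et al 1992

print(T2 - T1)  # 122 kyrs: close to three obliquity cycles (3 x 41)
print(T3 - T2)  # 113 kyrs: close to five precession cycles (5 x 22.6)
```

Of course, as the aside is poking fun at, almost any spacing can be matched to some combination of orbital cycles after the fact.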

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


References

A Pliocene-Pleistocene stack of 57 globally distributed benthic δ18O records, Lorraine E. Lisiecki & Maureen E. Raymo, Paleoceanography (2005) – free paper

Continuous 500,000-Year Climate Record from Vein Calcite in Devils Hole, Nevada, Winograd, Coplen, Landwehr, Riggs, Ludwig, Szabo, Kolesar & Revesz, Science (1992) – paywall, but might be available with a free Science registration

Palaeo-climate reconstruction from stable isotope variations in speleothems: a review, Frank McDermott, Quaternary Science Reviews 23 (2004) – free paper

Direct U-Th dating of marine sediments from the two most recent interglacial periods, NC Slowey, GM Henderson & WB Curry, Nature (1996)

Evidence from U-Th dating against northern hemisphere forcing of the penultimate deglaciation, GM Henderson & NC Slowey, Nature (2000)

Timing and duration of the last interglacial inferred from high resolution U-series chronology of stalagmite growth in Southern Hemisphere, J Zhao, Q Xia & K Collerson, Earth and Planetary Science Letters (2001)

Direct determination of the timing of sea level change during Termination II, CD Gallup, H Cheng, FW Taylor & RL Edwards, Science (2002)

Timing, Duration, and Transitions of the Last Interglacial Asian Monsoon, Yuan, Cheng, Edwards, Dykoski, Kelly, Zhang, Qing, Lin, Wang, Wu, Dorale, An & Cai, Science (2004)

Evidence for Obliquity Forcing of Glacial Termination II, Drysdale, Hellstrom, Zanchetta, Fallick, Sánchez Goñi, Couchoud, McDonald, Maas, Lohmann & Isola, Science (2009)

One-to-one coupling of glacial climate variability in Greenland and Antarctica, EPICA Community Members, Nature (2006)

Millennial- and orbital-scale changes in the East Asian monsoon over the past 224,000 years, Wang, Cheng, Edwards, Kong, Shao, Chen, Wu, Jiang, Wang & An, Nature (2008)


Notes

Note 1 – In common ice age convention, the date of a termination is the midpoint of the sea level rise from the last glacial maximum to the peak interglacial condition. This can be confusing for newcomers.

Note 2 – The alternative method used on some of the ice cores is δD, which works on the same basis – water containing the hydrogen isotope deuterium evaporates and condenses at different rates from "regular" water.

Note 3 – A few interesting highlights from McDermott 2004:

2. Oxygen isotopes in precipitation

As discussed above, δ18O in cave drip-waters reflects

(i) the δ18O of precipitation (δ18Op) and

(ii) in arid/semi-arid regions, evaporative processes that modify δ18Op at the surface prior to infiltration and in the upper part of the vadose zone.

The present-day pattern of spatial and seasonal variations in δ18Op is well documented (Rozanski et al., 1982, 1993; Gat, 1996) and is a consequence of several so-called “effects” (e.g. latitude, altitude, distance from the sea, amount of precipitation, surface air temperature).

On centennial to millennial timescales, factors other than mean annual air temperature may cause temporal variations in δ18Op (e.g. McDermott et al., 1999 for a discussion). These include:

(i) changes in the δ18O of the ocean surface due to changes in continental ice volume that accompany glaciations and deglaciations;

(ii) changes in the temperature difference between the ocean surface temperature in the vapour source area and the air temperature at the site of interest;

(iii) long-term shifts in moisture sources or storm tracks;

(iv) changes in the proportion of precipitation which has been derived from non-oceanic sources, i.e. recycled from continental surface waters (Koster et al., 1993); and

(v) the so-called “amount” effect.

As a result of these ambiguities there has been a shift from the expectation that speleothem δ18Oct might provide quantitative temperature estimates to the more attainable goal of providing precise chronological control on the timing of major first-order shifts in δ18Op, that can be interpreted in terms of changes in atmospheric circulation patterns (e.g. Burns et al., 2001; McDermott et al., 2001; Wang et al., 2001), changes in the δ18O of oceanic vapour sources (e.g. Bar-Matthews et al., 1999) or first-order climate changes such as D/O events during the last glacial (e.g. Spötl and Mangini, 2002; Genty et al., 2003)..

4.1. Isotope stage 6 and the penultimate deglaciation

Speleothem records from Late Pleistocene mid- to high-latitude sites are discussed first, because these are likely to be sensitive to glacial–interglacial transitions, and they illustrate an important feature of speleothems, namely that calcite deposition slows down or ceases during glacials. Fig. 1 is a compilation of approximately 750 TIMS U-series speleothem dates that have been published during the past decade, plotted against the latitude of the relevant cave site.

The absence of speleothem deposition in the mid- to high latitudes of the Northern Hemisphere during isotope stage 2 is striking, consistent with results from previous compilations based on less precise alpha-spectrometric dates (e.g. Gordon et al., 1989; Baker et al., 1993; Hercmann, 2000). By contrast, speleothem deposition appears to have been essentially continuous through the glacial periods at lower latitudes in the Northern Hemisphere (Fig. 1)..

..A comparison of the DH-11 [Devils Hole] record with the Vostok (Antarctica) ice-core deuterium record and the SPECMAP record that largely reflects Northern Hemisphere ice volume (Fig. 2) indicates that both clearly record the first-order glacial–interglacial transitions.

Note 4 – Note the reference to Milankovitch theory "explaining" Termination I. The point appears to be that insolation was at least rising as the termination began, rather than falling. The paper doesn't demonstrate or prove in any way that Termination I was caused by high latitude northern insolation; it is an illustration of the way the "widely-accepted point of view" usually gets a thumbs up. You can see the same point in the quotation from the Zhao paper. It's the case with almost every paper.

If it’s impossible to disprove a theory with any counter evidence then it fails the test of being a theory.

Note 5 – More from Drysdale et al 2009:

During the Late Pleistocene, the period of glacial-to-interglacial transitions (or terminations) has increased relative to the Early Pleistocene [~100 thousand years (ky) versus 40 ky]. A coherent explanation for this shift still eludes paleoclimatologists. Although many different models have been proposed, the most widely accepted one invokes changes in the intensity of high-latitude Northern Hemisphere summer insolation (NHSI). These changes are driven largely by the precession of the equinoxes, which produces relatively large seasonal and hemispheric insolation intensity anomalies as the month of perihelion shifts through its ~23-ky cycle.

Recently, a convincing case has been made for obliquity control of Late Pleistocene terminations, which is a feasible hypothesis because of the relatively large and persistent increases in total summer energy reaching the high latitudes of both hemispheres during times of maximum Earth tilt. Indeed, the obliquity period has been found to be an important spectral component in methane (CH4) and paleotemperature records from Antarctic ice cores.

Testing the obliquity and other orbital-forcing models requires precise chronologies through terminations, which are best recorded by oxygen isotope ratios of benthic foraminifera (d18Ob) in deep-sea sediments (1, 8).

Although affected by deep-water temperature (Tdw) and composition (d18Odw) variations triggered by changes in circulation patterns (9), d18Ob signatures remain the most robust measure of global ice-volume changes through terminations. Unfortunately, dating of marine sediment records beyond the limits of radiocarbon methods has long proved difficult, and only Termination I [T-I, ~18 to 9 thousand years ago (ka)] has a reliable independent chronology.

Most marine chronologies for earlier terminations rely on the SPECMAP orbital template (8) with its a priori assumptions of insolation forcing and built-in phase lags between orbital trigger and ice-sheet response. Although SPECMAP and other orbital-based age models serve many important purposes in paleoceanography, their ability to test climate- forcing hypotheses is limited because they are not independent of the hypotheses being tested. Consequently, the inability to accurately date the benthic record of earlier terminations constitutes perhaps the single greatest obstacle to unraveling the problem of Late Pleistocene glaciations..


Obliquity is clearly very important during the Early Pleistocene, and recently a compelling argument was advanced that Late Pleistocene terminations are also forced by obliquity but that they bridge multiple obliquity cycles. Under this model, predominantly obliquity-driven total summer energy is considered more important in forcing terminations than the classical precession-based peak summer insolation model, primarily because the length of summer decreases as the Earth moves closer to the Sun. Hence, increased insolation intensity resulting from precession is offset by the shorter summer duration, with virtually no net effect on total summer energy in the high latitudes. By contrast, larger angles of Earth tilt lead to more positive degree days in both hemispheres at high latitudes, which can have a more profound effect on the total summer energy received and can act essentially independently from a given precession phase. The effect of obliquity on total summer energy is more persistent at large tilt angles, lasting up to 10 ky, because of the relatively long period of obliquity. Lastly, in a given year the influence of maximum obliquity persists for the whole summer, whereas at maximum precession early summer positive insolation anomalies are cancelled out by late summer negative anomalies, limiting the effect of precession over the whole summer.

Although the precise three-cycle offset between T-I and T-II in our radiometric chronology and the phase relationships shown in Fig. 3 together argue strongly for obliquity forcing, the question remains whether obliquity changes alone are responsible.

Recent work invoked an “insolation-canon,” whereby terminations are Southern Hemisphere–led but only triggered at times when insolation in both hemispheres is increasing simultaneously, with SHSI approaching maximum and NHSI just beyond a minimum. However, it is not clear that relatively low values of NHSI (at times of high SHSI) should play a role in deglaciation. An alternative is an insolation canon involving SHSI and obliquity.
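The "total summer energy" metric referred to in the quotes above (following Huybers) integrates insolation only over days above some melt-relevant threshold, so a high but brief precessional peak adds little, while a broader obliquity-driven summer adds a lot. A minimal sketch with an invented annual insolation curve:

```python
import numpy as np

# Sketch of a "total summer energy" style metric: integrate daily mean
# insolation over only the days it exceeds a threshold, following the
# idea attributed to Huybers in the quote above. The daily insolation
# series here is an invented sinusoid, not real orbital output.

def total_summer_energy(daily_insolation_wm2, threshold_wm2=275.0):
    """Sum of above-threshold daily insolation, in J/m2 (86,400 s/day)."""
    w = np.asarray(daily_insolation_wm2, dtype=float)
    return float(np.sum(w[w > threshold_wm2]) * 86_400)

days = np.arange(365)
toy_year = 250 + 200 * np.sin(2 * np.pi * (days - 80) / 365)   # W/m2, invented

# A precession-like change (sharper, briefer peak) can leave this metric
# almost unchanged, while an obliquity-like change (broader summer) raises it.
print(total_summer_energy(toy_year) / 1e9, "GJ/m2")
```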

Note 6 – There are a number of papers based on Dongge and Hulu caves in China that have similar data and conclusions but I am still trying to understand them. They attempt to tease out the relationship between δ18O and the monsoonal conditions and it’s involved. These papers include: Kelly et al 2006, High resolution characterization of the Asian Monsoon between 146,000 and 99,000 years B.P. from Dongge Cave, China and global correlation of events surrounding Termination II; Wang et al 2008, Millennial- and orbital-scale changes in the East Asian monsoon over the past 224,000 years.


In the recent articles we mostly reviewed climate models’ successes or otherwise with simulating the last glacial inception.

Part Seven looked at some early GCM work – late 1980s to mid-1990s. Part Eight reviewed four different studies from a decade or so ago. Part Nine was on one study which simulated the last 120 kyrs, and Part Ten reviewed one of the most recent studies of glacial inception 115 kyrs ago, with one of the very latest climate models, CCSM4.

We will return to glacial inception, but in this article we will look at the end of the last ice age, partly because on another blog someone highlighted a particular paper which covered it and I spent some time trying to understand the paper.

The last 20 kyrs now have some excellent records from both polar regions. The EPICA project, initiated almost 20 years ago, has produced ice core data for Antarctica going back almost 800 kyrs, to match up with the Greenland NGRIP ice core data. And from other research, more proxy temperature data has become available from around the globe.

Shakun et al 2012

This paper is from Shakun et al 2012 (thanks to commenter BBD for highlighting it). As an aside, Bette Otto-Bliesner is one of the co-authors, also for Jochum et al (2012) that we reviewed in Part Ten. She is one of the lead authors of the IPCC AR5 for the section on Paleoclimate.

The Last Glacial Maximum (LGM) was around 22–18 kyrs ago. Sea level was 120m lower than today as thick ice sheets covered parts of North America and Europe.

Why did it end? How did it end?

The paper really addresses the second question.

The top graph below shows Antarctic temperatures in red, CO2 in yellow dots and global temperatures in blue:

From Shakun et al 2012

Figure 1

The second graph shows us the histogram of leads and lags vs CO2 changes for both Antarctica and global temperature.

We can see clearly that the Antarctic temperatures started a sustained increase about 18 kyrs ago and led the global temperatures. We can see that CO2 slightly lags, or is in sync with, Antarctic temperatures. This indicates that the CO2 increases here are providing a positive feedback on an initial Antarctic temperature rise (knowing from basic physics that more CO2 increases radiative forcing in the troposphere – see note 1).
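The lead/lag histogram in Figure 1 comes from a Monte Carlo treatment of the age uncertainties. A toy version of the idea, with invented series, invented age errors and a simple maximum-correlation lag estimator (not Shakun et al's actual procedure):

```python
import numpy as np

# Toy version of building a lead/lag histogram: perturb one record's
# timescale within an assumed age uncertainty, re-estimate the lag of
# maximum correlation each time, and look at the distribution.
# Series, uncertainties and method are all invented for illustration.

rng = np.random.default_rng(1)
t = np.arange(0.0, 10_000.0, 50.0)        # years
co2 = np.tanh((t - 5_000) / 1_500)        # idealized deglacial ramp
temp = np.tanh((t - 4_600) / 1_500)       # same ramp, leading by 400 years

def lead_of_b(a, b, step=50.0, max_lag=2_000.0):
    """Years by which b leads a (positive = b leads), by maximum correlation."""
    lags = np.arange(-max_lag, max_lag + step, step)
    corrs = [np.corrcoef(a, np.interp(t - lag, t, b))[0, 1] for lag in lags]
    return float(lags[int(np.argmax(corrs))])

estimates = []
for _ in range(200):
    age_error = rng.normal(0, 100)                 # assumed 1-sigma age error, years
    perturbed = np.interp(t + age_error, t, temp)  # shift the record's timescale
    estimates.append(lead_of_b(co2, perturbed))

print(np.median(estimates))   # clusters around the imposed 400-year lead
```

The spread of the resulting histogram then reflects the dating uncertainty, not just noise in the proxy values themselves.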

But what caused this initial rise in Antarctic temperatures? One possibility put forward is an earlier rise in temperature in the higher northern latitudes that can be seen in the second graph below:

From Shakun et al 2012

Figure 2

..An important exception is the onset of deglaciation, which features about 0.3ºC of global warming before the initial increase in CO2 at ~17.5 kyr ago. This finding suggests that CO2 was not the cause of initial warming.

..Substantial temperature change at all latitudes (Fig. 5b), as well as a net global warming of about 0.3ºC (Fig. 2a), precedes the initial increase in CO2 concentration at 17.5 kyr ago, suggesting that CO2 did not initiate deglacial warming. This early global warming occurs in two phases: a gradual increase between 21.5 and 19 kyr ago followed by a somewhat steeper increase between 19 and 17.5 kyr ago (Fig. 2a). The first increase is associated with mean warming of the northern mid to high latitudes, most prominently in Greenland, as there is little change occurring elsewhere at this time (Fig. 5 and Supplementary Fig. 20). The second increase occurs during a pronounced interhemispheric seesaw event (Fig. 5), presumably related to a reduction in AMOC strength, as seen in the Pa/Th record and our modelling (Fig. 4f, g).

..In any event, we suggest that these spatiotemporal patterns of temperature change are consistent with warming at northern mid to high latitudes, leading to a reduction in the AMOC at ~19 kyr ago, being the trigger for the global deglacial warming that followed, although more records will be required to confirm the extent and magnitude of early warming at such latitudes.

The interhemispheric seesaw referred to is important to understand: it refers to the relationship between two large-scale ocean currents – between the tropics and the high northern latitudes, and between the tropics and Antarctica. (A good paper to start with is Asynchrony of Antarctic and Greenland climate change during the last glacial period, Blunier et al 1998). Perhaps a subject for a later article.

Then a “plausible scenario” is presented for the initial NH warming:

A possible forcing model to explain this sequence of events starts with rising boreal summer insolation driving northern warming. This leads to the observed retreat of Northern Hemisphere ice sheets and the increase in sea level commencing ~19 kyr ago (Fig. 3a, b), with the attendant freshwater forcing causing a reduction in the AMOC that warms the Southern Hemisphere through the bipolar seesaw.

This is a poor section of the paper. I find it strange that someone could write this and not at least point out the obvious flaws in it. Before explaining, two points are worth noting:

  1. That this is described as a “possible forcing model” and it’s really not the paper’s subject, nor apparently supported by any evidence in the paper
  2. Their model runs, fig 4c, don’t support this hypothesis – they show NH temperatures trending down over this critical period. Compare 4b and 4c (b is proxy data, c is the model). However, they don’t break out the higher latitudes so perhaps their model did show this result.

From Shakun et al 2012

Figure 3

The obvious criticism of this hypothesis is that insolation (summer, 65ºN) has been a lot higher during earlier periods:


Figure 4

We saw this in Ghosts of Climates Past – Pop Quiz: End of An Ice Age.

Earlier periods of significant temperature rise in the high northern latitudes have also been recorded during the last glacial period. Why were none of these able to initiate the same sequence of events and start an Antarctic temperature rise?

At the time of the LGM, the ice sheets were at their greatest extent, with the consequent positive feedback of higher albedo. If a small increase in summer insolation in high northern latitudes could initiate a deglaciation, surely the much higher summer insolation at 100 kyrs BP or 82 kyrs BP would have initiated a deglaciation, given the additional benefit of the lower albedo at those times.

As I was completing this section of the article I went back to the Nature website to see if there was any supplemental information (Nature papers are short and online material that doesn’t appear in the pdf paper can be very useful).

There was a link to a News & Views article on this paper by Eric Wolff. Eric Wolff is one of the key EPICA contributors, a lead author and co-author of many EPICA papers, so I was quite encouraged to read his perspective on the paper.

Many people seem convinced the Milankovitch theory is without question and not accepting it is absurd, see for example the blog discussion I referred to earlier, so it’s worth quoting extensively from Wolff’s short article:

Between about 19,000 and 10,000 years ago, Earth emerged from the last glacial period. The whole globe warmed, ice sheets retreated from Northern Hemisphere continents and atmospheric composition changed significantly. Many theories try to explain what triggered and sustained this transformation (known as the glacial termination), but crucial evidence to validate them is lacking.

On page 49 of this issue, Shakun et al use a global reconstruction of temperature to show that the transition from the glacial period to the current interglacial consisted of an antiphased temperature response of Earth’s two hemispheres, superimposed on a globally coherent warming. Ocean-circulation changes, controlling the contrasting response in each hemisphere, seem to have been crucial to the glacial termination.

Once again, a key climate scientist notes that we don’t know why the last ice age ended. As we saw in Part Six – “Hypotheses Abound” – the title explains the content..

..Some studies have proposed that changes in ocean heat transport are an essential part of glacial termination. Shakun et al. combine their data with simulations based on an ocean–atmosphere general circulation model to present a plausible sequence of events from about 19,000 years ago onwards. They propose that a reduction in the AMOC (induced in the model by introducing fresh water into the North Atlantic) led to Southern Hemisphere warming, and a net cooling in the Northern Hemisphere. Carbon dioxide concentration began to rise soon afterwards, probably owing to degassing from the deep Southern Ocean; although quite well documented, the exact combination of mechanisms for this rise remains a subject of debate. Both hemispheres then warmed together, largely in response to the rise in carbon dioxide, but with further oscillations in the hemispheric contrast as the strength of the AMOC varied. The model reproduces well both the magnitude and the pattern of global and hemispheric change, with carbon dioxide and changing AMOC as crucial components.

The success of the model used by Shakun and colleagues in reproducing the data is encouraging. But one caveat is that the magnitude of fresh water injected into the Atlantic Ocean in the model was tuned to produce the inferred strength of the AMOC and the magnitude of interhemispheric climate response; the result does not imply that the ocean circulation in the model has the correct sensitivity to the volume of freshwater input.

Shakun and colleagues’ work does provide a firm data-driven basis for a plausible chain of events for most of the last termination. But what drove the reduction in the AMOC 19,000 years ago? The authors point out that there was a significant rise in temperature between 21,500 and 19,000 years ago in the northernmost latitude band (60–90° N). They propose that this may have resulted from a rise in summer insolation (incoming solar energy) at high northern latitudes, driven by well-known cycles in Earth’s orbit around the Sun. They argue that this rise could have caused an initial ice-sheet melt that drove the subsequent reduction in the AMOC.

However, this proposal needs to be treated with caution. First, there are few temperature records in this latitude band: the warming is seen clearly only in Greenland ice cores. Second, there is at least one comparable rise in temperature in the Greenland records, between about 62,000 and 60,000 years ago, which did not result in a termination. Finally, although it is true that northern summer insolation increased from 21,500 to 19,000 years ago, its absolute magnitude remained lower than at any time between 65,000 and 30,000 years ago. It is not clear why an increase in insolation from a low value initiated termination whereas a continuous period of higher insolation did not.

In short, another ingredient is needed to explain the link between insolation and termination, and the triggers for the series of events described so well in Shakun and colleagues’ paper. The see-saw of temperature between north and south throughout the glacial period, most clearly observed in rapid Greenland warmings (Dansgaard–Oeschger events), is often taken as a sign that numerous changes in AMOC strength occurred. However, the AMOC weakening that started 19,000 years ago lasted for much longer than previous ones, allowing a much more substantial rise in southern temperature and in carbon dioxide concentration. Why was it so hard, at that time, to reinvigorate the AMOC and end this weakening?

And what is the missing ingredient that turned the rise in northern insolation around 20,000 years ago into the starting gun for deglaciation, when higher insolation at earlier times failed to do so? It has been proposed that terminations occur only when northern ice-sheet extent is particularly large. If this is indeed the extra ingredient, then the next step in unwinding the causal chain must be to understand what aspect of a large ice sheet controls the onset and persistence of changes in the AMOC that seem to have been key to the last deglaciation.

[Emphasis added].

Thanks, Eric Wolff. My summary of Shakun et al: overall – on its main subject – it's a very good paper with solid new data, good explanations and graphs.

However, this field is still in flux..

Parrenin et al 2013

Understanding the role of atmospheric CO2 during past climate changes requires clear knowledge of how it varies in time relative to temperature. Antarctic ice cores preserve highly resolved records of atmospheric CO2 and Antarctic temperature for the past 800,000 years.

Here we propose a revised relative age scale for the concentration of atmospheric CO2 and Antarctic temperature for the last deglacial warming, using data from five Antarctic ice cores. We infer the phasing between CO2 concentration and Antarctic temperature at four times when their trends change abruptly.

We find no significant asynchrony between them, indicating that Antarctic temperature did not begin to rise hundreds of years before the concentration of atmospheric CO2, as has been suggested by earlier studies.

[Emphasis added].

Ouch. In a later article we will delve into the complex world of dating ice cores and the air trapped in the ice cores.

WAIS Divide Project Members (2013)

The cause of warming in the Southern Hemisphere during the most recent deglaciation remains a matter of debate.

Hypotheses for a Northern Hemisphere trigger, through oceanic redistributions of heat, are based in part on the abrupt onset of warming seen in East Antarctic ice cores and dated to 18,000 years ago, which is several thousand years after high-latitude Northern Hemisphere summer insolation intensity began increasing from its minimum, approximately 24,000 years ago.

An alternative explanation is that local solar insolation changes cause the Southern Hemisphere to warm independently. Here we present results from a new, annually resolved ice-core record from West Antarctica that reconciles these two views. The records show that 18,000 years ago snow accumulation in West Antarctica began increasing, coincident with increasing carbon dioxide concentrations, warming in East Antarctica and cooling in the Northern Hemisphere associated with an abrupt decrease in Atlantic meridional overturning circulation. However, significant warming in West Antarctica began at least 2,000 years earlier.

Circum-Antarctic sea-ice decline, driven by increasing local insolation, is the likely cause of this warming. The marine-influenced West Antarctic records suggest a more active role for the Southern Ocean in the onset of deglaciation than is inferred from ice cores in the East Antarctic interior, which are largely isolated from sea-ice changes.

[Emphasis added].

We see that “rising solar insolation” in any part of the world from any value can be presented as a hypothesis for ice age termination. Here “local solar insolation” means the solar insolation in the Antarctic region, compare with Shakun et al, where rising insolation (from a very low value) in the high northern latitudes was presented as a hypothesis for northern warming which then initiated a southern warming.

That said, this is a very interesting paper with new data from Antarctica, the West Antarctic Ice Sheet (WAIS) Divide ice core (WDC), where drilling was completed in 2011:

Because the climate of West Antarctica is distinct from that of interior East Antarctica, the exclusion of West Antarctic records may result in an incomplete picture of past Antarctic and Southern Ocean climate change. Interior West Antarctica is lower in elevation and more subject to the influence of marine air masses than interior East Antarctica, which is surrounded by a steep topographic slope. Marine-influenced locations are important because they more directly reflect atmospheric conditions resulting from changes in ocean circulation and sea ice. However, ice-core records from coastal sites are often difficult to interpret because of complicated ice-flow and elevation histories.

The West Antarctic Ice Sheet (WAIS) Divide ice core (WDC), in central West Antarctica, is unique in coming from a location that has experienced minimal elevation change, is strongly influenced by marine conditions and has a relatively high snow-accumulation rate, making it possible to obtain an accurately dated record with high temporal resolution.

WDC paints a slightly different picture from other Antarctic ice cores:

..and significant warming at WDC began by 20 kyr ago, at least 2,000 yr before significant warming at EDML and EDC.

..Both the WDC and the lower-resolution Byrd ice-core records show that warming in West Antarctica began before the decrease in AMOC that has been invoked to explain Southern Hemisphere warming [the references include Shakun et al 2012]. The most significant early warming at WDC occurred between 20 and 18.8 kyr ago, although a period of significant warming also occurred between 22 and 21.5 kyr ago. The magnitude of the warming at WDC before 18 kyr ago is much greater than at EDML or EDC..

From WAIS Divide Project (2013)

Figure 5

We will look at this paper in more detail in a later article.


The termination of the last ice age is a fascinating topic that tests our ability to understand climate change.

One criticism made of climate science on many blogs is that climate scientists are obsessed with running GCMs instead of doing “real science”, “running real experiments” and “gathering real data”. I can’t say where the balance really lies, but in my own journey through climate science I find a welcome and healthy obsession with gathering data, finding new sources of data, analyzing data, comparing datasets and running real experiments. The Greenland and Antarctic ice core projects, like NGRIP, EPICA and the WAIS Divide Project, are great examples.

On other climate blogs, writers and commenters seem very happy that climate scientists have written a paper that “supports the orbital hypothesis” without any critical examination of what the paper actually supports with evidence.

Returning to the question at hand, explaining the termination of the last ice age – the problem at the moment is less that there is no theory, and more that the wealth of data has not yet settled onto a clear chain of cause and effect. Establishing that chain is essential for any decent theory.

And any theory that explains the termination of the last ice age will need to explain why it didn’t happen earlier. Invoking “rising insolation” seems like lazy journalism to me. Luckily Eric Wolff, at least, agrees with me.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factors of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models


Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation, Shakun, Clark, He, Marcott, Mix, Liu, Otto-Bliesner, Schmittner & Bard, Nature (2012) – free paper

Climate change: A tale of two hemispheres, Eric W. Wolff, Nature (2012)

Synchronous Change of Atmospheric CO2 and Antarctic Temperature During the Last Deglacial Warming, Parrenin, Masson-Delmotte, Köhler, Raynaud, Paillard, Schwander, Barbante, Landais, Wegner & Jouzel, Science (2013) – free paper, in submission form (rather than published form)

For interest, Valérie Masson-Delmotte is one of the two coordinating lead authors of the AR5 Paleoclimate chapter, and Frédéric Parrenin is a contributing author.

Onset of deglacial warming in West Antarctica driven by local orbital forcing, WAIS Divide Project Members, Nature (2013) – free paper


Note 1 – see for example, CO2 – An Insignificant Trace Gas? – an 8-part series on CO2, Atmospheric Radiation and the “Greenhouse” Effect – a 12-part series on how radiation interacts with the atmosphere, Visualizing Atmospheric Radiation – a 13-part series to help us get a better grasp of how this works including “does water vapor overwhelm CO2”, “is CO2 saturated” and many other questions.


Measurements of outgoing longwave radiation (OLR) are essential for understanding many aspects of climate. Many people are confused about the factors that affect OLR. And its rich variability is often not appreciated.

There have been a number of satellite projects since the late 1970s, with the highlight (prior to 2001) being the five-year period of ERBE.

AIRS & CERES were launched on the NASA AQUA satellite in May 2002. These provide much better quality data, with improved accuracy and resolution.

CERES has three instruments:

  • Solar Reflected Radiation (Shortwave): 0.3 – 5.0 μm
  • Window: 8 – 12 μm
  • Total: 0.3 to > 100 μm

AIRS is an infrared spectrometer/radiometer that covers the 3.7–15.4 μm spectral range with 2378 spectral channels. It runs alongside two microwave instruments (better viewing through clouds): AMSU is a 15-channel microwave radiometer operating between 23 and 89 GHz; HSB is a four-channel microwave radiometer that makes measurements between 150 and 190 GHz.

From Aumann et al (2003):

The simultaneous use of the data from the three instruments provides both new and improved measurements of cloud properties, atmospheric temperature and humidity, and land and ocean skin temperatures, with the accuracy, resolution, and coverage required by numerical weather prediction and climate models.

Among the important datasets that AIRS will contribute to climate studies are the following:

  • atmospheric temperature profiles;
  • sea-surface temperature;
  • land-surface temperature and emissivity;
  • relative humidity profiles and total precipitable water vapor;
  • fractional cloud cover;
  • cloud spectral IR emissivity;
  • cloud-top pressure and temperature;
  • total ozone burden of the atmosphere;
  • column abundances of minor atmospheric gases such as CO, CH4, CO2, and N2O;
  • outgoing longwave radiation and longwave cloud radiative forcing;
  • precipitation rate

More about AIRS = Atmospheric Infrared Sounder, at Wikipedia, plus the AIRS website.

More about CERES = Clouds and the Earth’s Radiant Energy System, at Wikipedia, plus the CERES website – where you can select and view or download your own data.

How do CERES & AIRS compare?

CERES and AIRS have different jobs. CERES directly measures OLR. AIRS measures lots of spectral channels that don’t cover the complete range needed to just “add up” OLR. Instead, OLR can be calculated from AIRS data by deriving surface temperature, water vapour concentration vs height, CO2 concentration, etc and using a radiative transfer algorithm to determine OLR.

Here is a comparison of the two measurement systems from Susskind et al (2012) over almost a decade:


From Susskind et al (2012)

Figure 1

The first thing to observe is how closely the two datasets track each other. The second is that there is a bias between them. But because we have two high accuracy measurement systems on the same satellite we do have a reasonable opportunity to identify the source of the bias (total OLR as shown in the graph is made up of many components). If we only had one satellite, and a new satellite then took over with a small time overlap, any biases would be much more difficult to identify. Of course, that doesn’t stop many people from trying, but success would be much harder to judge.

In this paper, as we might expect, the error sources between the two datasets get considerable discussion. One important point is that version 6 AIRS data (prototyped at the time the paper was written) is much closer to CERES. The second point, probably more interesting, is that once we look at anomaly data the results are very close. We’ll see a number of comparisons as we review what the paper shows.

The authors comment:

Behavior of OLR over this short time period should not be taken in any way as being indicative of what long-term trends might be. The ability to begin to draw potential conclusions as to whether there are long-term drifts with regard to the Earth’s OLR, beyond the effects of normal interannual variability, would require consistent calibrated global observations for a time period of at least 20 years, if not longer. Nevertheless, a very close agreement of the 8-year, 10-month OLR anomaly time series derived using two different instruments in two very different manners is an encouraging result.

It demonstrates that one can have confidence in the 1° x 1° OLR anomaly time series as observed by each instrument over the same time period. The second objective of the paper is to explain why recent values of global mean, and especially tropical mean, OLR have been strongly correlated with El Niño/La Niña variability and why both have decreased over the time period under study.

Why Has OLR Varied?

The authors define the average rate of change (ARC) of an anomaly time series as “the slope of the linear least squares fit of the anomaly time series”.
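The ARC is nothing more than the slope from an ordinary least-squares fit. A minimal sketch of that calculation (my own illustration with synthetic data, not the paper’s code; function name is mine):

```python
import numpy as np

def average_rate_of_change(anomaly, dt_years=1/12):
    """Slope of the linear least-squares fit to an anomaly time series.
    Returns the trend in anomaly-units per year (monthly data by default)."""
    t = np.arange(len(anomaly)) * dt_years
    slope, _intercept = np.polyfit(t, anomaly, 1)
    return slope

# Synthetic 10-year monthly series declining at 0.1 W/m^2 per year, plus noise:
rng = np.random.default_rng(0)
series = -0.1 * np.arange(120) / 12 + rng.normal(0, 0.05, 120)
arc = average_rate_of_change(series)  # close to -0.1 W/m^2 per year
```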


We can see excellent correlation between the two datasets, and that OLR has, on average, decreased over this time period.

Below is a comparison with the El Nino index.

We define the term El Niño Index as the difference of the NOAA monthly mean oceanic Sea Surface Temperature (SST), averaged over the NOAA Niño-4 spatial area 5°N to 5°S latitude and 150°W westward to 160°E longitude, from an 8-year NOAA Niño-4 SST monthly mean climatology which we generated based on use of the same 8 years that we used in the generation of the OLR climatologies.
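The construction in that quote – subtract a per-calendar-month climatology built from the same 8 years – can be sketched as follows (my own illustration; the area-averaging over the Niño-4 box is assumed to have been done already, and the function name is mine):

```python
import numpy as np

def nino_index(sst_monthly, clim_years=8):
    """SST anomaly relative to a monthly climatology built from the first
    `clim_years` of the same series. `sst_monthly` is an area-averaged
    Nino-4 SST, one value per month, starting in January."""
    sst = np.asarray(sst_monthly, dtype=float)
    # One climatological value per calendar month:
    clim = sst[:clim_years * 12].reshape(clim_years, 12).mean(axis=0)
    return sst - clim[np.arange(len(sst)) % 12]

# Sanity check: a pure seasonal cycle has zero anomaly by construction.
months = np.arange(120)
sst = 27.0 + 1.5 * np.sin(2 * np.pi * months / 12)
anom = nino_index(sst)
```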

From Susskind et al (2012)

Figure 2

It gets interesting when we look at the geographical distribution of the OLR changes over this time period:

From Susskind et al (2012)

Figure 3 – Click to Enlarge

We see that the tropics have the larger changes (also seen clearly in figure 2) but that some regions of the tropics have strong positive values and other regions have strong negative values. The grey square centered on 180° longitude is the Nino-4 region. Values as large as +4 W/m²/decade are found in this region. And values as large as -3 W/m²/decade are found over Indonesia (the WPMC region).

Let’s look at the time series to see how these changes in OLR took place:


Figure 4 – Click to Enlarge

The main parameters which affect changes in OLR from month to month and year to year are a) surface temperature, b) humidity, and c) clouds. As temperature increases, OLR increases. As humidity and clouds increase, OLR decreases.

Here are the changes in surface temperature, specific humidity at 500mbar and cloud fraction:

From Susskind et al (2012)

Figure 5 – Click to Enlarge

So, focusing again on the Nino-4 region, we might expect to find that OLR has decreased because of the surface temperature decrease (lower emission of surface radiation) – or we might expect to find that the OLR has increased because the specific humidity and cloud fraction have decreased (thus allowing more surface and lower atmosphere radiation to make it through to TOA). These are mechanisms pulling in opposite directions.

In fact we see that the reduced specific humidity and cloud fraction have outweighed the effect of the surface temperature decrease. So the physics should be clear (still considering the Nino-4 region) – if surface temperature has decreased and OLR has increased then the explanation is the reduction in “greenhouse” gases (in this case water vapor) and clouds, which contain water.


We can see similar relationships through correlations.

The term ENC in the graphs stands for El Nino Correlation. This is essentially the correlation of the time-series data with time-series temperature change in the Nino-4 region (more specifically the Nino-4 temperature less the global temperature).

As the Nino-4 temperature declined over the period in question, a positive correlation means the value declined, while a negative correlation means the value increased.

The first graph below is the geographical distribution of the rate of change of surface temperature. Of course we see that the Nino-4 region has been declining in temperature (as already seen in figure 2). The second graph shows this as well, but also indicates that the regions west and east of the Nino-4 region have a stronger (negative) correlation than other areas of larger temperature change (like the arctic region).

The third graph shows that 500 mb humidity has been decreasing in the Nino-4 region, and increasing to the west and east of this region. Likewise for the cloud fraction. And all of these are strongly correlated to the Nino-4 time-series temperature:

From Susskind et al (2012)

Figure 6 – Click to expand

For OLR correlations with Nino-4 temperature we find a strong negative correlation, meaning the OLR has increased in the Nino-4 region. And the opposite – a strong positive correlation – in the highlighted regions to east and west of Nino-4:

From Susskind et al (2012)

Figure 7 – Click to expand

Note the two highlighted regions

  • to the west: WPMC, Warm Pool Maritime Continent;
  • and to the east: EEPA, Equatorial Eastern Pacific and Atlantic Region

We can see the correlations between the global & tropical OLR and the OLR changes in these regions:


Figure 8 – Click to expand

The WPMC and EEPA regions together explain the reduction in OLR over 10 years. Without these two regions the change is indistinguishable from zero.


This article is interesting for a number of reasons.

It shows the amazing variability of climate – we can see adjacent regions in the tropics with completely opposite changes over 10 years.

It shows that CERES gets almost identical anomaly results (changes in OLR) to AIRS. CERES directly measures OLR, while AIRS retrieves surface temperature, humidity profiles, cloud fractions and “greenhouse” gas concentrations and uses these to calculate OLR.

AIRS results demonstrate how surface temperature, humidity and cloud fraction affect OLR.

OLR has – over the globe – decreased over 10 years. This is a result of the El-Nino phase – at the start of the measurement period we were coming out of a large El-Nino event, and at the end of the measurement period we were in a La Nina event.

The reduction in OLR is explained by the change in the two regions identified, which are themselves strongly correlated to the Nino-4 region.


Interannual variability of outgoing longwave radiation as observed by AIRS and CERES, Susskind et al, Journal of Geophysical Research (2012) – paywall paper

AIRS/AMSU/HSB on the Aqua Mission: Design, Science Objectives, Data Products, and Processing Systems, Aumann et al, IEEE Transactions on Geoscience and Remote Sensing (2003) – free paper


In the last article we had a look at the depth of the ocean “mixed layer” (the mixed layer depth, MLD) and its implications for the successful measurement of climate sensitivity (assuming such a parameter exists as a constant).

In Part One I created a Matlab model which reproduced the same problems as Spencer & Braswell (2008) had found. This model had one layer (an “ocean slab” model) to represent the MLD, with a “noise” flux into the deeper ocean (and a radiative noise flux at the top of the atmosphere). Murphy & Forster claimed that longer time periods require an MLD of increased depth to “model” the extra heat flow into the deeper ocean over time:

Because heat slowly penetrates deeper into the ocean, an appropriate depth for heat capacity depends on the length of the period over which Eq. (1) is being applied (Watterson 2000; Held et al. 2010). For 80-yr global climate model runs, Gregory (2000) derived an optimum mixed layer depth of 150 m. Watterson (2000) found an initial global heat capacity equivalent to a mixed layer of 200 m and larger values for longer simulations.

This seems like it might make sense – if we wanted to keep a “zero dimensional model”. But it’s questionable whether the model retains any value with this “fudge”. Because heat actually moves from the mixed layer into the deeper ocean (rather than the mixed layer increasing in depth), I instead enhanced the model so that heat flows from the MLD through a number of ocean layers, with a parameter called the vertical eddy diffusivity determining this heat flux.

So the model is now a 1D model with a parameterized approach to ocean convection.

Eddy Diffusivity

The concept here is the analogy of conductivity but when convection is instead the primary mover of heat.

Heat flow by conduction is governed by a material property called conductivity and by the temperature difference. Changes in temperature are governed by heat flow and by the heat capacity. The result is this equation for reference and interest – so don’t worry if you don’t understand it:

∂T/∂t = α ∂²T/∂z²  – the 1-d version (see note 1)

where T = temperature, t = time, α = thermal diffusivity and z = depth

What it says in almost plain English is that the change in temperature with respect to time is equal to the thermal diffusivity times the change in the gradient of temperature with depth. Don’t worry if that’s not clear (there is an explanation of the simple steps required to calculate this in note 1).

Now the thermal diffusivity, α:

α = k/(cpρ), where k = conductivity, cp = specific heat capacity and ρ = density

So, an important bit to understand..

  • if the conductivity is high and the heat capacity is low then temperature can change quickly
  • if the conductivity is high and the heat capacity is high then it slows down temperature change, and
  • if the conductivity is low and the heat capacity is high then temperature takes much longer to change
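To get a feel for what the diffusion equation does, here is a minimal numerical sketch – an explicit finite-difference scheme with illustrative values of my own choosing (10 m layers, a diffusivity of 10⁻⁴ m²/s, of the order of the ocean measurements discussed below). This is not the model from these articles:

```python
import numpy as np

def diffuse(T, alpha, dz, dt, steps):
    """Explicit finite-difference integration of dT/dt = alpha * d2T/dz2.
    Top and bottom temperatures are held fixed."""
    # The explicit scheme is only stable if dt <= dz^2 / (2 * alpha)
    assert dt <= dz**2 / (2 * alpha)
    T = T.copy()
    for _ in range(steps):
        T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T

# Illustrative column: 20 C mixed layer over 4 C deep water, 10 m layers.
z = np.arange(0.0, 1000.0, 10.0)
T0 = np.where(z < 50, 20.0, 4.0)
T1 = diffuse(T0, alpha=1e-4, dz=10.0, dt=3600.0, steps=24 * 365)  # ~1 year
# After a year, warmth has penetrated a few tens of metres below the
# mixed layer (diffusion length ~ sqrt(alpha * t) ~ 56 m).
```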

Many researchers have attempted to measure an average value for eddy diffusivity in the ocean (and in lakes). The concept here, as explained in Part Two, is that turbulent motions of the ocean move heat much more effectively than conduction. The value can’t be calculated from first principles because that would mean solving the problem of turbulence, which is one of the toughest problems in physics. Instead it has to be estimated from measurements.

There is an inherent problem with eddy diffusivity for vertical heat transfer that we will come back to shortly.

There is also a minor problem of notation that is “solved” here by changing the notation. Usually conductivity is written as “k”. However, most papers on eddy diffusivity write diffusivity as “k”, sometimes “K”, sometimes “κ” (Greek ‘kappa’) – creating potential confusion, so I revert to “α”. And to make it clear that it is the convective value rather than the conductive value, I use αeddy. And for the equivalent parameter to conductivity, keddy.

keddy = αeddycpρ

because cp ≈ 4200 J/kg.K and ρ ≈ 1000 kg/m³:

keddy = 4.2 × 10⁶ × αeddy – it’s useful to be able to see what the diffusivity means in terms of an equivalent “conductivity” type parameter

Measurements of Eddy Diffusivity

Oeschger et al (1975):

α is an apparent global eddy diffusion coefficient which helps to reproduce an average transport phenomenon consisting of a series of distinct and overlapping mechanisms.

Oeschger and his co-workers studied the problem via the diffusion into the ocean of 14C from nuclear weapons testing.

The range they calculated was αeddy = 1 × 10⁻⁴ to 1.8 × 10⁻⁴ m²/s.

This equates to keddy = 420 – 760 W/m.K, and by comparison, the conductivity of still water, k = 0.6 W/m.K – making convection around 1,000 times more effective at moving heat vertically through the ocean.
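The conversion is easy to check with a couple of lines (values from the text):

```python
# k_eddy = alpha_eddy * cp * rho, using cp ~ 4200 J/kg.K and rho ~ 1000 kg/m^3
cp, rho = 4200.0, 1000.0
k_still_water = 0.6                  # W/m.K, ordinary conduction in water
k_eddy_low = 1.0e-4 * cp * rho       # Oeschger's lower bound -> 420 W/m.K
k_eddy_high = 1.8e-4 * cp * rho      # upper bound -> 756 W/m.K (~760)
ratio = k_eddy_low / k_still_water   # convection ~1,000x more effective
```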

Broecker et al (1980) took a similar approach to estimating this value and commented:

We do not mean to imply that the process of vertical eddy mixing actually occurs within the body of the main oceanic thermocline. Indeed, the values we require are an order of magnitude greater than those permitted by conventional oceanographic wisdom (see Garrett, 1979, for summary).

The vertical eddy coefficients used here should rather be thought of as parameters that take into account all the processes that transfer tracers across density horizons. In addition to vertical mixing by eddies, these include mixing induced by sediment friction at the ocean margins and mixing along the surface in the regions where density horizons outcrop.

Their calculation, like Oeschger’s, used a simple model with the observed values plugged in to estimate the parameter:

Anyone familiar with the water mass structure and ventilation dynamics of the ocean will quickly realize that the box-diffusion model is by no means a realistic representation. No simple modification to the model will substantially improve the situation.

To do so we must take a giant step in complexity to a new generation of models that attempt to account for the actual geometry of ventilation of the sea. We are as yet not in a position to do this in a serious way. At least a decade will pass before a realistic ocean model can be developed.

The values they calculated for eddy diffusivity were broken up into different regions:

  • αeddy(equatorial) = 3.5 × 10⁻⁵ m²/s
  • αeddy(temperate) = 2.0 × 10⁻⁴ m²/s
  • αeddy(polar) = 3.0 × 10⁻⁴ m²/s

We will use these values from Broecker to see what happens to the measurement problems of climate sensitivity when used in my simple model.

These two papers were cited by Hansen et al in their 1985 paper with the values for vertical eddy diffusivity used to develop the value of the “effective mixed depth” of the ocean.

In reviewing these papers and searching for more recent work in the field, I tapped into a rich vein of research that will be the subject of another day.

First, Ledwell et al (1998), who measured eddy diffusivity via SF6 injected into the ocean:

The diapycnal eddy diffusivity K estimated for the first 6 months was 0.12 ± 0.02 × 10⁻⁴ m²/s, while for the subsequent 24 months it was 0.17 ± 0.02 × 10⁻⁴ m²/s.

[Note: units changed from cm²/s into m²/s for consistency]

It is worth reading their comment on this aspect of ocean dynamics. (Note that isopycnal = along surfaces of constant density and diapycnal = across those surfaces):

The circulation of the ocean is severely constrained by density stratification. A water parcel cannot move from one surface of constant potential density to another without changing its salinity or its potential temperature. There are virtually no sources of heat outside the sunlit zone and away from the bottom where heat diffuses from the lithosphere, except for the interesting hydrothermal vents in special regions. The sources of salinity changes are similarly confined to the boundaries of the ocean. If water in the interior is to change potential density at all, it must be by mixing across density surfaces (diapycnal mixing) or by stirring and mixing of water of different potential temperature and salinity along isopycnal surfaces (isopycnal mixing).

Most inferences of dispersion parameters have been made from observations of the large-scale fields or from measurements of dissipation rates at very small scales. Unambiguously direct measurements of the mixing have been rare. Because of the stratification of the ocean, isopycnal mixing involves very different processes than diapycnal mixing, extending to much greater length scales. A direct approach to the study of both isopycnal and diapycnal mixing is to release a tracer and measure its subsequent dispersal. Such an experiment, lasting 30 months and involving more than 10⁵ km² of ocean, is the subject of this paper.

From Jayne (2009):

For example, the Community Climate Simulation Model (CCSM) ocean component model uses a form similar to Eq. (1), but with an upper-ocean value of 0.1 × 10⁻⁴ m²/s and a deep-ocean value of 1.0 × 10⁻⁴ m²/s, with the transition depth at 1000 m.

However, there is no observational evidence to suggest that the mixing in the ocean is horizontally uniform, and indeed there is significant evidence that it is heterogeneous with spatial variations of several orders of magnitude in its intensity (Polzin et al. 1997; Ganachaud 2003).

More on eddy diffusivity measurements in another article – the parameter has a significant impact on modeling of the ocean in GCMs and there is a lot of current research into this subject.

Eddy Diffusivity and Buoyancy Gradient

Sarmiento et al (1976) measured isotopes near the ocean floor:

Two naturally occurring isotopes can be applied to the determination of the rate of vertical turbulent mixing in the deep sea: 222Rn (half-life 3.824 days) and 228Ra (half-life 5.75 years). In this paper we discuss the results from fourteen 222Rn and two 228Ra profiles obtained as part of the GEOSECS program.

From these results we conclude that the most important factor influencing the vertical eddy diffusivity is the buoyancy gradient [(g/ρ)(∂ρpot/∂z)]. The vertical diffusivity shows an inverse proportionality to the buoyancy gradient.

Their paper is very much about the measurements and calculations of the deeper ocean, but it is relevant for anywhere in the ocean, and helps explain why Broecker obtained the different values for the different regions that we saw earlier. (Prof. Wallace S. Broecker was a co-author on this paper as well, and has authored or co-authored hundreds of papers on the ocean.)

What is the buoyancy gradient and why does it matter?

Cold fluids sink and hot fluids rise. This is because cold substances contract and so are more dense. So in general, in the ocean, the colder water is below and the warmer water above. Probably everyone knows this.

The buoyancy gradient is a measure of how strong this effect is. The change in density with depth determines how resistant the ocean is to being overturned. If the ocean was totally stable no heat would ever penetrate below the mixed layer. But it does. And if the ocean was totally stable then the measurements of 14C from nuclear testing would be zero below the mixed layer.

But it is not surprising that the more stable the ocean is due to the buoyancy gradient the less heat diffuses down by turbulent motion.

And this is why the estimates by Broecker shown earlier have a much lower value of diffusivity for the tropics than for the poles. In general the poles are where deep convection takes place – lots of cold water sinks, mixing the ocean – and the tropics are where much weaker upwelling takes place – because the ocean surface is strongly heated. This is part of the large scale motion of the ocean, known as the thermohaline circulation. More on this another day.
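As a rough sketch of this relationship (the density profile and the constant of proportionality are illustrative values of mine, not Sarmiento’s): the buoyancy gradient falls with depth in a typical stratified column, so under the inverse proportionality the weakly stratified deep layers mix more readily than the thermocline.

```python
import numpy as np

g = 9.81
z = np.array([0.0, 100.0, 500.0, 1000.0])         # depth, m (positive down)
rho = np.array([1024.0, 1026.0, 1027.5, 1027.8])  # potential density, kg/m^3

# Buoyancy gradient (g/rho) * d(rho_pot)/dz for each layer, in s^-2:
buoyancy_grad = (g / rho[:-1]) * np.diff(rho) / np.diff(z)

# Sarmiento et al: alpha_eddy is inversely proportional to this.
# The constant of proportionality here is purely illustrative, chosen so
# the upper-ocean value is of order 0.1e-4 m^2/s.
alpha_eddy = 2.0e-9 / buoyancy_grad
```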

Now water is largely incompressible which means that the density gradient is only determined by temperature and salinity. This creates the problem that eddy diffusivity is a value which is not only parameterized, but also dependent on the vertical temperature difference in the ocean.

Heat flow also depends on temperature difference, but with the opposite relationship. This is not something to untangle today. Today we will just see what happens to our simple model when we use the best estimates of vertical eddy diffusivity.

Modeling, Non-Linearity and Climate Sensitivity Measurement Problems

Murphy & Forster agreed in part with Spencer & Braswell about the variation in radiative noise from CERES measurements. I quote at length, because the Murphy & Forster paper is not freely available:

For the parameter N, SB08 use a random daily shortwave flux scaled so that the standard deviation of monthly averages of outgoing radiation (N – λT) is 1.3 W/m².

They base this on the standard deviation of CERES shortwave data between March 2000 and December 2005 for the oceans between 20°N and 20°S.

We have analyzed the same dataset and find that, after the seasonal cycle and slow changes in forcing are removed, the standard deviation of monthly means of the shortwave radiation is 1.24 W/m², close to the 1.3 W/m² specified by SB08. However, longwave (infrared) radiation changes the energy budget just as effectively from the earth as shortwave radiation (reflected sunlight). Cloud systems that might induce random fluctuations in reflected sunlight also change outgoing longwave radiation. In addition, the feedback parameter λ is due to both longwave and shortwave radiation.

Modeled total outgoing radiation should therefore be compared with the observed sum of longwave and shortwave outgoing radiation, not just the shortwave component. The standard deviation of the sum of longwave and shortwave radiation in the same CERES dataset is 0.94 W/m². Even this is an upper limit, since imperfect spatial sampling and instrument noise contribute to the standard deviation.

[Note I change their α (climate feedback) to λ for consistency with previous articles].

And they continue:

We therefore use 0.94 W/m² as an upper limit to the standard deviation of outgoing radiation over the tropical oceans. For comparison, the standard deviation of the global CERES outgoing radiation is about 0.55 W/m².

All of these points seem valid (however, I am still in the process of examining CERES data, so I can’t comment on their actual values of standard deviation. Apart from the minor challenge of extracting the data from the netCDF format, there is a lot of data to examine, and a lot of issues surrounding data quality).

However, it raised an interesting idea about non-linearity. Readers who remember Part One will know that as radiative noise increases and ocean MLD decreases, the measurement problem gets worse. And as the radiative noise decreases and ocean MLD increases, the measurement problem goes away.

If we average global radiative noise and global MLD, plug these values into a zero-dimensional model and get minimal measurement problem what does this mean?

Due to non-linearity, it tells us nothing.

Averaging the inputs, applying them to a global model (i.e., a zero-dimensional model) and calculating λest (from the regression) gives very different results from applying the inputs separately to each region, averaging the results and then calculating λest.

I tested this with a simple model – created two regions, one 10% of the surface area, the other 90%. In the larger region the MLD was 200m and the radiative noise was zero; and in the smaller region the MLD was 20m and the (standard deviation of) radiative noise was varied from 0 – 2. The temperature and radiative flux were converted into an area weighted time series and the regression produced large deviations from the real value of λ.

A similar run on a global model with an MLD of 180m and radiative noise of 0-0.2 shows an accurate assessment of λ.

This is to be expected of course.

So with this in mind I tested the new 1D model with different values of ocean depth, eddy diffusivity and radiative noise, and with an AR(1) model for the radiative noise. I used values for the tropical region as this is clearly the area most likely to upset the measurement – shallow MLD, higher radiative noise and weaker eddy diffusivity.

As best as I could determine from de Boyer Montegut’s paper, the average MLD for the 20°N – 20°S region is approximately 30m.

Here are the results using Oeschger’s value of eddy diffusivity for the tropics and the tropical value of radiative noise from MF2010 – varying ocean depth around 30m and the value of the AR(1) model for radiative noise:

Figure 1

For reference, as it’s hard to read off the graph, the value at 30m and φ=0.5 is λest = 2.3.

Using the current CCSM value of eddy diffusivity for the upper ocean:

Figure 2

For reference, the value at 30m and φ=0.5 is λest = 0.2 (compared with the real value of 3.0).

Note that these values are only for one region, not for the whole globe.

Another important point is that I have used the radiative noise value as the standard deviation of daily radiative noise. I have started to dig into CERES data to see whether such a value can be calculated, and also what typical value of autoregressive parameter should be used (and what kind of ARMA model), but this might take some time.

Yet smaller values of eddy diffusivity are possible for smaller regions, according to Jochum (2009). This would likely make the problems of estimating climate sensitivity worse.

Simple Models

Murphy & Forster comment:

Although highly simplified, a single box model of the earth has some pedagogic value. One must remember that the heat capacity c and feedback parameter λ are not really constants, since heat penetrates more deeply into the ocean on long time scales and there are fast and slow climate feedbacks (Knutti et al. 2008).

It is tempting to add a few more boxes to account for land, ocean, different latitudes, and so forth. Adding more boxes to an energy balance model can be problematic because one must ensure that the boxes are connected in a physically consistent way. A good option is to instead consider a global climate model that has many boxes connected in a physically consistent manner.

The point being that no one believes a slab model of the ocean gives really useful results. Spencer & Braswell likewise don't believe that the slab model is in any way an accurate model of the climate.

They used such a model just to demonstrate a possible problem. Murphy & Forster's criticism doesn't seem to have solved the problem of "can we measure climate sensitivity?"

Or at least, it appears easy to show that slightly different enhancements of the simple model demonstrate continued problems in measuring climate sensitivity – due to the impact of radiative noise in the climate system.


I have produced a simple model and apparently demonstrated continued climate sensitivity measurement problems. This is in contrast to Murphy & Forster who took a different approach and found that the problem went away. However, my model has a more realistic approach to moving heat from the mixed layer into the ocean depths than theirs.

My model does have the drawback that the massive army of Science of Doom model testers and quality control champions are all away on their Xmas break. So the model might be incorrectly coded.

It’s also likely that someone else can come along and take a slightly enhanced version of this model and make the problem vanish.

I have used values for MLD and eddy diffusivity that seem to represent real-world values but I have no idea as to the correct values for standard deviation and auto-correlation of daily radiative noise (or appropriate ARMA model). These values have a big impact on the climate sensitivity measurement problem for reasons explained in Part One.

A useful approach to determining the effect of radiative noise on climate sensitivity measurement might be to use a coupled atmosphere-ocean GCM with a known climate sensitivity and an innovative way of removing radiative noise. These kinds of experiments are done all the time to isolate one effect or one parameter.

Perhaps someone has already done this specific test?

I see other potential problems in measuring climate sensitivity. Here is one obvious problem – as the temperature of the mixed layer increases with continued increases in radiative forcing the buoyancy gradient increases and the eddy diffusivity reduces. We can calculate radiative forcing due to “greenhouse” gases quite accurately and therefore remove it from the regression analysis (see Spencer & Braswell 2008 for more on this). But we can’t calculate the change in eddy diffusivity and heat loss to the deeper ocean. This adds another “correlated” term that seems impossible to disentangle from the climate sensitivity calculation.

An alternative way of looking at this is that climate sensitivity might not be a constant – as already noted in Part One.

Articles in this Series

Measuring Climate Sensitivity – Part One

Measuring Climate Sensitivity – Part Two – Mixed Layer Depths

References
Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration, Spencer & Braswell, Journal of Climate (2008) – FREE

On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Murphy & Forster, Journal of Climate (2010)

A box diffusion model to study the carbon dioxide exchange in nature, Oeschger et al, Tellus (1975)

Modeling the carbon system, Broecker et al, Radiocarbon (1980) – FREE

Climate response times: dependence on climate sensitivity and ocean mixing, Hansen et al, Science (1985)

The study of mixing in the ocean: A brief history, MC Gregg, Oceanography (1991) – FREE

Spatial Variability of Turbulent Mixing in the Abyssal Ocean, Polzin et al, Science (1997) – FREE

The Impact of Abyssal Mixing Parameterizations in an Ocean General Circulation Model, Steven R. Jayne, Journal of Physical Oceanography (2009)

The relationship between vertical eddy diffusion and buoyancy gradient in the deep sea, Sarmiento et al, Earth & Planetary Science Letters (1976)

Mixing of a tracer in the pycnocline, Ledwell et al, JGR (1998)

Impact of latitudinal variations in vertical diffusivity on climate simulations, Jochum, JGR (2009) – FREE

Mixed layer depth over the global ocean: An examination of profile data and a profile-based climatology, de Boyer Montegut et al, JGR (2004)


Note 1: The 1D version is really:

∂T / ∂t = ∂/∂z (α.∂T/∂z)

due to the fact that α can be a function of z (and definitely is in the case of the ocean).

Although this looks tricky – and it is tricky to find analytical solutions – solving the 1D version numerically is very straightforward and anyone can do it.

In plain English it looks something like:

– Heat flow into cell X is proportional to the temperature difference between cell X-1 and cell X

– Heat flow out of cell X is proportional to the temperature difference between cell X and cell X+1

– Change in temperature of cell X = (heat flow in – heat flow out) x time / heat capacity
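The cell-by-cell recipe above can be written out directly. Here is a minimal explicit finite-difference sketch in Python – the grid spacing, time step and diffusivity are illustrative values of my own choosing, not those used in the Matlab runs:

```python
import numpy as np

# A minimal explicit finite-difference version of the plain-English recipe:
# heat flows between neighbouring cells in proportion to their temperature
# difference, scaled by the local diffusivity.
def diffuse(T, alpha, dz, dt, steps):
    """March dT/dt = d/dz (alpha.dT/dz) forward in time.

    T     : temperature anomaly of each cell (K), length n
    alpha : diffusivity at each interior cell interface (m^2/s), length n-1
    Stable provided alpha * dt / dz**2 <= 0.5.
    """
    T = T.copy()
    for _ in range(steps):
        grad = (T[:-1] - T[1:]) / dz      # temperature difference across each interface
        flux = alpha * grad               # downward heat flow (per unit heat capacity)
        T[:-1] -= flux * dt / dz          # heat flow out of the upper cell
        T[1:]  += flux * dt / dz          # heat flow into the lower cell
    return T

# 50 cells of 10 m, one-hour time step, uniform eddy diffusivity
n, dz, dt = 50, 10.0, 3600.0
alpha = np.full(n - 1, 1e-4)              # illustrative diffusivity, m^2/s
T0 = np.zeros(n)
T0[0] = 1.0                               # warm anomaly confined to the top cell
T1 = diffuse(T0, alpha, dz, dt, steps=1000)
```

Total heat (the sum of the cell temperatures times a fixed heat capacity) is conserved exactly by this scheme, and the surface anomaly gradually spreads downward – which is all the 1D model needs to do.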

Note 2: I am in the process of examining CERES data. Apart from the challenge of extracting the data from the netCDF format there is a lot to examine. A lot of data and a lot of issues surrounding data quality.


In Measuring Climate Sensitivity – Part One we saw that there can be potential problems in attempting to measure the parameter called “climate sensitivity”.

Using a simple model Spencer & Braswell (2008) had demonstrated that even when the value of “climate sensitivity” is constant and known, measurement of it can be obscured for a number of reasons.

The simple model was a “slab model” of the ocean with a top of atmosphere imbalance in radiation.

Murphy & Forster (2010) criticized Spencer & Braswell for a few reasons including the value chosen for the depth of this ocean mixed layer. As the mixed layer depth increases the climate sensitivity measurement problems are greatly reduced.

First, we will consider the mixed layer in the context of that simple model. Then we will consider what it means in real life.

The Simple Model of Climate Sensitivity

The simple model used by Spencer & Braswell has a “mixed ocean layer” of depth 50m.

Figure 1

In the model the mixed layer is where all of the imbalance in top of atmosphere radiation gets absorbed.

The idea in the simple model is that the energy absorbed from the top of atmosphere gets mixed into the top layer of the ocean very quickly. In reality, as we will see, there isn't really one discrete layer, but it is a handy approximation.

Murphy & Forster commented:

For the heat capacity parameter c, SB08 use the heat capacity of a 50-m ocean mixed layer. This is too shallow to be realistic.

Because heat slowly penetrates deeper into the ocean, an appropriate depth for heat capacity depends on the length of the period over which Eq. (1) is being applied (Watterson 2000; Held et al. 2010).

For 80-yr global climate model runs, Gregory (2000) derived an optimum mixed layer depth of 150 m. Watterson (2000) found an initial global heat capacity equivalent to a mixed layer of 200 m and larger values for longer simulations.

Held et al. (2010) found an initial time constant τ = c/α of about four yr in the Geophysical Fluid Dynamics Laboratory global climate model. Schwartz (2007) used historical data to estimate a globally averaged mixed layer depth of 150 m, or 106 m if the earth were only ocean.

The idea is an attempt to keep the simplicity of one mixed layer for the model, but increase the depth of this mixed layer for longer time periods.

There is always a point where models – simplified versions of the real world – start to break down. This might be the case here.

The initial model was of a mixed layer of ocean, all at the same temperature because the layer is well-mixed – and with some random movement of heat between this mixed layer and the ocean depths. In a more realistic scenario, more heat flows into the deeper ocean as the length of time increases.

What Murphy & Forster are proposing is to keep the simple model and “account” for the ever increasing heat flow into the deeper ocean by using a depth of the mixed layer that is dependent on the time period.

If we do this perhaps the model will work, perhaps it won’t. By “work” we mean provide results that tell us something useful about the real world.

So I thought I would introduce some more realism (complexity) into the model and see what happened. This involves a bit of a journey.

Real Life Ocean Mixed Layer

Water is a very bad conductor of heat – as are plastic and other insulators. Good conductors of heat include metals.

However, in the ocean and the atmosphere conduction is not the primary heat transfer mechanism. It isn’t even significant. Instead, in the ocean it is convection – the bulk movement of fluids – that moves heat. Think of it like this – if you move a “parcel” of water, the heat in that parcel moves with it.

Let’s take a look at the temperature profile at the top of the ocean. Here the first graph shows temperature:

Soloviev & Lukas (1997)

Soloviev & Lukas (1997)

Figure 2

Note that the successive plots are not at higher and higher temperatures – they are just artificially separated to make the results easier to see. During the afternoon the sun heats the top of the ocean. As a result we get a temperature gradient where the surface is hotter than a few meters down. At night and early morning the temperature gradient disappears. (No temperature gradient means that the water is all at the same temperature)

Why is this?

Once the sun sets the ocean surface cools rapidly via radiation and convection to the atmosphere. The result is colder water, which is heavier. Heavier water sinks, so the ocean gets mixed. This same effect takes place on a larger scale for seasonal changes in temperature.

And the top of the ocean is also well mixed due to being stirred by the wind.

A comment from de Boyer Montegut and his coauthors (2004):

A striking and nearly universal feature of the open ocean is the surface mixed layer within which salinity, temperature, and density are almost vertically uniform. This oceanic mixed layer is the manifestation of the vigorous turbulent mixing processes which are active in the upper ocean.

Here is a summary graphic from the excellent Marshall & Plumb:

From Marshall & Plumb (2008)

Figure 3

There’s more on this subject in Does Back-Radiation “Heat” the Ocean? – Part Three.

How Deep is the Ocean Mixed Layer?

This is not a simple question. Partly it is a measurement problem, and partly there isn’t a sharp demarcation between the ocean mixed layer and the deeper ocean. Various researchers have made an effort to map it out.

Here is a global overview, again from Marshall & Plumb:

Figure 4

You can see that the deeper mixed layers occur in the higher latitudes.

Comment from de Boyer Montegut:

The main temporal variabilities of the MLD [mixed layer depth] are directly linked to the many processes occurring in the mixed layer (surface forcing, lateral advection, internal waves, etc), ranging from diurnal [Brainerd and Gregg, 1995] to interannual variability, including seasonal and intraseasonal variability [e.g., Kara et al., 2003a; McCreary et al., 2001]. The spatial variability of the MLD is also very large.

The MLD can be less than 20 m in the summer hemisphere, while reaching more than 500 m in the winter hemisphere in subpolar latitudes [Monterey and Levitus, 1997].

Here is a more complete map by month. Readers probably have many questions about methodology and I recommend reading the free paper:

From de Boyer Montegut et al (2004)

Figure 5 – Click for a larger image

Seeing this map definitely had me wondering about the challenge of measuring climate sensitivity. Spencer & Braswell had used 50m MLD to identify some climate sensitivity measurement problems. Murphy & Forster had reproduced their results with a much deeper MLD to demonstrate that the problems went away.

But what happens if instead we retest the basic model using the actual MLD which varies significantly by month and by latitude?

So instead of “one slab of ocean” at MLD = choose your value, we break up the globe into regions, have different values in each region each month and see what happens to climate sensitivity problems.

By the way, I also attempted to calculate the global annual (area weighted) average of MLD from the maps above, by eye. I also emailed the author of the paper for some measurement details, but received no response.

My estimate of the data in this paper was a global annual area weighted average of 62 meters.

Trying Simple Models with Varying MLD

I updated the Matlab program from Measuring Climate Sensitivity – Part One. The globe is now broken up into 30º latitude bands, with the potential for a different value of mixed layer depth for each month of the year.

I created a number of different profiles:

Depth Type 0 – constant with month and latitude, as in the original article

Type 1 – using the values from de Boyer’s paper, as best as can be estimated from looking at the monthly maps.

Type 2 – no change each month, with scaling of 60ºN-90ºN = 100x the value for 0ºN – 30ºN, and 30ºN – 60ºN = 10x the value for 0ºN – 30ºN – similarly for the southern hemisphere.

Type 3 – alternating each month between Type 2 and its inverse, i.e., scaling of 0ºN – 30ºN = 100x the value for 60ºN-90ºN and 30ºN – 60ºN = 10x the value for 60ºN-90ºN.

Type 4 – no variation by latitude, but month 1 = 1000x month 4, month 2 = 100x month 4, month 3 = 10x month 4, repeating 3 times per year.

In each case the global annual (area weighted) average = 62m.

Essentially types 2-4 are aimed at creating extreme situations.
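As a side note on the area weighting itself: the fraction of the earth's surface between two latitudes is proportional to the difference of the sines of those latitudes. Here is a sketch of how a Type 2 profile can be scaled so that the global annual (area weighted) average comes out at 62m – the solved-for base depth is my own calculation for illustration, not a value from the article:

```python
import numpy as np

# Fraction of the sphere's surface between latitudes lat1 and lat2 (taking
# both hemispheres together, by symmetry) is sin(lat2) - sin(lat1).
lats = np.radians([0.0, 30.0, 60.0, 90.0])
weights = np.sin(lats[1:]) - np.sin(lats[:-1])   # bands 0-30, 30-60, 60-90

# Type 2 scalings: 30-60 band = 10x, 60-90 band = 100x the tropical depth.
scalings = np.array([1.0, 10.0, 100.0])

# Choose the tropical base depth so the area-weighted global mean is 62 m.
base = 62.0 / (weights * scalings).sum()
depths = base * scalings

print("band weights   :", weights)               # sums to 1
print("band depths (m):", depths)
print("global mean (m):", (weights * depths).sum())
```

Note how heavily the tropics count: the 0°–30° bands alone cover half the globe, which is why extreme polar scalings still have to be compensated by a very shallow tropical depth to hold the global average at 62m.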

Here are some results (review the original article for some of the notation), recalling that the actual climate sensitivity, λ = 3.0:

Figure 6

Figure 7 – as figure 6 without 30-day averaging

Figure 8

Figure 9

Figure 10

Figure 11

Figure 12

What’s the message from these results?

In essence, type 0 (the original) and type 1 (using actual MLDs vs latitude and month from de Boyer’s paper) are quite similar – but not exactly the same.

However, if we start varying the MLD by latitude and month in a more extreme way the results come out very differently – even though the global average MLD is the same in each case.

This demonstrates that the temporal and area variation of MLD can have a significant effect and modeling the ocean as one slab – for the purposes of this enterprise – may be risky.


We haven’t considered the effect of non-linearity in these simple models. That is, what about interactions between different regions and months. If we created a yet more complex model where heat flowed between regions dependent on the relative depths of the mixed layers what would we find?

Losing the Plot?

Now, in case anyone has lost the plot by this stage – and it's possible that I have – don't get confused into thinking that we are evaluating GCMs and gosh aren't they simplistic. No, GCMs have very sophisticated modeling.

What we have been doing is tracing a path that started with a paper by Spencer & Braswell. This paper used a very simple model to show that with some random daily fluctuations in top of atmosphere radiative flux, perhaps due to clouds, the measurement of climate sensitivity doesn’t match the actual climate sensitivity.

We can do this in a model – prescribe a value and then test whether we can measure it. This is where this simple model came in. It isn’t a GCM.

However, Murphy & Forster came along and said if you use a deeper mixed ocean layer (which they claim is justified) then the measurement of climate sensitivity does more or less match the actual climate sensitivity (they also commented on the values chosen for radiative flux anomalies, a subject for another day).

What struck me was that the test model needs some significant improvement to be able to assess whether or not climate sensitivity can be measured. And this is with the caveat – if climate sensitivity is a constant.

The Next Phase – More Realistic Ocean Model

As Murphy & Forster have pointed out, the longer the time period, the more heat is “injected” into the deeper ocean from the mixed layer.

So a better model would capture this process directly, rather than just prescribing a deeper mixed layer for longer time periods. Modeling true global ocean convection is an impossible task.

As a recap, conducted heat flow:

q” = k.ΔT/d

where q” = heat flow per unit area, k = conductivity, ΔT = temperature difference, and d = depth of layer

Take a look at Heat Transfer Basics – Part Zero for more on these basics.

For water, k = 0.6 W/m.K. So, as an example, if we have a 10ºC temperature difference across 1 km depth of water, q" = 0.006 W/m². This is tiny. Heat flow via conduction is insignificant. Convection is what moves heat in the ocean.

Many researchers have measured and estimated vertical heat flow in the ocean to come up with a value for vertical eddy diffusivity. This allows us to make some rough estimates of vertical heat flow via convection.
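To put numbers on the comparison, here is the conduction example above alongside the equivalent eddy-diffusivity calculation. The eddy diffusivity of 10⁻⁴ m²/s is an illustrative order of magnitude only – actual values are the subject of the next article:

```python
# Conducted heat flow: q = k * dT / d  (the example worked in the text)
k = 0.6            # thermal conductivity of water, W/(m.K)
dT = 10.0          # temperature difference, K
d = 1000.0         # depth, m
q_cond = k * dT / d
print(q_cond)      # 0.006 W/m^2 - tiny, as stated above

# Vertical convective transport is usually parameterized the same way, with
# the molecular conductivity replaced by rho * c_p * kappa, where kappa is
# the eddy diffusivity:
#   q = rho * c_p * kappa * dT/dz
rho_cp = 4.2e6     # volumetric heat capacity of seawater, J/(m^3.K)
kappa = 1e-4       # ILLUSTRATIVE eddy diffusivity, m^2/s (order of magnitude only)
q_eddy = rho_cp * kappa * dT / d
print(q_eddy)      # 4.2 W/m^2 - hundreds of times the conducted flux
```

Even a modest eddy diffusivity moves vertical heat fluxes from the negligible into the climatically relevant range, which is why the choice of eddy diffusivity matters so much for the results above.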

In the next version of the Matlab program (“in press”) the ocean is modeled with different eddy diffusivities below the mixed ocean layer to see what happens to the measurement of climate sensitivity. So far, the model comes up with wildly varying results when the eddy diffusivity is low, i.e., heat cannot easily move into the ocean depths. And it comes up with normal results when the eddy diffusivity is high, i.e., heat moves relatively quickly into the ocean depths.

Due to shortness of time, this problem has not yet been resolved. More in due course.

This article is already long enough, so the next part will cover the estimated values for eddy diffusivity, because it's an interesting subject.


Regular readers of this blog understand that navigating to any kind of conclusion takes some time on my part. And that’s when the subject is well understood. I’m finding that the signposts on the journey to measuring climate sensitivity are confusing and hard to read.

That said, this article hasn't shed any more light on the measurement of climate sensitivity. Instead, we have reviewed more ways in which measurements of it might be wrong. But not conclusively.

Next up we will take a detour into eddy diffusivity, hoping in the meantime that the Matlab model problems can be resolved. Finally a more accurate model incorporating eddy diffusivity to model vertical heat flow in the ocean will show us whether or not climate sensitivity can be accurately measured.


Articles in this Series

Measuring Climate Sensitivity – Part One

Measuring Climate Sensitivity – Part Three – Eddy Diffusivity

References
Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration, Spencer & Braswell, Journal of Climate (2008)

On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Murphy & Forster, Journal of Climate (2010)

Observation of large diurnal warming events in the near-surface layer of the western equatorial Pacific warm pool, Soloviev & Lukas, Deep Sea Research Part I: Oceanographic Research Papers (1997)

Atmosphere, Ocean and Climate Dynamics: An Introductory Text, Marshall & Plumb, Elsevier Academic Press (2008)

Mixed layer depth over the global ocean: An examination of profile data and a profile-based climatology, de Boyer Montegut et al, JGR (2004)


I don’t think this is a simple topic.

The essence of the problem is this:

Can we measure the top of atmosphere (TOA) radiative changes and the surface temperature changes and derive the "climate sensitivity" from the relationship between the two parameters?

First, what do we mean by “climate sensitivity”?

In simple terms this parameter should tell us how much more radiation (“flux”) escapes to space for each 1°C increase in surface temperature.

Climate Sensitivity Is All About Feedback

Climate sensitivity is all about trying to discover whether the climate system has positive or negative feedback.

If the average surface temperature of the earth increased by 1°C and the radiation to space consequently increased by 3.3 W/m², this would be approximately “zero feedback”.

Why is this zero feedback?

If somehow the average temperature of the surface of the planet increased by 1°C – say due to increased solar radiation – then as a result we would expect a higher flux into space. A hotter planet should radiate more. If the increase in flux = 3.3 W/m² it would indicate that there was no negative or positive feedback from this solar forcing (note 1).

Suppose the flux increased by 0. That is, the planet heated up but there was no increase in energy radiated to space. That would be positive feedback within the climate system – because there would be nothing to “rein in” the increase in temperature.

Suppose the flux increased by 5 W/m². In this case it would indicate negative feedback within the climate system.

The key value is the “benchmark” no feedback value of 3.3 W/m². If the value is above this, it’s negative feedback. If the value is below this, it’s positive feedback.

Essentially, the higher the radiation to space as a result of a temperature increase the more the planet is able to “damp out” temperature changes that are forced via solar radiation, or due to increases in inappropriately-named “greenhouse” gases.

Consider the extreme case where as the planet warms up it actually radiates less energy to space – clearly this will lead to runaway temperature increases (less energy radiated means more energy absorbed, which increases temperatures, which leads to even less energy radiated…).

As a result we measure sensitivity as W/m².K, which we read as "Watts per square meter per Kelvin" – and a 1K change is the same as a 1°C change.

Theory and Measurement

In many subjects, researchers' notation converges on conventional usage, but in the realm of climate sensitivity everyone has apparently adopted their own. As a note for non-mathematicians, there is nothing inherently wrong with this, but it makes each paper confusing, especially for newcomers and probably for everyone.

I mostly adopt the Spencer & Braswell 2008 terminology in this article (see reference and free link below). I do change their α (climate sensitivity) into λ (which everyone else uses for this value), mainly because I had already produced a number of graphs with λ before starting to write the article.

The model is a very simple 1-dimensional model of temperature deviation into the ocean mixed layer, from the first law of thermodynamics:

C.∂T/∂t = F + S ….[1]

where C = heat capacity of the ocean, T = temperature anomaly, t = time, F = total top of atmosphere (TOA) radiative flux anomaly, S = heat flux anomaly into the deeper ocean

What does this equation say?

Heat capacity times change in temperature equals the net change in energy

– this is a simple statement of energy conservation, the first law of thermodynamics.

The TOA radiative flux anomaly, F, is a value we can measure using satellites. T is average surface temperature, which is measured around the planet on a frequent basis. But S is something we can’t measure.

What is F made up of?

Let’s define:

F = N + f – λT ….[1a]

where N = random fluctuations in radiative flux, f = “forcings”, and λT is the all important climate response or feedback.

The forcing f is, for the purposes of this exercise, defined as something added into the system which we believe we can understand and estimate or measure. This could be solar increases/decreases, it could be the long term increase in the “greenhouse” effect due to CO2, methane and other gases. For the purposes of this exercise it is not feedback. Feedback includes clouds and water vapor and other climate responses like changing lapse rates (atmospheric temperature profiles), all of which combine to produce a change in radiative output at TOA.

And an important point is that for the purposes of this theoretical exercise, we can remove f from the measurements because we believe we know what it is at any given time.

N is an important element. Effectively it describes the variations in TOA radiative flux due to the random climatic variations over many different timescales.

The feedback response is the term λT, where λ is the climate sensitivity – the value we want to find.

Noting the earlier comment about our assumed knowledge of ‘f’ (note 2), we can rewrite eqn 1:

C.∂T/∂t = – λT + N + S ….[2]

remembering that – λT + N = F is the radiative value we measure at TOA


If we plot F (measured TOA flux) vs T we can estimate λ from the slope of the least squares regression.

However, there is a problem with the estimate:

x = Cov[F,T] / Var[T] ….[3]

          = Cov[- λT + N, T] / Var[T]

where x = slope of the regression line (so that λest = -x), Cov[a,b] = covariance of a with b, and Var[a] = variance of a

Forster & Gregory 2006

This oft-cited paper (reference and free link below) estimates the climate sensitivity over 1985-1996 from measured ERBE data as 2.3 ± 1.3 W/m².K.

Their result indicates positive feedback, or at least, a range of values which sit mainly in the positive feedback space.

On the method of calculation they say:

This equation includes a term that allows F to vary independently of surface temperature.. If we regress (- λT+ N) against T, we should be able to obtain a value for λ. The N terms are likely to contaminate the result for short datasets, but provided the N terms are uncorrelated to T, the regression should give the correct value for λ, if the dataset is long enough..

[Terms changed to SB2008 for easier comparison, and emphasis added].


Like Spencer & Braswell, I created a simple model to demonstrate why measured results might deviate from the actual climate sensitivity.

The model is extremely simple:

  • a “slab” model of the ocean of a certain depth
  • daily radiative noise (normally distributed with mean=0, and standard deviation σN)
  • daily ocean flux noise (normally distributed with mean=0, and standard deviation σS)
  • radiative feedback calculated from the temperature and the actual climate sensitivity
  • daily temperature change calculated from the daily energy imbalance
  • regression of the whole time series to calculate the “apparent” climate sensitivity

In this model, the climate sensitivity, λ = 3.0 W/m².K.
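The recipe in the list above can be sketched directly. This is a stripped-down Python version – radiative noise only, no ocean flux noise, and parameter values of my own choosing – which previews the central effect: a regression on the daily values recovers λ, while a regression on monthly averages is biased low:

```python
import numpy as np

# Slab ocean driven by daily radiative noise, with feedback -lambda*T.
# Illustrative parameters only - a sketch of the mechanism, not a
# reproduction of the article's Matlab runs.
rng = np.random.default_rng(0)

LAM = 3.0                   # "true" climate sensitivity, W/(m^2.K)
DEPTH = 20.0                # mixed layer depth, m
C = 4.2e6 * DEPTH           # heat capacity per unit area, J/(m^2.K)
DT = 86400.0                # one day, s
NDAYS = 400_000

N = rng.normal(0.0, 1.0, NDAYS)      # daily radiative noise, W/m^2
T = np.zeros(NDAYS)
for t in range(1, NDAYS):
    # today's temperature responds to yesterday's net TOA flux anomaly
    T[t] = T[t - 1] + (N[t - 1] - LAM * T[t - 1]) * DT / C
F = N - LAM * T                      # the flux series we would "measure"

def lam_est(T, F, avg):
    """Average into blocks of `avg` days, regress F on T; the slope
    magnitude is the apparent climate sensitivity."""
    n = (len(T) // avg) * avg
    Tm = T[:n].reshape(-1, avg).mean(axis=1)
    Fm = F[:n].reshape(-1, avg).mean(axis=1)
    return -np.cov(Fm, Tm)[0, 1] / np.var(Tm, ddof=1)

print("daily regression :", lam_est(T, F, 1))    # close to 3.0
print("monthly averages :", lam_est(T, F, 30))   # biased well below 3.0
```

Because the noise is applied with a one-day lag, today's noise is uncorrelated with today's temperature and the daily regression is unbiased; averaging into months introduces the correlation, and with no ocean flux noise to dilute it the bias here is especially stark.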

In some cases the regression is done with the daily values, and in other cases the regression is done with averaged values of temperature and TOA radiation across time periods of 7, 30 & 90 days. I also put a 30-day low pass filter on the daily radiative noise in one case (before “injecting” into the model).

Some results are based on 10,000 days (about 30 years), with 100,000 days (300 years) as a separate comparison.

In each case the estimated value of λ is calculated from the mean of 100 simulation results. The 2nd graph shows the standard deviation σλ, of these simulation results which is a useful guide to the likely spread of measured results of λ (if the massive oversimplifications within the model were true). The vertical axis (for the estimate of λ) is the same in each graph for easier comparison, while the vertical axis for the standard deviation changes according to the results due to the large changes in this value.

First, the variation as the number of time steps changes and as the averaging period changes from 1 (no averaging) through to 90 days. Remember that the "real" value of λ = 3.0:

Figure 1

Second, the estimate as the standard deviation of the radiative flux is increased, and the ocean depth ranges from 20-200m. The daily temperature and radiative flux is calculated as a monthly average before the regression calculation is carried out:

Figure 2

As figure 2, but for 100,000 time steps (instead of 10,000):

Figure 3

Third, the estimate as the standard deviation of the radiative flux is increased, and the ocean depth ranges from 20-200m. The regression calculation is carried out on the daily values:

Figure 4

As figure 4, but with 100,000 time steps:

Figure 5

Now against averaging period and also against low pass filtering of the “radiative flux noise”:

Figure 6

As figure 6 but with 100,000 time steps:

Figure 7

Now with the radiative “noise” as an AR(1) process (see Statistics and Climate – Part Three – Autocorrelation), vs the autoregressive parameter φ and vs the number of averaging periods: 1 (no averaging), 7, 30, 90 with 10,000 time steps (30 years):

Figure 8

And the same comparison but with 100,000 timesteps:

Figure 9

Discussion of Results

If we consider first the changes in the standard deviation of the estimated value of climate sensitivity we can see that the spread in the results is much higher in each case when we consider 30 years of data vs 300 years of data. This is to be expected. However, given that in the 30-year cases σλ is similar in magnitude to λ we can see that doing one estimate and relying on the result is problematic. This of course is what is actually done with measurements from satellites where we have 30 years of history.

Second, we can see that mostly the estimates of λ tend to be lower than the actual value of 3.0 W/m².K. The reason is quite simple and is explained mathematically in the next section which non-mathematically inclined readers can skip.

In essence, it is related to the idea in the quote from Forster & Gregory. If the radiative flux noise is uncorrelated to temperature then the estimates of λ will be unbiased. By the way, remember that by “noise” we don’t mean instrument noise, although that will certainly be present. We mean the random fluctuations due to the chaotic nature of weather and climate.

If we refer back to Figure 1 we can see that when the averaging period = 1, the estimates of climate sensitivity are equal to 3.0. In this case, the noise is uncorrelated to the temperature because of the model construction. Slightly oversimplifying, today’s temperature is calculated from yesterday’s noise. Today’s noise is a random number unrelated to yesterday’s noise. Therefore, no correlation between today’s temperature and today’s noise.

As soon as we average the daily data into monthly results, and use these to calculate the regression, we have introduced a correlation between monthly temperature and monthly radiative flux noise (note 3).

This is also why Figures 8 & 9 show a low bias for λ even with no averaging of daily results. These figures are calculated with autocorrelation for the radiative flux noise. This means that past values of flux are correlated with current values – and so once again, daily temperature will be correlated with daily flux noise. This is also the case where low pass filtering is used to create the radiative noise data (as in Figures 6 & 7).


x = slope of the line from the linear regression of radiative flux (-λT + N) against temperature, T

x = Cov[-λT + N, T] / Var[T] ….[3]

It’s not easy to read equations with complex terms in the numerator and denominator on the same line, so breaking it up:

Cov[-λT + N, T] = E[ (-λT + N)T ] – E[-λT + N]E[T], where E[a] = expected value of a

= E[-λT²] + E[NT] + λ.E[T].E[T] – E[N].E[T]

= -λ { E[T²] – (E[T])² } + E[NT] – E[N].E[T] …. [4]


Var[T] = E[T²] – (E[T])² …. [5]


Substituting [4] and [5] into [3]:

x = -λ + { E[NT] – E[N].E[T] } / { E[T²] – (E[T])² } …. [6]

And we see that the slope of the regression line is biased whenever N is correlated with T. If the expected value of N = 0 then the E[N].E[T] term in the numerator drops out, but E[NT] ≠ 0 unless N is uncorrelated with T.

Note of course that we will use the negative of the slope of the line to estimate λ, and so estimates of λ will be biased low.
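Equation [6] can be checked numerically: for any sample where the flux is constructed as -λT + N, the fitted slope equals -λ plus the sample version of the correction term. A quick sketch, with the correlation between N and T contrived purely for illustration:

```python
import numpy as np

# Numerical check of the bias formula in [6]: with R = -lam*T + N,
# the regression slope equals -lam plus the sample correction term
# { E[NT] - E[N].E[T] } / { E[T^2] - (E[T])^2 }.
# The correlation between N and T below is contrived for illustration.
rng = np.random.default_rng(1)
lam = 3.0
T = rng.normal(0.0, 1.0, 50_000)
N = 5.0 * rng.normal(size=T.size) + 2.0 * T   # noise correlated with T

R = -lam * T + N
slope = np.polyfit(T, R, 1)[0]
correction = (np.mean(N * T) - N.mean() * T.mean()) / T.var()

print(f"regression slope  = {slope:.3f}")
print(f"-lam + correction = {-lam + correction:.3f}")
print(f"estimated lambda  = {-slope:.3f}  (true value 3.0)")
```

With this contrived positive correlation, E[NT] > 0 and the recovered λ comes out well below the true value of 3.0.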

As a note for the interested student, why is it that some of the results show λ > 3.0?

Murphy & Forster 2010

Murphy & Forster picked up the challenge from Spencer & Braswell 2008 (reference below but no free link unfortunately). The essence of their paper is that, using more realistic values for the radiative noise and the ocean mixed layer depth, the error in the calculation of λ is very small:

From Murphy & Forster (2010)

Figure 10

The value ba on the vertical axis is a normalized error term (rather than the estimate of λ).

Evaluating their arguments requires more work on my part, especially analyzing some CERES data, so I hope to pick that up in a later article. [Update: Spencer has a response to this paper on his blog – thanks to Ken Gregory for highlighting it]

Linear Feedback Relationship?

One of the biggest problems with climate sensitivity, λ, is the assumption that it exists as a constant value.

From Stephens (2005), reference and free link below:

The relationship between global-mean radiative forcing and global-mean climate response (temperature) is of intrinsic interest in its own right. A number of recent studies, for example, discuss some of the broad limitations of (1) and describe procedures for using it to estimate Q from GCM experiments (Hansen et al. 1997; Joshi et al. 2003; Gregory et al. 2004) and even procedures for estimating λ from observations (Gregory et al. 2002).

While we cannot necessarily dismiss the value of (1) and related interpretation out of hand, the global response, as will become apparent in section 9, is the accumulated result of complex regional responses that appear to be controlled by more local-scale processes that vary in space and time.

If we are to assume gross time–space averages to represent the effects of these processes, then the assumptions inherent to (1) certainly require a much more careful level of justification than has been given. At this time it is unclear as to the specific value of a global-mean sensitivity as a measure of feedback other than providing a compact and convenient measure of model-to-model differences to a fixed climate forcing (e.g., Fig. 1).

[Emphasis added; the reference to “(1)” is to the linear relationship between global temperature and global radiation].

If, for example, λ is actually a function of location, season & phase of ENSO, then clearly measuring the overall climate response is a more difficult challenge.


Conclusion

Measuring the relationship between top of atmosphere radiation and temperature is clearly very important if we want to assess the all-important climate sensitivity.

Spencer & Braswell have produced a very useful paper which demonstrates some obvious problems with deriving the value of climate sensitivity from measurements. Although I haven’t attempted to reproduce their actual results, I have done many other model simulations to demonstrate the same problem.

Murphy & Forster have produced a paper which claims that the actual magnitude of the problem demonstrated by Spencer & Braswell is quite small in comparison to the real value being measured (as yet I can’t tell whether their claim is correct).

The value called climate sensitivity might be a variable (i.e., not a constant value) and it might turn out to be much harder to measure than it seems (and already it doesn’t seem easy).

Articles in this Series

Measuring Climate Sensitivity – Part Two – Mixed Layer Depths

Measuring Climate Sensitivity – Part Three – Eddy Diffusivity


References

The Climate Sensitivity and Its Components Diagnosed from Earth Radiation Budget Data, Forster & Gregory, Journal of Climate (2006)

Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration, Spencer & Braswell, Journal of Climate (2008)

On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Murphy & Forster, Journal of Climate (2010)

Cloud Feedbacks in the Climate System: A Critical Review, Stephens, Journal of Climate (2005)


Notes

Note 1 – The reason why the “no feedback climate response” = 3.3 W/m².K is a little involved, but is mostly due to the fact that the overall climate is radiating around 240 W/m² at TOA: differentiating OLR ∝ T⁴ gives dOLR/dT = 4 × OLR/T ≈ 4 × 240/288 ≈ 3.3 W/m².K.

Note 2 – This is effectively the same as saying f=0. If that seems alarming I note in advance that the exercise we are going through is a theoretical exercise to demonstrate that even if f=0, the regression calculation of climate sensitivity includes some error due to random fluctuations.

Note 3 – If the model had one random number for last month’s noise which was used to calculate this month’s temperature then the monthly results would also be free of correlation between the temperature and radiative noise.
