
In Part Seven we looked at a couple of papers from 1989 and 1994 which attempted to use GCMs to “start an ice age”. The evolution of the “climate science in progress” has been:

  1. Finding indications that the timing of ice age inception was linked to redistribution of solar insolation via orbital changes – possibly reduced summer insolation in high latitudes (Hays et al 1976 – discussed in Part Three)
  2. Using simple energy balance models to demonstrate there was some physics behind the plausible ideas (we saw a subset of the plausible ideas in Part Six – Hypotheses Abound)
  3. Using a GCM with the starting conditions of around 115,000 years ago to see if “perennial snow cover” could be achieved at high latitudes that weren’t ice covered in the last inter-glacial – i.e., can we start a new ice age?

Why, if an energy balance model can “work”, i.e., produce perennial snow cover to start a new ice age, do we need to use a more complex model? As Rind and his colleagues said in their 1989 paper:

Various energy balance climate models have been used to assess how much cooling would be associated with changed orbital parameters.. With the proper tuning of parameters, some of which is justified on observational grounds, the models can be made to simulate the gross glacial/interglacial climate changes. However, these models do not calculate from first principles all the various influences on surface air temperature noted above, nor do they contain a hydrologic cycle which would allow snow cover to be generated or increase. The actual processes associated with allowing snow cover to remain through the summer will involve complex hydrologic and thermal influences, for which simple models can only provide gross approximations.

[Emphasis added – in this and all following quotations, bold indicates added emphasis]. So, interestingly, moving to a more complex model with better physics showed that there was a problem with climate models starting an ice age. Still, those were early GCMs with much more limited computing power. In this article we will look at results from a decade or so later.

Reviews

We’ll start with a couple of papers that include excellent reviews of “the problem so far”, one in 2002 by Yoshimori and his colleagues and one in 2004 by Vettoretti & Peltier. Yoshimori et al 2002:

One of the fundamental and challenging issues in paleoclimate modelling is the failure to capture the last glacial inception (Rind et al. 1989)..

..Between 118 and 110 kaBP, the sea level records show a rapid drop of 50 – 80 m from the last interglacial, which itself had a sea level only 3 – 5 m higher than today. This sea level lowering, as a reference, is about half of the last glacial maximum. ..As the last glacial inception offers one of few valuable test fields for the validation of climate models, particularly atmospheric general circulation models (AGCMs), many studies regarding this event have been conducted.

Phillipps & Held (1994) and Gallimore & Kutzbach (1995).. conducted a series of sensitivity experiments with respect to orbital parameters by specifying several extreme orbital configurations. These included a case with less obliquity and perihelion during the NH winter, which produces a cooler summer in the NH. Both studies came to a similar conclusion that although a cool summer orbital configuration brings the most favorable conditions for the development of permanent snow and expansion of glaciers, orbital forcing alone cannot account for the permanent snow cover in North America and Europe.

This conclusion was confirmed by Mitchell (1993), Schlesinger & Verbitsky (1996), and Vavrus (1999).. ..Schlesinger & Verbitsky (1996), integrating an ice sheet-asthenosphere model with AGCM output, found that a combination of orbital forcing and greenhouse forcing by reduced CO2 and CH4 was enough to nucleate ice sheets in Europe and North America. However, the simulated global ice volume was only 31% of the estimate derived from proxy records.

..By using a higher resolution model, Dong & Valdes (1995) simulated the growth of perennial snow under combined orbital and CO2 forcing. As well as the resolution of the model, an important difference between their model and others was the use of “envelope orography” [playing around with the height of land].. found that the changes in sea surface temperature due to orbital perturbations played a very important role in initiating the Laurentide and Fennoscandian ice sheets.

And as a note on the last quote, it's important to understand that these studies used an Atmospheric GCM, not a coupled Atmosphere-Ocean GCM – i.e., a model of the atmosphere with prescribed sea surface temperatures (these might come from a separate run using a simpler model, or from values determined from proxies). The authors then comment on the potential impact of vegetation:

..The role of the biosphere in glacial inception has been studied by Gallimore & Kutzbach (1996), de Noblet et al. (1996), and Pollard and Thompson (1997).

..Gallimore & Kutzbach integrated an AGCM with a mixed layer ocean model under five different forcings:  1) control; 2) orbital; 3) #2 plus CO2; 4) #3 plus 25% expansion of tundra based on the study of Harrison et al. (1995); and (5) #4 plus further 25% expansion of tundra. The effect of the expansion of tundra through a vegetation-snow masking feedback was approximated by increasing the snow cover fraction. In only the last case was perennial snow cover seen..

..Pollard and Thompson (1997) also conducted an interactive vegetation and AGCM experiment under both orbital and CO2 forcing. They further integrated a dynamic ice-sheet model for 10 ka under the surface mass balance calculated from AGCM output using a multi-layer snow/ice-sheet surface column model on the grid of the dynamical ice-sheet model including the effect of refreezing of rain and meltwater. Although their model predicted the growth of an ice sheet over Baffin Island and the Canadian Archipelago, it also predicted a much faster growth rate in north western Canada and southern Alaska, and no nucleation was seen on Keewatin or Labrador [i.e. the wrong places]. Furthermore, the rate of increase of ice volume over North America was an order of magnitude less than that estimated from proxy records.

They conclude:

It is difficult to synthesise the results of these earlier studies since each model used different parameterisations of unresolved physical processes, resolution, and had different control climates as well as experimental design.

They summarize that results to date indicate that neither orbital forcing alone nor CO2 alone can explain glacial inception, and that the combined effects are not consistent across models. The remaining difficulty appears to relate to the resolution of the model or to feedback from the biosphere (vegetation).

A couple of years later, Vettoretti & Peltier (2004) provided a good review at the start of their paper.

Initial attempts to gain deeper understanding of the nature of the glacial–interglacial cycles involved studies based upon the use of simple energy balance models (EBMs), which have been directed towards the simulation of perennial snow cover under the influence of appropriately modified orbital forcing (e.g. Suarez and Held, 1979).

Analyses have since evolved such that the models of the climate system currently employed include explicit coupling of ice sheets to the EBM or to more complete AGCM models of the atmosphere.

The most recently developed models of the complete 100 kyr ice-age cycle have evolved to the point where three model components have been interlinked, respectively, an EBM of the atmosphere that includes the influence of ice-albedo feedback including both land ice and sea ice, a model of global glaciology in which ice sheets are forced to grow and decay in response to meteorologically mediated changes in mass balance, and a model of glacial isostatic adjustment, through which process the surface elevation of the ice sheet may be depressed or elevated depending upon whether accumulation or ablation is dominant..

..Such models have also been employed to investigate the key role that variations in atmospheric carbon dioxide play in the 100 kyr cycle, especially in the transition out of the glacial state (Tarasov and Peltier, 1997; Shackleton, 2000). Since such models are rather efficient in terms of the computer resources required to integrate them, they are able to simulate the large number of glacial–interglacial cycles required to understand model sensitivities.

There has also been a movement within the modelling community towards the use of models that are currently referred to as earth models of intermediate complexity (EMICs) which incorporate sub-components that are of reduced levels of sophistication compared to the same components in modern Global Climate Models (GCMs). These EMICs attempt to include representations of most of the components of the real Earth system including the atmosphere, the oceans, the cryosphere and the biosphere/carbon cycle (e.g. Claussen, 2002). Such models have provided, and will continue to provide, useful insight into long-term climate variability by making it possible to perform a large number of sensitivity studies designed to investigate the role of various feedback mechanisms that result from the interaction between the components that make up the climate system (e.g. Khodri et al., 2003).

Then the authors comment on the same studies and issues covered by Yoshimori et al, and additionally on their own 2003 paper and another study. On their own research:

Vettoretti and Peltier (2003a), more recently, have demonstrated that perennial snow cover is achieved in a recalibrated version of the CCCma AGCM2 solely as a consequence of orbital forcing when the atmospheric CO2 concentration is fixed to the pre-industrial level as constrained by measurements on air bubbles contained in the Vostok ice core (Petit et al., 1999).

This AGCM simulation demonstrated that perennial snow cover develops at high northern latitudes without the necessity of including any feedbacks due to vegetation or other effects. In this work, the process of glacial inception was analysed using three models having three different control climates that were, respectively, the original CCCma cold biased model, a reconfigured model modified so as to be unbiased, and a model that was warm biased with respect to the modern set of observed AMIP2 SSTs.. ..Vettoretti and Peltier (2003b) suggested a number of novel feedback mechanisms to be important for the enhancement of perennial snow cover.

In particular, this work demonstrated that successively colder climates increased moisture transport into glacial inception sensitive regions through increased baroclinic eddy activity at mid- to high latitudes. In order to assess this phenomenon quantitatively, a detailed investigation was conducted of changes in the moisture balance equation under 116 ka BP orbital forcing for the Arctic polar cap. As well as illustrating the action of a "cryospheric moisture pump", the authors also proposed that the zonal asymmetry of the inception process at high latitudes, which has been inferred on the basis of geological observations, is a consequence of zonally heterogeneous increases and decreases of the northwards transport of heat and moisture.

And they go on to discuss other papers with an emphasis on moisture transport poleward. Now we’ll take a look at some work from that period.

Newer GCM work

Yoshimori et al 2002

Their models: an AGCM (atmospheric GCM) with 116 kyr BP orbital conditions and (a) present-day SSTs, (b) 116 kyr BP SSTs. A further run used the above conditions plus vegetation changed according to temperature (if the summer temperature is less than -5ºC the vegetation type is changed to tundra). Because running a "fully coupled" GCM (atmosphere and ocean) over a long time period required too much computing resource, a compromise approach was used.

The SSTs were calculated using an intermediate complexity model, with a simple atmospheric model and a full ocean model (including sea ice), run for 2000 years (oceans have a lot of thermal inertia). The details are described in section 2.1 of their paper. The idea is to obtain SSTs that are consistent between ocean and atmosphere.

The SSTs are then used as boundary conditions for a "proper" atmospheric GCM run over 10 years – this is described in section 2.2 of their paper. Their Figure 1 shows the insolation anomaly at 116 ka BP with respect to the present day.

They use 240 ppm CO2 for the 116 kyr condition, as "the lowest probable equivalent CO2 level" (combining radiative forcing of CO2 and CH4). This equates to a reduction of 2.2 W/m² of radiative forcing. The SSTs calculated from the preliminary model are colder globally by 1.1ºC for the 116 kyr condition compared to the present-day SST run. This is not due to the insolation anomaly, which just "redistributes" solar energy; it is due to the lower atmospheric CO2 concentration. The 116 kyr SST in the northern North Atlantic is about 6ºC colder. This is due to the lower insolation value in summer plus a reduction in the MOC (note 1). The results of their work:

  • with modern SSTs, orbital and CO2 values from 116 kyrs – small extension of perennial snow cover
  • with calculated 116 kyr SST, orbital and CO2 values – a large extension in perennial snow cover into Northern Alaska, eastern Canada and some other areas
  • with vegetation changes (tundra) added – further extension of snow cover north of 60º
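As a rough check on the 2.2 W/m² figure quoted above, the commonly used simplified expression for CO2 radiative forcing, ΔF = 5.35 ln(C/C₀) (Myhre et al. 1998), reproduces it if the 240 ppm equivalent CO2 is compared against a present-day reference of roughly 360 ppm – the reference value here is my assumption for illustration, not a number taken from the paper:

```python
import math

def co2_forcing(c_new, c_ref):
    """Simplified CO2 radiative forcing (Myhre et al. 1998): dF = 5.35 ln(C/C0), in W/m^2."""
    return 5.35 * math.log(c_new / c_ref)

# 240 ppm equivalent CO2 relative to an assumed ~360 ppm present-day reference
dF = co2_forcing(240.0, 360.0)
print(round(dF, 1))  # about -2.2 W/m^2, matching the reduction quoted above
```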

They comment (and provide graphs) that increased snow cover is partly from reduced snow melt but also from additional snowfall. This is the case even though colder temperatures generally favor less precipitation.
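The point that colder temperatures generally favor less precipitation follows from the Clausius-Clapeyron relation – saturation vapor pressure falls by roughly 7% per °C of cooling. A quick sketch using the standard Magnus approximation (illustrative only, not a calculation from the paper):

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation for saturation vapor pressure over water, in Pa."""
    return 611.2 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

e_warm = saturation_vapor_pressure(0.0)
e_cold = saturation_vapor_pressure(-5.0)
drop = 100 * (1 - e_cold / e_warm)
print(f"5 C of cooling reduces moisture-holding capacity by about {drop:.0f}%")
```

So a climate that is both colder and snowier in the inception regions needs enhanced moisture transport into those regions – which is why the changes in snowfall, not just snow melt, matter.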

Contrary to the earlier ice age hypothesis, our results suggest that the capturing of glacial inception at 116kaBP requires the use of “cooler” sea surface conditions than those of the present climate. Also, the large impact of vegetation change on climate suggests that the inclusion of vegetation feedback is important for model validation, at least, in this particular period of Earth history.

What we don't find out is why their model produces perennial snow cover (even without vegetation changes) where earlier attempts failed. What appears unstated is that although the "orbital hypothesis" is "supported" by the paper, the necessary condition is colder sea surface temperatures induced by much lower atmospheric CO2. Without the lower CO2 this model cannot start an ice age. An additional point to note: Vettoretti & Peltier (2004) say this about the above paper:

The meaningfulness of these results, however, remain to be seen as the original CCCma AGCM2 model is cold biased in summer surface temperature at high latitudes and sensitive to the low value of CO2 specified in the simulations.

Vettoretti & Peltier 2003

This is the paper referred to by their 2004 paper.

This simulation demonstrates that entry into glacial conditions at 116 kyr BP requires only the introduction of post-Eemian orbital insolation and standard preindustrial CO2 concentrations

Here are the seasonal and latitudinal variations in TOA solar insolation 116 kyrs ago vs today:

From Vettoretti & Peltier 2003

The essence of their model testing was they took an atmospheric GCM coupled to prescribed SSTs – for three different sets of SSTs – with orbital and GHG conditions from 116 kyrs BP and looked to see if perennial snow cover occurred (and where):

The three 116 kyr BP experiments demonstrated that glacial inception was successfully achieved in two of the three simulations performed with this model.

The warm-biased experiment delivered no perennial snow cover in the Arctic region except over central Greenland.

The cold-biased 116 kyr BP experiment had large portions of the Arctic north of 60°N latitude covered in perennial snowfall. Strong regions of accumulation occurred over the Canadian Arctic archipelago and eastern and central Siberia. The accumulation over eastern Siberia appears to be excessive since there is little evidence that eastern Siberia ever entered into a glacial state. The accumulation pattern in this region is likely a result of the excessive precipitation in the modern simulation.

They also comment:

All three simulations are characterized by excessive summer precipitation over the majority of the polar land areas. Likewise, a plot of the annual mean precipitation in this region of the globe (not shown) indicates that the CCCma model is in general wet biased in the Arctic region. It has previously been demonstrated that the CCCma GCMII model also has a hydrological cycle that is more vigorous than is observed (Vettoretti et al. 2000b).

I’m not clear how much the model bias of excessive precipitation also affects their result of snow accumulation in the “right” areas.

In Part II of their paper they dig into the details of the changes in evaporation, precipitation and transport of moisture into the arctic region.

Crucifix & Loutre 2002

This paper (and the following one) used an EMIC – an intermediate complexity model – a trade-off with coarser resolution and simpler parameterizations, but consequently much faster run times, allowing many simulations over much longer time periods than is possible with a GCM. EMICs are also able to couple biosphere, ocean, ice sheets and atmosphere – whereas the GCM runs we saw above had only an atmospheric GCM with some method of prescribing sea surface temperatures.

This study addresses the mechanisms of climatic change in the northern high latitudes during the last interglacial (126–115 kyr BP) using the earth system model of intermediate complexity ‘‘MoBidiC’’.

Two series of sensitivity experiments have been performed to assess (a) the respective roles played by different feedbacks represented in the model and (b) the respective impacts of obliquity and precession..

..MoBidiC includes representations for atmosphere dynamics, ocean dynamics, sea ice and terrestrial vegetation. A total of ten transient experiments are presented here..

..The model simulates important environmental changes at northern high latitudes prior to the last glacial inception, i.e.: (a) an annual mean cooling of 5 °C, mainly taking place between 122 and 120 kyr BP; (b) a southward shift of the northern treeline by 14° in latitude; (c) accumulation of perennial snow starting at about 122 kyr BP and (d) gradual appearance of perennial sea ice in the Arctic.

..The response of the boreal vegetation is a serious candidate to amplify significantly the orbital forcing and to trigger a glacial inception. The basic concept is that at a large scale, a snow field presents a much higher albedo over grass or tundra (about 0.8) than in forest (about 0.4).

..It must be noted that planetary albedo is also determined by the reflectance of the atmosphere and, in particular, cloud cover. However, clouds being prescribed in MoBidiC, surface albedo is definitely the main driver of planetary albedo changes.

In their summary:

At high latitudes, MoBidiC simulates an annual mean cooling of 5 °C over the continents and a decrease of 0.3 °C in SSTs.

This cooling is mainly related to a decrease in the shortwave balance at the top-of-the-atmosphere by 18 W/m², partly compensated for by an increase by 15 W/m² in the atmospheric meridional heat transport divergence.

These changes are primarily induced by the astronomical forcing but are almost quadrupled by sea ice, snow and vegetation albedo feedbacks. The efficiency of these feedbacks is enhanced by the synergies that take place between them. The most critical synergy involves snow and vegetation and leads to settling of perennial snow north of 60°N starting 122 kyr BP. The temperature-albedo feedback is also responsible for an acceleration of the cooling trend between 122 and 120 kyr BP. This acceleration is only simulated north of 60° and is absent at lower latitudes.

See note 2 for details on the model. This model has a cold bias of up to 5°C in the winter high latitudes.

Calov et al 2005

We study the mechanisms of glacial inception by using the Earth system model of intermediate complexity, CLIMBER-2, which encompasses dynamic modules of the atmosphere, ocean, biosphere and ice sheets. Ice-sheet dynamics are described by the three-dimensional polythermal ice-sheet model SICOPOLIS. We have performed transient experiments starting at the Eemian interglacial, at 126 ky BP (126,000 years before present). The model runs for 26 kyr with time-dependent orbital and CO2 forcings.

The model simulates a rapid expansion of the area covered by inland ice in the Northern Hemisphere, predominantly over Northern America, starting at about 117 kyr BP. During the next 7 kyr, the ice volume grows gradually in the model at a rate which corresponds to a change in sea level of 10 m per millennium.

We have shown that the simulated glacial inception represents a bifurcation transition in the climate system from an interglacial to a glacial state caused by the strong snow-albedo feedback. This transition occurs when summer insolation at high latitudes of the Northern Hemisphere drops below a threshold value, which is only slightly lower than modern summer insolation.

By performing long-term equilibrium runs, we find that for the present-day orbital parameters at least two different equilibrium states of the climate system exist—the glacial and the interglacial; however, for the low summer insolation corresponding to 115 kyr BP we find only one, glacial, equilibrium state, while for the high summer insolation corresponding to 126 kyr BP only an interglacial state exists in the model.
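The bifurcation behaviour described above – two coexisting equilibria for present-day insolation, but only a glacial state below a threshold – can be reproduced qualitatively with a toy zero-dimensional energy balance model with a temperature-dependent snow/ice albedo. All parameter values below are illustrative choices of mine, not CLIMBER-2 values, and the insolation reduction is exaggerated for clarity:

```python
import math

def equilibria(s_scale):
    """Equilibrium temperatures (K) of a toy 0-D energy balance model with a
    temperature-dependent snow/ice albedo. Parameter values are illustrative,
    not taken from Calov et al."""
    S0 = 1365.0  # solar constant, W/m^2

    def albedo(t):
        # smooth step: ~0.6 for a snow-covered (cold) planet, ~0.3 for a warm one
        return 0.45 - 0.15 * math.tanh((t - 268.0) / 5.0)

    def net_flux(t):
        absorbed = s_scale * S0 / 4.0 * (1.0 - albedo(t))
        olr = 210.0 + 1.9 * (t - 273.0)  # linearized outgoing longwave radiation
        return absorbed - olr

    # find equilibria as sign changes of the net flux over a 200..320 K scan
    roots = []
    ts = [200.0 + 0.1 * i for i in range(1201)]
    for t1, t2 in zip(ts, ts[1:]):
        if net_flux(t1) * net_flux(t2) < 0:
            roots.append(round((t1 + t2) / 2, 1))
    return roots

print(equilibria(1.00))  # three equilibria: glacial, unstable intermediate, interglacial
print(equilibria(0.88))  # only the glacial state survives the reduced insolation
```

With full insolation the scan finds a cold stable state, an unstable intermediate equilibrium, and a warm stable state; with sufficiently reduced insolation only the cold state remains – the same qualitative structure Calov et al. find in CLIMBER-2.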

We can get some sense of the simplification of the EMIC from the resolution:

The atmosphere, land-surface and terrestrial vegetation models employ the same grid with latitudinal resolution of 10° and longitudinal resolution of approximately 51°

Their ice sheet model has much more detail, with about 500 “cells” of the ice sheet fitting into 1 cell of the land surface model.

They also comment on the general problems (so far) with climate models trying to produce ice ages:

We speculate that the failure of some climate models to successfully simulate a glacial inception is due to their coarse spatial resolution or climate biases, that could shift their threshold values for the summer insolation, corresponding to the transition from interglacial to glacial climate state, beyond the realistic range of orbital parameters.

Another important factor determining the threshold value of the bifurcation transition is the albedo of snow.

In our model, a reduction of averaged snow albedo by only 10% prevents the rapid onset of glaciation on the Northern Hemisphere under any orbital configuration that occurred during the Quaternary. It is worth noting that the albedo of snow is parameterised in a rather crude way in many climate models, and might be underestimated. Moreover, as the albedo of snow strongly depends on temperature, the under-representation of high elevation areas in a coarse-scale climate model may additionally weaken the snow–albedo feedback.

Conclusion

So in this article we have reviewed a few papers from a decade or so ago that turned the earlier problems (see Part Seven) into apparent (preliminary) successes.

We have seen two papers using models of “intermediate complexity” and coarse spatial resolution that simulated the beginnings of the last ice age. And we have seen two papers which used atmospheric GCMs linked to prescribed ocean conditions that simulated perennial snow cover in critical regions 116 kyrs ago.

Definitely some progress.

But remember that the early energy balance models had concluded that perennial snow cover could occur due to the reduction in high-latitude summer insolation – support for the "Milankovitch" hypothesis. Then the much improved – but still rudimentary – models of Rind et al 1989 and Phillipps & Held 1994 found that, with better physics and better resolution, they were unable to reproduce this result. And many later models likewise.

We’ve yet to review a fully coupled GCM (atmosphere and ocean) attempting to produce the start of an ice age. In the next article we will take a look at a number of very recent papers, including Jochum et al (2012):

So far, however, fully coupled, nonflux-corrected primitive equation general circulation models (GCMs) have failed to reproduce glacial inception, the cooling and increase in snow and ice cover that leads from the warm interglacials to the cold glacial periods..

..The GCMs' failure to recreate glacial inception [see Otieno and Bromwich (2009) for a summary] indicates a failure of either the GCMs or of Milankovitch's hypothesis. Of course, if the hypothesis would be the culprit, one would have to wonder if climate is sufficiently understood to assemble a GCM in the first place.

We will also see that the strength of feedback mechanisms that contribute to perennial snow cover varies significantly for different papers.

And one of the biggest problems still encountered is the computing power necessary. From Jochum et al (2012) again:

This experimental setup is not optimal, of course. Ideally one would like to integrate the model from the last interglacial, approximately 126 kya ago, for 10,000 years into the glacial with slowly changing orbital forcing. However, this is not affordable; a 100-yr integration of CCSM on the NCAR supercomputers takes approximately 1 month and a substantial fraction of the climate group’s computing allocation.
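The arithmetic behind that remark is stark:

```python
# Back-of-envelope for the computing-cost remark quoted above:
# if 100 model years take ~1 month of wall-clock time, a 10,000-year
# inception run is far beyond any practical computing allocation.
model_years_needed = 10_000
model_years_per_month = 100

months = model_years_needed / model_years_per_month
print(f"{months:.0f} months ~= {months / 12:.1f} years of supercomputer time")
```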

More on this fascinating topic very soon.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models

References

On the causes of glacial inception at 116 kaBP, Yoshimori, Reader, Weaver & McFarlane, Climate Dynamics (2002) – paywall paper – free paper

Sensitivity of glacial inception to orbital and greenhouse gas climate forcing, Vettoretti & Peltier, Quaternary Science Reviews (2004) – paywall paper

Post-Eemian glacial inception. Part I: the impact of summer seasonal temperature bias, Vettoretti & Peltier, Journal of Climate (2003) – free paper

Post-Eemian Glacial Inception. Part II: Elements of a Cryospheric Moisture Pump, Vettoretti & Peltier, Journal of Climate (2003)

Transient simulations over the last interglacial period (126–115 kyr BP): feedback and forcing analysis, Crucifix & Loutre 2002, Climate Dynamics (2002) – paywall paper with first 2 pages viewable for free

Transient simulation of the last glacial inception. Part I: glacial inception as a bifurcation in the climate system, Calov, Ganopolski, Claussen, Petoukhov & Greve, Climate Dynamics (2005) – paywall paper with first 2 pages viewable for free

True to Milankovitch: Glacial Inception in the New Community Climate System Model, Jochum et al, Journal of Climate (2012) – free paper

Notes

1. MOC = meridional overturning circulation. The MOC is the "Atlantic heat conveyor belt", where cold salty water in the polar region of the Atlantic sinks rapidly, forming a circulation which pulls (warmer) surface equatorial waters towards the poles.

2. Some specifics on MoBidiC from the paper to give some idea of the compromises:

MoBidiC links a zonally averaged atmosphere to a sectorial representation of the surface, i.e. each zonal band (5° in latitude) is divided into different sectors representing the main continents (Eurasia–Africa and America) and oceans (Atlantic, Pacific and Indian). Each continental sector can be partly covered by snow and similarly, each oceanic sector can be partly covered by sea ice (with possibly a covering snow layer). The atmospheric component has been described by Gallée et al. (1991), with some improvements given in Crucifix et al. (2001). It is based on a zonally averaged quasi-geostrophic formalism with two layers in the vertical and 5° resolution in latitude. The radiative transfer is computed by dividing the atmosphere into up to 15 layers.

The ocean component is based on the sectorially averaged form of the multi-level, primitive equation ocean model of Bryan (1969). This model is extensively described in Hovine and Fichefet (1994) except for some minor modifications detailed in Crucifix et al. (2001). A simple thermodynamic–dynamic sea-ice component is coupled to the ocean model. It is based on the 0-layer thermodynamic model of Semtner (1976), with modifications introduced by Harvey (1988a, 1992). A one-dimensional meridional advection scheme is used with ice velocities prescribed as in Harvey (1988a). Finally, MoBidiC includes the dynamical vegetation model VECODE developed by Brovkin et al. (1997). It is based on a continuous bioclimatic classification which describes vegetation as a composition of simple plant functional types (trees and grass). Equilibrium tree and grass fractions are parameterised as a function of climate expressed as the GDD0 index and annual precipitation. The GDD0 (growing degree days above 0) index is defined as the cumulated sum of the continental temperature for all days during which the mean temperature, expressed in degrees, is positive.

MoBidiC's simulation of the present-day climate has been discussed at length in Crucifix et al. (2002). We recall its main features. The seasonal cycle of sea ice is reasonably reproduced, with an Arctic sea-ice area ranging from 5 × 10⁶ km² (summer) to 15 × 10⁶ km² (winter), which compares favourably with present-day observations (6.2 × 10⁶ to 13.9 × 10⁶ km², respectively, Gloersen et al. 1992). Nevertheless, sea ice tends to persist too long in spring, and most of its melting occurs between June and August, which is faster than in the observations. In the Atlantic Ocean, North Atlantic Deep Water forms mainly between 45 and 60°N and is exported at a rate of 12.4 Sv to the Southern Ocean. This export rate is compatible with most estimates (e.g. Schmitz 1995). Furthermore, the main water masses of the ocean are well reproduced, with recirculation of Antarctic Bottom Water below the North Atlantic Deep Water and formation of Antarctic Intermediate Water. However, no convection occurs in the Atlantic north of 60°N, contrary to the real world. As a consequence, continental high latitudes suffer from a cold bias, up to 5 °C in winter. Finally, the treeline is at around 65°N, which is roughly comparable to zonally averaged observations (e.g. MacDonald et al. 2000), but experiments made with this model to study the Holocene climate revealed its tendency to overestimate the amplitude of the treeline shift in response to the astronomical forcing (Crucifix et al. 2002).


For those interested, I’ve been using a mindmap to try and keep on top of all of the different papers and ideas. It’s a work in progress. The iPad app produces a pdf output but not a scalable graphic (just a blurred one).

Lots of papers and extracts:

[Mindmap image: Ice Ages-3]

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II


In Part Six we looked at some of the different theories that confusingly go by the same name. The “Milankovitch” theories.

The essence of these many theories: even though the changes in “tilt” of the earth’s axis and in the time of closest approach to the sun don’t change the total annual solar energy incident on the climate, the changing distribution of that energy causes massive climate change over thousands of years.

One of the “classic” hypotheses is that increases in July insolation at 65ºN cause the ice sheets to melt – and conversely, that reductions in July insolation at 65ºN cause the ice sheets to grow.

The hypotheses described can sound quite convincing. Well, one at a time can sound quite convincing – but when all of the “Milankovitch theories” are lined up alongside each other they start to sound more like hopeful ideas.

In this article we will start to consider what GCMs can do in falsifying these theories. For some basics on GCMs, take a look at Models On – and Off – the Catwalk.

Many readers of this blog have varying degrees of suspicion about GCMs. But as regular commenter DeWitt Payne often says, “all models are wrong, but some are useful” – that is, none are perfect, but some can shed light on the climate mechanisms we want to understand.

In fact, GCMs are essential to understand many climate mechanisms and essential to understand the interaction between different parts of the climate system.

Digression – Ice Sheets and Positive Feedback

For beginners, a quick digression into ice sheets and positive feedback. Melting and forming of ice & snow is indisputably a positive feedback within the climate system.

Snow reflects around 60–90% of incident solar radiation. Water reflects less than 10% and most ground surfaces reflect less than 25%. If a region heats up sufficiently, ice and snow melt, which means less solar radiation gets reflected, which means more radiation is absorbed, which means the region heats up some more. The effect “feeds itself”. It’s a positive feedback.

In the annual cycle it doesn’t lead to any kind of thermal runaway or a snowball earth because the solar radiation goes through a much bigger cycle.

Over much longer time periods it’s conceivable that (regional) melting of ice sheets leads to more (regional) solar radiation absorbed, causing more melting of ice sheets, which leads to yet more melting. And the converse for growth of ice sheets. It’s conceivable because it’s just that same mechanism operating over a longer timescale.
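The feedback loop just described can be sketched numerically. Every number below (fluxes, albedos, the response coefficient) is invented purely for illustration, not taken from any climate model:

```python
# Toy ice/snow-albedo feedback. All numbers are illustrative only.

SOLAR = 240.0        # incident solar flux on the region, W/m^2
ALBEDO_SNOW = 0.8    # snow reflects ~80% of incident sunlight
ALBEDO_GROUND = 0.2  # bare ground reflects ~20%

def absorbed(snow_fraction):
    """Solar flux absorbed, given the snow-covered fraction."""
    albedo = snow_fraction * ALBEDO_SNOW + (1 - snow_fraction) * ALBEDO_GROUND
    return SOLAR * (1 - albedo)

def run(snow, steps=5):
    """More absorption -> warmer -> less snow, and vice versa.
    120 W/m^2 is the absorption when snow = 0.5, the neutral point here."""
    history = [snow]
    for _ in range(steps):
        snow = min(1.0, max(0.0, snow - 0.002 * (absorbed(snow) - 120.0)))
        history.append(snow)
    return history

print([round(s, 3) for s in run(0.45)])  # start below neutral: melts further
print([round(s, 3) for s in run(0.55)])  # start above neutral: keeps growing
```

Perturb the snow fraction either side of the neutral point and each step amplifies the perturbation – that is the positive feedback in miniature.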

Digression over.

Why GCMs ?

The only alternative is to do the calculation in your head or on paper. Take a piece of paper, plot a graph of the incident radiation at all latitudes vs the time period we are interested in – say 150 kyrs ago through to 100 kyrs – now work out by year, decade or century, how much ice melts. Work out the new albedo for each region. Calculate the change in absorbed radiation. Calculate the regional temperature changes. Calculate the new heat transfer from low to high latitudes (lots of heat is exported from the equator to the poles via the atmosphere and the ocean) due to the latitudinal temperature gradient, the water vapor transported, and the rainfall and snowfall. Don’t forget to track ice melt at high latitudes and its impact on the Meridional Overturning Circulation (MOC) which drives a significant part of the heat transfer from the equator to poles. Step to the next year, decade or century and repeat.

How are those calculations coming along?

A GCM uses some fundamental physics equations like energy balance and mass balance. It uses a lot of parameterized equations to calculate things like heat transfer from the surface to the atmosphere dependent on the wind speed, cloud formation, momentum transfer from wind to ocean, etc. Whatever we have in a GCM is better than trying to do it on a sheet of paper (and in the end you will be using the same equations with much less spatial and time granularity).
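The paper-and-pencil exercise above can at least be caricatured in a few lines – a Budyko/Sellers-style zonal energy balance with a crude transport term and a snow/ice albedo switch. Every coefficient here is invented for illustration; a real GCM or EBM would of course be far more careful:

```python
import math

# Crude zonal energy-balance model in the spirit of Budyko/Sellers.
# All parameters are illustrative, not from any published model.

LATS = [5, 15, 25, 35, 45, 55, 65, 75, 85]  # band centres, deg N
A, B = 210.0, 2.0   # linearised OLR = A + B*T, W/m^2 (Budyko-style)
D = 0.6             # crude transport coefficient, W/m^2/K
S0 = 340.0          # global-mean TOA insolation, W/m^2

def insolation(lat):
    # Annual-mean insolation falls off toward the pole (second Legendre
    # polynomial form, a standard EBM approximation).
    return S0 * (1 - 0.48 * (3 * math.sin(math.radians(lat)) ** 2 - 1) / 2)

def albedo(temp_c):
    return 0.6 if temp_c < -10 else 0.3  # snow/ice vs bare surface

def step(temps, dt=1.0):
    mean_t = sum(temps) / len(temps)
    new = []
    for lat, t in zip(LATS, temps):
        absorbed = insolation(lat) * (1 - albedo(t))
        olr = A + B * t
        transport = D * (mean_t - t)  # relaxation toward the global mean
        new.append(t + dt * (absorbed - olr + transport) / 30.0)
    return new

temps = [25 - 0.4 * lat for lat in LATS]  # rough initial profile
for _ in range(200):
    temps = step(temps)
print([round(t, 1) for t in temps])  # warm equator, ice-covered high latitudes
```

Even this toy shows why simple models are tunable into agreement: nudge the albedo threshold or the transport coefficient and the latitude of perennial ice moves, with no hydrologic cycle or first-principles physics anywhere in sight.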

If we are interested in the “classic” Milankovitch theory mentioned above we need to find out the impact of an increase of 50W/m² (over 10,000 years) in summer at 65ºN – see figure 1 in Ghosts of Climates Past – Part Five – Obliquity & Precession Changes. What effect does the simultaneous spring reduction at 65ºN have? Do these two effects cancel each other out? Is the summer increase more significant than the spring reduction?

How quickly does the circulation lessen the impact? The equator-pole export of heat is driven by the temperature difference – as with all heat transfer. So if the northern polar region is heating up due to ice melting, the ocean and atmospheric circulation will change and less heat will be driven to the poles. What effect does this have?

How quickly does an ice sheet melt and form? Can the increases and reductions in solar radiation absorbed explain the massive ice sheet growth and shrinking?

If the positive feedback is so strong how does an ice age terminate and how does it restart 10,000 years later?

We can only assess all of these with a general circulation model.

There is a problem though. A typical GCM run is a few decades or a century. We need a 10,000 – 50,000 year run with a GCM. So we need 100–500x the computing power – or we have to reduce the complexity of the model.

Alternatively we can run a model to equilibrium at a particular time in history to see what effect the historical parameters had on the changes we are interested in.

Early Work

Many readers of this blog are frequently mystified by my choosing “old work” to illuminate a topic. Why not pick the most up to date research?

Because the older papers usually explain the problem more clearly and give more detail on the approach to the problem.

The latest papers are written for researchers in the field and assume most of the preceding knowledge – that everyone in that field already has. A good example is the Myhre et al (1998) paper on the “logarithmic formula” for radiative forcing with increasing CO2, cited by the IPCC TAR in 2001. This paper has mystified so many bloggers. I have read many blog articles where the blog authors and commenters throw up their metaphorical hands at the lack of justification for the contents of this paper. However, it is not mystifying if you are familiar with the physics of radiative transfer and the papers from the 70’s through the 90’s calculating radiative imbalance as a result of more “greenhouse” gases.

It’s all about the context.

We’ll take a walk through a few decades of GCMs..

We’ll start with Rind, Peteet & Kukla (1989). They review the classic thinking on the problem:

Kukla et al. [1981] described how the orbital configurations seemed to match up with gross climate variations for the last 150 millennia or so. As a result of these and other geological studies, the consensus exists that orbital variations are responsible for initiating glacial and interglacial climatic regimes. The most obvious difference between these two regimes, the existence of subpolar continental ice sheets, appears related to solar insolation at northern hemisphere high latitudes in summer. For example, solar insolation at these latitudes in August and September was reduced, compared with today’s values, around 116,000 years before the present (116 kyr B.P.), during the time when ice growth apparently began, and it was increased around 10 kyr B.P. during a time of rapid ice sheet retreat [e.g., Berger, 1978] (Figure 1).

And the question of whether basic physics can link the supposed cause and effect:

Are the solar radiation variations themselves sufficient to produce or destroy the continental ice sheets?

The July solar radiation incident at 50ºN and 60ºN over the past 170 kyr is shown in Figure 1, along with August and September values at 50ºN (as shown by the example for July, values at the various latitudes of concern for ice age initiation all have similar insolation fluctuations). The peak variations are of the order of 10%, which if translated with an equal percentage into surface air temperature changes would be of the order of 30ºC. This would certainly be sufficient to allow snow to remain throughout the summer in extreme northern portions of North America, where July surface temperatures today are only about 10ºC above freezing.

However, the direct translation ignores all of the other features which influence surface air temperature during summer, such as cloud cover and albedo variations, long wave radiation, surface flux effects, and advection.

[Emphasis added].
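The “direct translation” arithmetic in the quote above is easy to check, and it is also worth seeing – as my own illustration of the caveat Rind et al raise – how much smaller a pure blackbody scaling would be. The 288 K baseline is a generic global-mean surface temperature, my assumption rather than a figure from the paper:

```python
# Rind et al's "direct translation": map a 10% insolation change
# one-for-one onto absolute surface temperature.
T = 288.0  # representative global-mean surface temperature, K (assumed)

direct = 0.10 * T
print(direct)  # 28.8 K -- the "order of 30 C" in the quote

# For contrast, a blackbody in equilibrium has T proportional to Q^(1/4),
# so the same 10% insolation change gives only about a quarter of that:
blackbody = T * (1.10 ** 0.25 - 1)
print(round(blackbody, 1))  # about 6.9 K
```

Neither number is the right answer – which is the paper’s point: clouds, albedo, longwave radiation, surface fluxes and advection all intervene, and that is exactly what a GCM is for.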

Various energy balance climate models have been used to assess how much cooling would be associated with changed orbital parameters. As the initiation of ice growth will alter the surface albedo and provide feedback to the climate change, the models also have to include crude estimates of how ice cover will change with climate. With the proper tuning of parameters, some of which is justified on observational grounds, the models can be made to simulate the gross glacial/interglacial climate changes.

However, these models do not calculate from first principles all the various influences on surface air temperature noted above, nor do they contain a hydrologic cycle which would allow snow cover to be generated or increase. The actual processes associated with allowing snow cover to remain through the summer will involve complex hydrologic and thermal influences, for which simple models can only provide gross approximations.

They comment then on the practical problems of using GCMs for 10 kyr runs that we noted above. The problem is worked around by using prescribed values for certain parameters and by using a coarse grid – 8° x 10° and 9 vertical layers.

The various GCMs runs are typical of the approach to using GCMs to “figure stuff out” – try different runs with different things changed to see what variations have the most impact and what variations, if any, result in the most realistic answers:

[Figure: table of GCM experiments from Rind et al 1989]

We have thus used the Goddard Institute for Space Studies (GISS) GCM for a series of experiments in which orbital parameters, atmospheric composition, and sea surface temperatures are changed. We examine how the various influences affect snow cover and low-elevation ice sheets in regions of the northern hemisphere where ice existed at the Last Glacial Maximum (LGM). As we show, the GCM is generally incapable of simulating the beginnings of ice sheet growth, or of maintaining low-elevation ice sheets, regardless of the orbital parameters or sea surface temperatures used.

[Emphasis added].

And the result:

The experiments indicate there is a wide discrepancy between the model’s response to Milankovitch perturbations and the geophysical evidence of ice sheet initiation. As the model failed to grow or sustain low-altitude ice during the time of high-latitude maximum solar radiation reduction (120-110 kyrB.P.), it is unlikely it could have done so at any other time within the last several hundred thousand years.

If the model results are correct, it indicates that the growth of ice occurred in an extremely ablative environment, and thus demanded some complicated strategy, or else some other climate forcing occurred in addition to the orbital variation influence (and CO2 reduction), which would imply we do not really understand the cause of the ice ages and the Milankovitch connection. If the model is not nearly sensitive enough to climate forcing, it could have implications for projections of future climate change.

[Emphasis added].

The basic model experiment on the ability of Milankovitch variations by themselves to generate ice sheets in a GCM, experiment 2, shows that in the GISS GCM even exaggerated summer radiation deficits are not sufficient. If widespread ice sheets at 10-m elevation are inserted, CO2 reduced by 70ppm, sea ice increases to full ice age conditions, and sea surface temperatures reduced to CLIMAP 18 kyr BP estimates or below, the model is just barely able to keep these ice sheets from melting in restricted regions. How likely are these results to represent the actual state of affairs?

That was the state of GCMs in 1989.

Phillipps & Held (1994) had basically the same problem. This is the famous Isaac Held, who has written extensively on climate dynamics, water vapor feedback and GCMs, and runs an excellent blog that is well worth reading.

While paleoclimatic records provide considerable evidence in support of the astronomical, or Milankovitch, theory of the ice ages (Hays et al. 1976), the mechanisms by which the orbital changes influence the climate are still poorly understood..

..For this study we utilize the atmosphere-mixed layer ocean model.. In examining this model’s sensitivity to different orbital parameter combinations, we have compared three numerical experiments.

They describe the comparison models:

Our starting point was to choose the two experiments that are likely to generate the largest differences in climate, given the range of the parameter variations computed to have occurred over the past few hundred thousand years. The eccentricity is set equal to 0.04 in both cases. This is considerably larger than the present value of 0.016 but comparable to that which existed from ~90 to 150k BP.

In the first experiment, the perihelion is located at NH summer solstice and the obliquity is set at the high value of 24°.

In the second case, perihelion is at NH winter solstice and the obliquity equals 22°.

The perihelion and obliquity are both favorable for warm northern summers in the first case, and for cool northern summers in the second. These experiments are referred to as WS and CS respectively.

We then performed another calculation to determine how much of the difference between these two integrations is due to the perihelion shift and how much to the change in obliquity. This third model has perihelion at summer solstice, but a low value (22°) of the obliquity. The eccentricity is still set at 0.04. This experiment is referred to as WS22.
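The gross insolation difference between the WS and CS configurations can be checked with the standard daily-mean top-of-atmosphere insolation formula. Treating the solstice declination as equal to the obliquity, and the perihelion shift as a pure inverse-square distance factor, are my simplifications for illustration – this is not the model’s actual forcing code:

```python
import math

S0 = 1361.0  # solar constant, W/m^2

def daily_insolation(lat_deg, decl_deg, dist_factor):
    """Daily-mean TOA insolation (W/m^2) from the standard formula.
    dist_factor is (a/r)^2, the inverse-square distance correction."""
    lat, decl = math.radians(lat_deg), math.radians(decl_deg)
    cos_h0 = -math.tan(lat) * math.tan(decl)
    cos_h0 = max(-1.0, min(1.0, cos_h0))  # handle polar day/night
    h0 = math.acos(cos_h0)                # sunset hour angle
    return (S0 / math.pi) * dist_factor * (
        h0 * math.sin(lat) * math.sin(decl)
        + math.cos(lat) * math.cos(decl) * math.sin(h0))

e = 0.04
# WS: perihelion at NH summer solstice, obliquity 24 deg -> Earth is
# nearest the sun in NH summer, so (a/r)^2 = 1/(1-e)^2.
ws = daily_insolation(65, 24, 1 / (1 - e) ** 2)
# CS: perihelion at NH winter solstice, obliquity 22 deg -> aphelion
# in NH summer, so (a/r)^2 = 1/(1+e)^2.
cs = daily_insolation(65, 22, 1 / (1 + e) ** 2)
print(round(ws), round(cs))  # WS exceeds CS by over 100 W/m^2 at 65N
```

So the two experiments really do bracket a very large summer forcing difference at 65°N – which makes the null result below all the more striking.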

Sadly:

We find that the favorable orbital configuration is far from being able to maintain snow cover throughout the summer anywhere in North America..

..Despite the large temperature changes on land the CS experiment does not generate any new regions of permanent snow cover over the NH. All snow cover melts away completely in the summer. Thus, the model as presently constituted is unable to initiate the growth of ice sheets from orbital perturbations alone. This is consistent with the results of Rind with a GCM (Rind et al. 1989)..

In the next article we will look at more favorable results in the 2000’s.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models

References

Can Milankovitch Orbital Variations Initiate the Growth of Ice Sheets in a General Circulation Model?, Rind, Peteet & Kukla, JGR (1989) – behind a paywall, email me if you want to read it, scienceofdoom – you know what goes here – gmail.com

Response to Orbital Perturbations in an Atmospheric Model Coupled to a Slab Ocean, Phillipps & Held, Journal of Climate (1994) – free paper

New estimates of radiative forcing due to well-mixed greenhouse gases, Myhre et al, GRL (1998)


It is common to find blogs and articles from what we might call the “consensus climate science” corner that we know what caused the ice ages.

The cause being changes in solar insolation at higher latitudes via the orbital changes described in Part Four and Five. These go under the banner of the “Milankovitch theory”.

While that same perspective is present in climate science papers, the case is presented more clearly. Or perhaps I could say, it’s made clear that the case is far from clear. It’s very, very muddy.

Here are Smith & Gregory (2012):

It is generally accepted that the timing of glacials is linked to variations in solar insolation that result from the Earth’s orbit around the sun (Hays et al. 1976; Huybers and Wunsch 2005). These solar radiative anomalies must have been amplified by feedback processes within the climate system, including changes in atmospheric greenhouse gas (GHG) concentrations (Archer et al. 2000) and ice-sheet growth (Clark et al. 1999), and whilst hypotheses abound as to the details of these feedbacks, none is without its detractors and we cannot yet claim to know how the Earth system produced the climate we see recorded in numerous proxy records.

[Emphasis added].

Still, there are always outliers in every field and one paper doesn’t demonstrate a consensus on anything. So let’s take a walk through the mud..

Wintertime NH High Latitude Insolation

Kukla (1972):

The link between the Milankovitch mechanism and climate remains unclear. Summer half-year insolation curves for 65°N are usually offered on the assumption that the incoming radiation could directly control the retreat or advance of glaciers, thus controlling the global climate.

The validity of this assumption was questioned long ago by Croll (1875) and Ball (1891). Modern satellite measurements fully justify Croll’s concept of climate formation, with ocean currents playing the basic role in distributing heat and moisture to continents. The simplistic model of Koppen and Wegener must be definitely abandoned..

..The principal cold periods are found, within the accuracy limits of radiometric dating, to be precisely parallelled by intervals of decreasing winter insolation income for Northern Hemisphere (glacial insolation regime) and vice versa. Gross climatic changes originate in winters on the continents of the Northern Hemisphere.

Just for interest for history buffs, he also comments:

Two facts are highly probable: (1) in A. D. 2100 the globe will be cooler than today (Bray 1970), and (2) Man-made warming will hardly be noticeable on global scale at that time.

Self-Oscillations of the Climate System

Broecker & Denton (1990):

Although we are convinced that the Earth’s climate responds to orbital cycles in some fashion, we reject the view of a direct linkage between seasonality and ice-sheet size with consequent changes to climate of distant regions. Such a linkage cannot explain synchronous climate changes of similar severity in both polar hemispheres. Also, it cannot account for the rapidity of the transition from full glacial toward full interglacial conditions. If global climates are driven by changes in seasonality, then another linkage must exist.

We propose that Quaternary glacial cycles were dominated by abrupt reorganizations of the ocean-atmosphere system driven by orbitally induced changes in fresh water transports which impact salt structure in the sea. These reorganizations mark switches between stable modes of operation of the ocean-atmosphere system. Although we think that glacial cycles were driven by orbital change, we see no basis for rejecting the possibility that the mode changes are part of a self-sustained internal oscillation that would operate even in the absence of changes in the Earth’s orbital parameters. If so, as pointed out by Saltzman et al. (1984), orbital cycles can merely modulate and pace a self-oscillating climate system..

..Existing data from the Earth’s glacier system thus imply that the last termination began simultaneously and abruptly in both polar hemispheres, despite the fact that summer insolation signals were out of phase at the latitude of the key glacial records..

..Although variations in the Earth’s orbital geometry are very likely the cause of glacial cycles (Hays et al., 1976; Imbrie et al., 1984), the nature of the link between seasonal insolation and global climate remains a major unanswered question..

[Emphasis added].

Strictly speaking this is a “not quite Milankovitch” theory (and there are other flavors of this theory not covered in this article). I put forward this paper because Wallace S. Broecker is a very influential climate scientist on this topic and the subject of the thermohaline circulation (THC) in past climate, has written many papers, and generally appears to stick with a “Milankovitch” flavor to his theories.

Temperature Gradient between Low & High Latitude

George Kukla, Clement, Cane, Gavin & Zebiak  (2002):

Although the link between insolation and climate is commonly thought to be in the high northern latitudes in summer, our results show that the start of the last glaciation in marine isotope stage (MIS) 5d was associated with a change of insolation during the transitional seasons in the low latitudes.

A simplified coupled ocean-atmosphere model shows that changes in the seasonal cycle of insolation could have altered El Nino Southern Oscillation (ENSO) variability so that there were almost twice as many warm ENSO events in the early glacial than in the last interglacial. This indicates that ice buildup in the cooled high latitudes could have been accelerated by a warmed tropical Pacific..

..Since the early 1900s, the link between insolation and climate has been seen in the high latitudes of the Northern Hemisphere where summer insolation varies significantly.

Insolation at the top of the atmosphere (TOA) during the summer solstice at 65°N is commonly taken to represent the solar forcing of changing global climate. This is at odds with the results of Berger et al. (1981), who correlated the varying monthly TOA insolation at different latitudes of both hemispheres with the marine oxygen isotope record of Hays et al. (1976). The highest positive correlation (p ≤ 0.01) was found not for June but for September, and not in the high latitudes but in the three latitudinal bands representing the tropics (25°N, 5°N, and 15°S)..

..At first glance the implications of our results appear to be counterintuitive, indicating that the early buildup of glacier ice was associated not with the cooling, but with a relative warming of tropical oceans. Recent analogs suggest that it might even have been accompanied by a temporary increase of globally averaged annual mean temperature. If correct, the main trigger of glaciations would not be the expansion of snow fields in subpolar belts, but rather the increase in temperature gradient between the low and the high latitudes.

[Emphasis added].

A Puzzle

George Kukla et al (2002) – written with a cast of eminent co-authors including Shackleton, Imbrie and Broecker:

At the end of the last interglacial period, over 100,000 yr ago, the Earth’s environments, similar to those of today, switched into a profoundly colder glacial mode. Glaciers grew, sea level dropped, and deserts expanded. The same transition occurred many times earlier, linked to periodic shifts of the Earth’s orbit around the Sun. The mechanism of this change, the most important puzzle of climatology, remains unsolved.

[Emphasis added].

Gradient in Insolation from Low to High Latitudes

Maureen Raymo & Kerim Nisancioglu (2003):

Based mainly on climate proxy records of the last 0.5 Ma, a general scientific consensus has emerged that variations in summer insolation at high northern latitudes are the dominant influence on climate over tens of thousands of years. The logic behind nearly a century’s worth of thought on this topic is that times of reduced summer insolation could allow some snow and ice to persist from year to year, lasting through the ‘‘meltback’’ season. A slight increase in accumulation from year to year, enhanced by a positive snow-albedo feedback, would eventually lead to full glacial conditions. At the same time, the cool summers are proposed to be accompanied by mild winters which, through the temperature-moisture feedback, would lead to enhanced winter accumulation of snow. Both effects, reduced spring-to-fall snowmelt and greater winter accumulation, seem to provide a logical and physically sound explanation for the waxing and waning of the ice sheets as high-latitude insolation changes.

Then they point out the problems with this hypothesis and move onto their theory:

We propose that the gradient in insolation between high and low latitudes may, through its influence on the poleward flux of moisture which fuels ice sheet growth, play the dominant role in controlling climate from ~3 to 1 million years ago..

And conclude with an important comment:

..Building a model which can reproduce the first-order features of the Earth’s Ice Age history over the Plio-Pleistocene would be an important step forward in the understanding of the dynamic processes that drive global climate change.

In a later article we will look at the results of GCMs in starting and ending ice ages.

Summertime NH High Latitude Insolation

Roe (2006):

The Milankovitch hypothesis is widely held to be one of the cornerstones of climate science. Surprisingly, the hypothesis remains not clearly defined despite an extensive body of research on the link between global ice volume and insolation changes arising from variations in the Earth’s orbit. In this paper, a specific hypothesis is formulated. Basic physical arguments are used to show that, rather than focusing on the absolute global ice volume, it is much more informative to consider the time rate of change of global ice volume.

This simple and dynamically-logical change in perspective is used to show that the available records support a direct, zero-lag, antiphased relationship between the rate of change of global ice volume and summertime insolation in the northern high latitudes.

[Emphasis added]

And with very nice curve fits of his hypothesis.
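Roe’s reframing is easy to demonstrate with synthetic data: if ice volume simply integrates a sign-flipped forcing, the volume record lags the forcing by a quarter cycle while its rate of change is anti-phased with the forcing at zero lag. Everything below is synthetic – no real insolation or ice-volume data:

```python
import math

# Synthetic illustration of Roe's point: assume dV/dt = -forcing.
N = 1000
forcing = [math.sin(2 * math.pi * t / N) for t in range(N)]  # one cycle

# Integrate dV/dt = -forcing to get "ice volume"
v, volume = 0.0, []
for f in forcing:
    v -= f
    volume.append(v)
dvdt = [-f for f in forcing]

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

print(round(corr(dvdt, forcing), 2))    # prints -1.0: exactly anti-phased
print(round(corr(volume, forcing), 2))  # near zero: V lags a quarter cycle
```

This is why comparing absolute ice volume against insolation at zero lag looks so unimpressive, while comparing its time derivative looks so clean – the change of variable, not new physics, does the work.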

Length of Southern Hemisphere Summer

Huybers & Denton (2008):

We conclude that the duration of Southern Hemisphere summer is more likely to control Antarctic climate than the intensity of Northern Hemisphere summer with which it (often misleadingly) covaries. In our view, near interhemispheric climate symmetry at the obliquity and precession timescales arises from a northern response to local summer intensity and a southern response to local summer duration.

And with very nice curve fits of their hypothesis.

Warming in Antarctic Changes Atmospheric CO2

Wolff et al (2009):

The change from a glacial to an interglacial climate is paced by variations in Earth’s orbit.

However, the detailed sequence of events that leads to a glacial termination remains controversial. It is particularly unclear whether the northern or southern hemisphere leads the termination. Here we present a hypothesis for the beginning and continuation of glacial terminations, which relies on the observation that the initial stages of terminations are indistinguishable from the warming stage of events in Antarctica known as Antarctic Isotopic Maxima, which occur frequently during glacial periods. Such warmings in Antarctica generally begin to reverse with the onset of a warm Dansgaard–Oeschger event in the northern hemisphere.

However, in the early stages of a termination, Antarctic warming is not followed by any abrupt warming in the north.

We propose that the lack of an Antarctic climate reversal enables southern warming and the associated atmospheric carbon dioxide rise to reach a point at which full deglaciation becomes inevitable. In our view, glacial terminations, in common with other warmings that do not lead to termination, are led from the southern hemisphere, but only specific conditions in the northern hemisphere enable the climate state to complete its shift to interglacial conditions.

[Emphasis added]

A Puzzle

In a paper on radiative forcing during glacial periods and attempts to calculate climate sensitivity, Köhler et al (2010) state:

Natural climate variations during the Pleistocene are still not fully understood. Neither do we know how much the Earth’s annual mean surface temperature changed in detail, nor which processes were responsible for how much of these temperature variations.

Another Perspective

Final comments from the always fascinating Carl Wunsch:

The long-standing question of how the slight Milankovitch forcing could possibly force such an enormous glacial–interglacial change is then answered by concluding that it does not do so..

..The appeal of explaining the glacial/interglacial cycles by way of the Milankovitch forcing is clear: it is a deterministic story..

..Evidence that Milankovitch forcing "controls" the records, in particular the 100 ka glacial/interglacial, is very thin and somewhat implausible, given that most of the high frequency variability lies elsewhere. These results are not a proof of stochastic control of the Pleistocene glaciations, nor that deterministic elements are not in part a factor. But the stochastic behavior hypothesis should not be set aside arbitrarily—as it has at least as strong a foundation as does that of orbital control. There is a common view in the paleoclimate community that describing a system as "stochastic" is equivalent to "unexplainable".

Nothing could be further from the truth (e.g., Gardiner, 1985): stochastic processes have a rich physics and kinematics which can be described and understood, and even predicted.

Conclusion

This is not an exhaustive list of hypotheses because I have definitely missed some (Wunsch, in another paper, notes there are at least 30 theories).

It’s also possible I have misinterpreted the key point of at least one of the hypotheses above (apologies to any authors of papers if so). Attempting to understand the ice ages, and attempting to survey the ideas of climate science on the ice ages are both daunting tasks.

What should be clear from this small foray into the subject is that there is no “Milankovitch theory”.

There are many theories with a common premise – that solar insolation changes via orbital changes "explain" the start and end of ice ages – but each offers a mechanism that contradicts the others.

Therefore, at most one of these theories can be correct.

And my current perspective – an obvious one after reading over 50 papers on the causes of the ice ages – is that the number of confusingly-named "Milankovitch theories" that are correct is zero.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models

References

Hopefully in the order they appeared in the article:

The last glacial cycle: transient simulations with an AOGCM, Robin Smith & Jonathan Gregory, Climate Dynamics (2012)

Insolation and Glacials, George Kukla (1972)

The role of ocean-atmosphere reorganizations in glacial cycles, Wallace Broecker & George Denton, Quaternary Science Reviews (1990)

Last Interglacial and Early Glacial ENSO, George Kukla, Clement, Cane, Gavin & Zebiak (2002)

Last Interglacial Climates, George Kukla et al, Quaternary Research (2002)

The 41 kyr world: Milankovitch’s other unsolved mystery, Maureen Raymo & Kerim Nisancioglu, Paleoceanography (2003)

In defense of Milankovitch, Gerard Roe, Geophysical Research Letters (2006)

Antarctic temperature at orbital timescales controlled by local summer duration, Huybers & Denton, Nature Geoscience (2008)

Glacial terminations as southern warmings without northern control, E. W. Wolff, H. Fischer & R. Röthlisberger, Nature Geoscience (2009)

What caused Earth’s temperature variations during the last 800,000 years? Data-based evidence on radiative forcing and constraints on climate sensitivity, Peter Köhler, Bintanja, Fischer, Joos, Knutti, Lohmann, & Masson-Delmotte, Quaternary Science Reviews (2010)

Quantitative estimate of the Milankovitch-forced contribution to observed Quaternary climate change, Carl Wunsch, Quaternary Science Reviews (2004)

Read Full Post »

In Part Four we started looking at the changes in solar insolation due to the different orbital effects.

Eccentricity itself has a negligible effect on solar insolation. Obliquity and precession change the (geographic and temporal) distribution of solar radiation, but not the annual amount.

Here is the annual variation for each season at 65ºN:

TOA-time-65N-500kyr-by-quarter

Figure 1

There is less year-to-year variation than there is in the value on any given day (compare figures 5 & 6 in Part Four).

Here is the corresponding graph for 55ºN:

TOA-time-55N-500kyr-by-quarter

Figure 2

Of course, higher solar radiation in one part of the year due to tilt, or obliquity, means less solar radiation in the “opposite” part of the year.

In the graphs above we see that at the peak of the Eemian inter-glacial, JJA (June-July-August) radiation is at a minimum, MAM (March-April-May) is on the upswing towards its peak, SON is on a downswing past its peak and, of course, DJF is very low and not changing much because there isn’t much sun at high latitudes during the winter.

So what about the annual variation? Let’s zoom in on the period around the Eemian inter-glacial. The top graph shows the daily average insolation for four different years, and the bottom graph shows the annual average by year:

TOA-time-120k-150kyrs-65'N-by-day-and-annual

Figure 3

And for reference the annual variation over the last 500 kyrs:

TOA-time-500ky-65N-annual-variation

Figure 4

And the same data for 55ºN:

TOA-time-120k-150kyrs-55'N-by-day-and-annual

Figure 5

TOA-time-500ky-55N-annual-variation

Figure 6

As we would expect, the peaks and troughs occur at the same times for 55ºN and 65ºN.

What differs between the two latitudes is the size of the change in annual insolation over time. The 65ºN insolation varies by 7 W/m² over the last 500 kyrs, while the 55ºN figure is not quite 3 W/m². By comparison, 45ºN varies by less than 1 W/m².

Around the 30 kyrs centered on the Eemian inter-glacial, the variation is:

  • 65ºN – 5.5 W/m²
  • 55ºN – 2.2 W/m²
  • 45ºN – 0.3 W/m²

And if we take the steepest part of the increase, from 145 kyr to 135 kyr, we get a per-century value of:

  • 65ºN – 40 mW/m² per century
  • 55ºN – 25 mW/m² per century
  • 45ºN – 2 mW/m² per century
  • (and in the southern hemisphere there were similar reductions in insolation over this period)

By comparison, due to increases in atmospheric CO2 and other “greenhouse” gases, the “radiative forcing” prior to any feedbacks (i.e., all other things remaining the same) is about 1.7 W/m² over 130 years, or 1.3 W/m² per century.

This forcing applies globally, of course, but in any case recent changes have been 30 – 50 times the rate of high latitude radiative change during one of the key transitions in our past climate.
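The rate comparison above can be checked in a few lines. A minimal sketch, using only the figures quoted in this article (they are illustrative, not authoritative values):

```python
# Figures quoted above: ~1.7 W/m² of GHG radiative forcing over ~130 years,
# versus ~40 mW/m² per century of annual insolation change at 65ºN
# during the steepest part of the 145-135 kyr transition.
ghg_forcing = 1.7                              # W/m²
years = 130
ghg_per_century = ghg_forcing / years * 100    # ~1.3 W/m² per century

orbital_per_century = 0.040                    # W/m² per century at 65ºN

ratio = ghg_per_century / orbital_per_century
print(round(ghg_per_century, 2), round(ratio))   # 1.31 33
```

The ratio of about 33 sits within the 30 – 50 range quoted, the spread coming from which latitude’s orbital rate is used for the comparison.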

These values and comparisons aren’t aimed at promoting or attacking any theory, they are just intended to get some understanding of the values in question.

Of course, annual changes are smaller than seasonal changes. So let’s look back at the seasonal values around 120 kyrs – 150 kyrs:

TOA-time-120k-150kyrs-65'N-by-season

Figure 7

And let’s make it easier to understand the changes by looking at the anomaly plot (signal minus the mean for each season):

TOA-detrended-time-120k-150kyrs-65'N-by-season

Figure 8

We have quite large changes (comparatively) in each season. For example, the March-April-May figure increases by 60 W/m² from 143 kyrs ago to 130 kyrs ago, which is almost 0.5 W/m² per century, on a par with recent radiative forcing changes due to GHGs.

The problem with just looking at MAM – and the reason why I started plotting all these results – is that if the increase in MAM insolation caused more rapid ice melt at the end of winter, didn’t the similarly large reduction in SON (autumn) insolation cause more ice to be there ready for spring? Each year has all the seasons, so the whole year has to be considered..

And if there is such a clear argument for one season being some kind of dominant force compared with another season (some strong non-linearity), why isn’t there a consensus on what it is (along with some evidence)?

Huybers & Wunsch (2005) noted:

Taking these two [Milankovitch and chaos] perspectives together, there are currently more than 30 different models of the seven late Pleistocene glacial cycles.

Lastly, for interest, here is a typical spectral power plot of the TOA solar insolation (normalized). This one happens to have each season as a separate curve, but there isn’t much difference between the seasons, so the plots pretty much overlay each other. The 3 vertical magenta lines represent (from left to right) the frequencies corresponding to periods of 41 kyrs, 23 kyrs and 19 kyrs:

TOA-Spectral-power-last500ky-by-season-65N

Figure 9

In some later articles we will look at the spectral characteristics of the ice age record, so knowing the spectral characteristics of orbital effects on insolation is important.
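To get a feel for how those periods show up in a spectrum, here is a minimal pure-Python periodogram of a synthetic “insolation” series built from the three orbital periods. The amplitudes are made up for illustration – a real calculation would use the insolation output of the orbital code:

```python
import math

# Synthetic 500 kyr "insolation" series, sampled every 1 kyr, built from the
# three orbital periods (41, 23 and 19 kyr). Amplitudes are illustrative only.
N = 500
sig = [math.cos(2 * math.pi * t / 41)
       + 0.7 * math.cos(2 * math.pi * t / 23)
       + 0.5 * math.cos(2 * math.pi * t / 19)
       for t in range(N)]

def power(k):
    """Periodogram power at frequency k/N cycles per kyr (plain DFT)."""
    re = sum(s * math.cos(2 * math.pi * k * t / N) for t, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * k * t / N) for t, s in enumerate(sig))
    return re * re + im * im

# The three strongest frequencies (skipping k=0, the mean) recover the input
# periods, to within the frequency resolution of a 500 kyr record
top = sorted(range(1, N // 2), key=power, reverse=True)[:3]
periods = sorted(N / k for k in top)
print([round(p, 1) for p in periods])   # [19.2, 22.7, 41.7]
```

The recovered periods are slightly off the inputs because 41, 23 and 19 kyr don’t divide evenly into the 500 kyr record – the same finite-record smearing that broadens the peaks in real paleoclimate spectra.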

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

References

Obliquity pacing of the late Pleistocene glacial terminations, Peter Huybers & Carl Wunsch, Nature (2005)

All graphs produced thanks to the Matlab code supplied by Jonathan Levine.

Read Full Post »

In Part Three we had a very brief look at the orbital factors that affect solar insolation.

Here we will look at these factors in more detail. We start with the current situation.

Seasonal Distribution of Incoming Solar Radiation

The earth is tilted on its axis (relative to the plane of orbit) so that in July the north pole “faces” the sun, while in January the south pole “faces” the sun.

Here are the TOA graphs for average incident solar radiation at different latitudes by month:

From Vardavas & Taylor (2007)

Figure 1

And now the average values first by latitude for the year, then by month for northern hemisphere, southern hemisphere and the globe:

TOA-solar-total-by-month-and-latitude-present

Figure 2

We can see that the southern hemisphere has a higher peak value – this is because the earth is closest to the sun (perihelion) on January 3rd, during the southern hemisphere summer.

This is also reflected in the global value which varies between 330 W/m² at aphelion (furthest away from the sun) to 352 W/m² at perihelion.

Eccentricity

There is a good introduction to planetary orbits in Wikipedia. I was saved from the tedium of having to work out how to implement an elliptical orbit vs time by the Matlab code kindly supplied by Jonathan Levine. He also supplied the solution to the much more difficult problem of insolation vs latitude at any day in the Quaternary period, which we will look at later.

Here is the TOA solar insolation by day of the year, as a function of the eccentricity of the orbit:

Daily-Change-TOA-Solar-vs-Eccentricity-2

Figure 3 – Updated

The earth’s orbit currently has an eccentricity of 0.0167. This means that the maximum variation in solar radiation is 6.9%.

Perihelion is 147.1 million km, while aphelion is 152.1 million km. The amount of solar radiation we receive follows the inverse square law: move twice as far away and it falls by a factor of four. So to calculate the difference between the min and max you simply calculate: (152.1/147.1)² = 1.069, or a change of 6.9%.
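As a quick check on that arithmetic – a sketch using the distances quoted above:

```python
# Perihelion and aphelion distances quoted above, in millions of km
perihelion, aphelion = 147.1, 152.1

# Inverse square law: insolation ratio between closest and furthest approach
variation = (aphelion / perihelion) ** 2 - 1
print(round(100 * variation, 1))   # 6.9 (percent)

# The same result follows from the eccentricity alone,
# since aphelion/perihelion = (1 + e)/(1 - e)
e = 0.0167
variation_from_e = ((1 + e) / (1 - e)) ** 2 - 1
print(round(100 * variation_from_e, 1))   # 6.9
```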

Over the past million or more years the earth’s orbit has changed its eccentricity, from a low close to zero, to a maximum of about 0.055. The period of each cycle is about 100,000 years.

Here is my calculation of change in total annual TOA solar radiation with eccentricity:

Annual-%Change-TOA-Solar-vs-Eccentricity

Figure 4

Looking at figure 1 of Imbrie & Imbrie (1980), just to get a rule of thumb, eccentricity changed from 0.05 to 0.02 over a 50,000 year period (from about 220k years ago to 170k years ago). This means the annual solar insolation dropped by 0.1% over 50,000 years, or 3 mW/m² per century. (This value is an over-estimate because it is the peak value with the sun overhead; if instead we take the summer months at high latitude the change becomes 0.8 mW/m² per century.)
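These numbers can be reproduced from the standard result that annual mean TOA insolation scales as (1−e²)^(−1/2). A sketch – the two baseline insolation values are assumptions for illustration (the solar constant for the overhead-sun case, and a rough ~400 W/m² for high-latitude summer), not outputs of the orbital code:

```python
import math

def annual_factor(e):
    """Annual mean TOA insolation relative to a circular orbit: (1 - e^2)^(-1/2)."""
    return 1.0 / math.sqrt(1.0 - e * e)

# Eccentricity falling from 0.05 to 0.02 over 50,000 years
drop = (annual_factor(0.05) - annual_factor(0.02)) / annual_factor(0.05)
print(round(100 * drop, 2))   # 0.11 (percent) -- the ~0.1% quoted above

centuries = 50_000 / 100
# Overhead-sun baseline (solar constant, ~1360 W/m²) -> ~3 mW/m² per century
print(round(1000 * drop * 1360 / centuries, 1))   # 2.9
# High-latitude summer baseline (assumed ~400 W/m²) -> ~0.8 mW/m² per century
print(round(1000 * drop * 400 / centuries, 1))    # 0.8
```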

It’s a staggering drop, and no wonder the strong 100,000 year cycle in climate history matching the Milankovitch eccentricity cycles is such a difficult theory to put together.

Obliquity & Precession

To understand the basics of these changes take a look at the Milankovitch article. Neither of these two effects, precession and obliquity, changes the total annual TOA incident solar radiation. They just change its distribution.

Here is the last 250,000 years of solar radiation on July 1st – for a few different latitudes:

TOA-Solar-July1-Latitude-vs0-250k-499px

Figure 5 – Click for a larger image

Notice that the equatorial insolation is of course lower than the mid-summer polar insolation.

Here is the same plot but for October 1st. Now the equatorial value is higher:

TOA-Solar-Oct1-Latitude-vs0-250k-499px

Figure 6 – Click for a larger image

Let’s take a look at the values for 65ºN, often implicated in ice age studies, but this time for the beginning of each month of the year (so the legend is now 1 = January 1st, 2 = Feb 1st, etc):

TOA-Solar-65N-bymonth-vs0-250k-lb-499px

Figure 7 – Click for a larger image

And just for interest I marked one date for the last inter-glacial – the Eemian inter-glacial as it is known.

Come up with a theory:

  • peak insolation at 65ºN
  • fastest rate of change
  • minimum insolation
  • average of summer months
  • average of winter half year
  • average autumn 3 months

Then pick from the graph and let’s start cooking.. Having trouble? Pick a different latitude. Southern Hemisphere – no problem, also welcome.

As we will see, there are a lot of theories, all of which call themselves “Milankovitch” but each one is apparently incompatible with other similarly-named “Milankovitch” theories.

At least we have a tool, kindly supplied by Jonathan Levine, which allows us to compute any value. So if any readers have an output request, just ask.

One word of caution for budding theorists of ice ages (hopefully we have many already) from Kukla et al (2002):

..The marine isotope record is commonly tuned to astronomic chronology, represented by June insolation at the top of the atmosphere at 60° or 65° north latitude. This was deemed justified because the frequency of the Pleistocene gross global climate states matches the frequency of orbital variations..

..The mechanism of the climate response to insolation remains unclear and the role of insolation in the high latitudes as opposed to that in the low latitudes is still debated..

..In either case, the link between global climates and orbital variations appears to be complicated and not directly controlled by June insolation at latitude 65°N. We strongly discourage dating local climate proxies by unsubstantiated links to astronomic variations..

[Emphasis added].

I’m a novice with the historical records and how they have been constructed, but I understand that SPECMAP is tuned to a Milankovitch theory, i.e., the dates of peak glacials and peak inter-glacials are set by astronomical values.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

References

Last Interglacial Climates, Kukla et al, Quaternary Research (2002)

Modeling the Climatic Response to Orbital Variations, John Imbrie & John Z. Imbrie, Science (1980)

Read Full Post »

In Part Two we looked at one paper by Lorenz from 1968 where he put forward the theory that climate might be “intransitive”. In common parlance we could write this as “climate might be chaotic” (even though there is a slight but important difference between the two definitions).

In this article we will have a bit of a look at the history of the history of climate – that is, a couple of old papers about ice ages.

These papers are quite dated and lots of new information has since come to light, and of course thousands of papers have since been written about the ice ages. So why a couple of old papers? It helps to create some context around the problem. These are “oft-cited”, or seminal, papers, and understanding ice ages is so complex that it is probably easiest to set out an older view as some kind of perspective.

At the very least, it helps get my thinking into order. Whenever I try to understand a climate problem I usually end up trying to understand some of the earlier oft-cited papers because most later papers rely on that context without necessarily repeating it.

Variations in the Earth’s Orbit: Pacemaker of the Ice Ages by JD Hays, J Imbrie, NJ Shackleton (1976) is referenced by many more recent papers that I’ve read – and, according to Google Scholar, cited by 2,656 other papers – that’s a lot in climate science.

For more than a century the cause of fluctuations in the Pleistocene ice sheets has remained an intriguing and unsolved scientific mystery. Interest in this problem has generated a number of possible explanations.

One group of theories invokes factors external to the climate system, including variations in the output of the sun, or the amount of solar energy reaching the earth caused by changing concentrations of interstellar dust; the seasonal and latitudinal distribution of incoming radiation caused by changes in the earth’s orbital geometry; the volcanic dust content of the atmosphere; and the earth’s magnetic field. Other theories are based on internal elements of the system believed to have response times sufficiently long to yield fluctuations in the range 10,000 to 1,000,000 years.

Such features include the growth and decay of ice sheets, the surging of the Antarctic ice sheet; the ice cover of the Arctic Ocean; the distribution of carbon dioxide between atmosphere and ocean; and the deep circulation of the ocean.

Additionally, it has been argued that as an almost intransitive system, climate could alternate between different states on an appropriate time scale without the intervention of any external stimulus or internal time constant.

This last idea is referenced as Lorenz 1968, the paper we reviewed in Part Two.

The authors note that previous work has provided evidence of orbital changes being involved in climate change, and make an interesting comment that we will see has not changed in the intervening 38 years:

The first [problem] is the uncertainty in identifying which aspects of the radiation budget are critical to climate change. Depending on the latitude and season considered most significant, grossly different climate records can be predicted from the same astronomical data..

Milankovitch followed Koppen and Wegener’s view that the distribution of summer insolation at 65°N should be critical to the growth and decay of ice sheets.. Kukla pointed out weaknesses.. and suggested that the critical time may be Sep and Oct in both hemispheres.. As a result, dates estimated for the last interglacial on the basis of these curves have ranged from 80,000 to 180,000 years ago.

The other problem at that time was the lack of quality data on the dating of various glacials and interglacials:

The second and more critical problem in testing the orbital theory has been the uncertainty of geological chronology. Until recently, the inaccuracy of dating methods limited the interval over which a meaningful test could be made to the last 150,000 years.

This paper then draws on some newer, better quality data for the last few hundred thousand years of temperature history. By the way, Hays was (and is) a Professor of Geology, Imbrie was (and is) a Professor of Oceanography and Shackleton was at the time in Quaternary Research, later a professor in the field.

Brief Introduction to Orbital Parameters that Might Be Important

Now, something we will look at in a later article, probably Part Four, is exactly what changes in solar insolation are caused by changes in the earth’s orbital geometry. But as an introduction to that question, there are three parameters that vary and are linked to climate change:

  1. Eccentricity, e, (how close is the earth’s orbit to a circle) – currently 0.0167
  2. Obliquity, ε, (the tilt of the earth’s axis) – currently 23.439°
  3. Precession, ω, (how close is the earth to the sun in June or December) – currently the earth is closest to the sun on January 3rd

The first, eccentricity, is the only one that changes the total amount of solar insolation received at top of atmosphere in a given year. Note that constant solar insolation at the top of atmosphere can still mean varying absorbed solar radiation, if more or less of that solar radiation happens to be reflected off ice sheets, for example, due to changes in obliquity.

The second, obliquity, or tilt, affects the difference between summer and winter TOA insolation. So it affects seasons and, specifically, the strength of seasons.

The third, precession, affects the amount of radiation received at different times of the year (moderated by item 1, eccentricity). So if the earth’s orbit was a perfect circle this parameter would disappear. When the earth is closest to the sun in June/July the Northern Hemisphere summer is stronger and the SH summer is weaker, and vice versa for winters.

So eccentricity affects total TOA insolation, while obliquity and precession change its distribution in season and latitude. However, variations in solar insolation at TOA depend on e², and so the total variation in TOA radiation has, over a very long period, been only about 0.1%.

This variation is very small, and yet the strongest “orbital signal” in the ice age record is that of eccentricity – a problem that, even for the proponents of this theory, has not yet been solved.

Last Interglacial Climates, by a cast of many including George J. Kukla, Wallace S. Broecker, John Imbrie, Nicholas J. Shackleton:

At the end of the last interglacial period, over 100,000 yr ago, the Earth’s environments, similar to those of today, switched into a profoundly colder glacial mode. Glaciers grew, sea level dropped, and deserts expanded. The same transition occurred many times earlier, linked to periodic shifts of the Earth’s orbit around the Sun. The mechanism of this change, the most important puzzle of climatology, remains unsolved.

[Emphasis added].

History Cores

Our geological data comprise measurements of three climatically sensitive parameters in two deep-sea sediment cores. These cores were taken from an area where previous work shows that sediment is accumulating fast enough to preserve information at the frequencies of interest. Measurements of one variable, the per mil enrichment of oxygen 18 (δ18O), make it possible to correlate these records with others throughout the world, and to establish that the sediment studied accumulated without significant hiatuses and at rates which show no major fluctuations..

.. From several hundred cores studied stratigraphically by the CLIMAP project, we selected two whose location and properties make them ideal for testing the orbital hypothesis. Most important, they contain together a climatic record that is continuous, long enough to be statistically useful (450,000 years) and characterized by accumulation rates fast enough (>3 cm per 1,000 years) to resolve climatic fluctuations with periods well below 20,000 years.

The cores were located in the Southern Indian Ocean. What is interesting about the cores is that three different climate proxies are captured at each location, including δ18O isotopes, which should be a measure of global ice sheets and of ocean temperature at the location of the cores.

Hays, Imbrie & Shackleton (1976)

Figure 1

There is much discussion about the dating of the cores. In essence, other information allows a few transitions to be dated, while the working assumption is that within these transitions the sediment accumulation is at a constant rate.

Although uniform sedimentation is an ideal which is unlikely to prevail precisely anywhere, the fact that the characteristics of the oxygen isotope record are present throughout the cores suggests that there can be no substantial lacunae, while the striking resemblance to records from distant areas shows that there can be no gross distortion of accumulation rate.

Spectral Analysis

The key part of their analysis is a spectral analysis of the data, compared with a spectral analysis of the “astronomical forcing”.

The authors say:

.. we postulate a single, radiation-climate system which transforms orbital inputs into climatic outputs. We can therefore avoid the obligation of identifying the physical mechanism of climatic response and specify the behavior of the system only in general terms. The dynamics of our model are fixed by assuming that the system is a time-invariant, linear system – that is, that its behavior in the time domain can be described by a linear differential equation with constant coefficients. The response of such a system in the frequency domain is well known: frequencies in the output match those of the input, but their amplitudes are modulated at different frequencies according to a gain function. Therefore, whatever frequencies characterize the orbital signals, we will expect to find them emphasized in paleoclimatic spectra (except for frequencies so high they would be greatly attenuated by the time constants of response)..

My translation – let’s set the orbital spectrum against the historical spectrum, without committing to a physical theory, and see how well the two match.
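Their linear, time-invariant assumption is easy to illustrate: drive a first-order linear “climate” with a sinusoidal forcing and the output comes back at the same period, only attenuated and lagged – which is why matching input and output frequencies is a meaningful test. A sketch with made-up numbers (the 5 kyr response time is purely illustrative; 41 kyr is the obliquity period):

```python
import math

# First-order linear system dy/dt = (x - y)/tau, driven at one orbital period.
tau, period, dt = 5.0, 41.0, 0.01          # all in kyr; tau is hypothetical
w = 2 * math.pi / period

y, ys = 0.0, []
for n in range(int(5 * period / dt)):      # run long enough to pass transients
    x = math.cos(w * n * dt)               # sinusoidal "orbital forcing"
    y += dt * (x - y) / tau                # Euler step of the linear ODE
    ys.append(y)

# The output amplitude matches the analytic LTI gain 1/sqrt(1 + (w*tau)^2),
# while the output period is unchanged at 41 kyr
gain = 1 / math.sqrt(1 + (w * tau) ** 2)
amp = max(ys[len(ys) // 2:])               # amplitude over the final cycles
print(round(gain, 3), round(amp, 3))   # 0.794 0.794
```

Frequencies pass through unchanged; only amplitude and phase are modified by the gain function – exactly the behavior Hays et al rely on when comparing spectra.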

The orbital effects:

From Hays et al (1976)

Figure 2

The historical data:

From Hays et al (1976)

Figure 3

We have also calculated spectra for two time series recording variations in insolation [their fig 4 – our fig 2], one for 55°S and the other for 60°N. To the nearest 1,000 years, the three dominant cycles in these spectra (41,000, 23,000 and 19,000 years) correspond to those observed in the spectra for obliquity and precession.

This result, although expected, underscores two important points. First, insolation spectra are characterized by frequencies reflecting obliquity and precession, but not eccentricity.

Second, the relative importance of the insolation components due to obliquity and precession varies with latitude and season.

[Emphasis added]

In commenting on the historical spectra they say:

Nevertheless, five of the six spectra calculated are characterized by three discrete peaks, which occupy the same parts of the frequency range in each spectrum. Those corresponding to periods from 87,000 to 119,000 years are labeled a; those from 37,000 to 47,000 years, b; and those from 21,000 to 24,000 years, c. This suggests that the b and c peaks represent a response to obliquity and precession variation, respectively.

Note that the major cycle shown in the frequency spectrum is the 100,000-year peak.

There is a lot of discussion of the data analysis in their paper – have a read to learn more. The detail probably isn’t essential for current understanding.

The authors conclude:

Over the frequency range 10⁻⁴ to 10⁻⁵ cycles per year, climatic variance of these records is concentrated in three discrete spectral peaks at periods of 23,000, 42,000, and approximately 100,000 years. These peaks correspond to the dominant periods of the earth’s solar orbit and contain respectively about 10, 25 and 50% of the climatic variance.

The 42,000-year climatic component has the same period as variations in the obliquity of the earth’s axis and retains a constant phase relationship with it.

The 23,000-year portion of the variance displays the same periods (about 23,000 and 19,000 years) as the quasi-periodic precession index.

The dominant 100,000 year climatic component has an average period close to, and is in phase with, orbital eccentricity. Unlike the correlations between climate and the higher frequency orbital variations (which can be explained on the assumption that the climate system responds linearly to orbital forcing) an explanation of the correlations between climate and eccentricity probably requires an assumption of non-linearity.

It is concluded that changes in the earth’s orbital geometry are the fundamental cause of the succession of Quaternary ice ages.

Things were looking good for explanations of the ice ages in 1976..
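One generic way to see why the authors invoke nonlinearity for the 100,000-year cycle: a linear, time-invariant system can only pass through the frequencies present in its input, whereas even a trivial nonlinearity creates new ones. A hypothetical numerical sketch (my own, not from the paper), using the two precession tones:

```python
import numpy as np

# Two precession-band tones; 437 = 19 * 23, so a record of 437*8 kyr
# puts every tone (and their sum/difference tones) on exact FFT bins.
n_kyr = 437 * 8
t = np.arange(n_kyr)
two_tones = np.sin(2 * np.pi * t / 23.0) + np.sin(2 * np.pi * t / 19.0)

# A linear, time-invariant system cannot create new frequencies; a
# simple nonlinearity (here just squaring) generates the difference
# tone at 1/19 - 1/23 cycles/kyr, i.e. a period of 437/4 = 109.25 kyr.
squared = two_tones ** 2

freqs = np.fft.rfftfreq(n_kyr, d=1.0)
p_linear = np.abs(np.fft.rfft(two_tones)) ** 2
p_squared = np.abs(np.fft.rfft(squared)) ** 2

k = np.argmin(np.abs(freqs - (1 / 19 - 1 / 23)))  # difference-frequency bin
print(round(float(1 / freqs[k]), 2))              # 109.25
print(bool(p_squared[k] > p_linear[k]))           # True: only after the nonlinearity
```

The difference tone of the two precession periods (about 109 kyr) lies close to the ~100 kyr eccentricity band, which is one generic reason a nonlinear response is considered necessary to explain the dominant cycle.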

For those who want to understand more recent evaluation of the spectral analysis of temperature history vs orbital forcing, check out papers by Carl Wunsch from 2003, 2004 and 2005, e.g. The spectral description of climate change including the 100 ky energy, Climate Dynamics (2003).

A Few Years Later

Here are a few comments from Imbrie & Imbrie (1980):

Since the work of Croll and Milankovitch, many investigations have been aimed at the central question of the astronomical theory of the ice ages:

Do changes in orbital geometry cause changes in climate that are geologically detectable?

On the one hand, climatologists have attacked the problem theoretically by adjusting the boundary conditions of energy-balance models, and then observing the magnitude of the calculated response. If these numerical experiments are viewed narrowly as a test of the astronomical theory, they are open to question because the models used contain untested parameterizations of important physical processes. Work with early models suggested that the climatic response to orbital changes was too small to account for the succession of Pleistocene ice ages. But experiments with a new generation of models suggest that orbital variations are sufficient to account for major changes in the size of Northern Hemisphere ice sheets..

..In 1968, Broecker et al. (34, 35) pointed out that the curve for summertime irradiation at 45°N was a much better match to the paleoclimatic records of the past 150,000 years than the curve for 65°N chosen by Milankovitch..

Current Status. This is not to say that all important questions have been answered. In fact, one purpose of this article is to contribute to the solution of one of the remaining major problems: the origin and history of the 100,000-year climatic cycle.

At least over the past 600,000 years, almost all climatic records are dominated by variance components in a narrow frequency band centered near a 100,000-year cycle. Yet a climatic response at these frequencies is not predicted by the Milankovitch version of the astronomical theory – or any other version that involves a linear response..

..Another problem is that most published climatic records that are more than 600,000 years old do not exhibit a strong 100,000-year cycle..

The goal of our modeling effort has been to simulate the climatic response to orbital variations over the past 500,000 years. The resulting model fails to simulate four important aspects of this record. It fails to produce sufficient 100k power; it produces too much 23k and 19k power; it produces too much 413k power; and it loses its match with the record around the time of the last 413k eccentricity minimum, when values of e [eccentricity] were low and the amplitude of the 100k eccentricity cycle was much reduced..

..The existence of an unstable fixed point makes tuning an extremely sensitive task. For example, Weertman notes that changing the value of one parameter by less than 1 percent of its physically allowed range made the difference between a glacial regime and an interglacial regime in one portion of an experimental run while leaving the rest virtually unchanged..

This would be a good example of Lorenz’s concept of an almost intransitive system (one whose characteristics over long but finite intervals of time depend strongly on initial conditions).

Once again the spectre of the eminent Lorenz is raised. We will see in later articles that, even with much more sophisticated models, it is not easy to start an ice age, or to turn an ice age into an inter-glacial.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

References

Variations in the Earth’s Orbit: Pacemaker of the Ice Ages, JD Hays, J Imbrie & NJ Shackleton, Science (1976)

Modeling the Climatic Response to Orbital Variations, John Imbrie & John Z. Imbrie, Science (1980)

Last Interglacial Climates, Kukla et al, Quaternary Research (2002)

Read Full Post »

A really long time ago I wrote Ghosts of Climates Past. I’ve read a lot of papers on the ice ages and inter-glacials but never got to the point of being able to write anything coherent.

This post is my attempt to get myself back into gear – after a long time being too busy to write any articles.

Here is what the famous Edward Lorenz said in his 1968 paper, Climatic Determinism – the opening paper at a symposium titled Causes of Climatic Change:

The often-accepted hypothesis that the physical laws governing the behavior of an atmosphere determine a unique climate is examined critically. It is noted that there are some physical systems (transitive systems) whose statistics taken over infinite time intervals are uniquely determined by the governing laws and the environmental conditions, and other systems (intransitive systems) where this is not the case.

There are also certain transitive systems (almost intransitive systems) whose statistics taken over very long but finite intervals differ considerably from one such interval to another. The possibility that long-term climatic changes may result from the almost-intransitivity of the atmosphere rather than from environmental changes is suggested.

The language might be obscure to many readers. But he makes it clear in the paper:

lorenz-1968-1

Here Lorenz describes transitive systems – that is, systems where the starting conditions do not determine the long-term statistics of the climate. Instead, the physics and the “outside influences” or forcings (such as the solar radiation incident on the planet) determine the future climate.

lorenz-1968-2

Here Lorenz introduces the well-known concept of “chaotic systems”, where different initial conditions result in different long-term results. (Note that there can be chaotic systems where different initial conditions produce different time-series results but the same statistical results over a period of time – so intransitive is the more restrictive term; see the paper for more details.)
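Lorenz’s distinction can be illustrated with any chaotic map. A toy example (the logistic map – my own choice, nothing to do with Lorenz’s atmospheric models): two nearly identical starting points soon produce completely different time series, yet their long-run statistics agree – chaotic, but still transitive in Lorenz’s sense.

```python
import numpy as np

def logistic_trajectory(x0, n, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

a = logistic_trajectory(0.2, 100_000)
b = logistic_trajectory(0.2 + 1e-9, 100_000)  # perturbed by one part in a billion

# Sensitive dependence: the two trajectories soon bear no
# point-by-point resemblance to each other...
print(bool(np.max(np.abs(a[:200] - b[:200])) > 0.5))   # True

# ...yet their long-run statistics are essentially identical
# (a "transitive" system in Lorenz's terminology).
print(bool(abs(float(a.mean()) - float(b.mean())) < 0.01))  # True
```

An *intransitive* system would fail the second test: different starting points would settle into genuinely different long-term statistics.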

lorenz-1968-3

lorenz-1968-4

lorenz-1968-5

Well, interesting stuff from the eminent Lorenz.

A later paper, Kagan, Maslova & Sept (1994), commented on (perhaps inspired by) Lorenz’s 1968 paper and produced some interesting results from quite a simple model:

Kagan et al 1994-2 Kagan et al 1994-1

That is, a few coupled systems, working together can produce profound shifts in the Earth’s climate with periods like 80,000 years.

In case anyone thinks it’s just obscure foreign journals that comment approvingly on Lorenz’s work, the well-published climate scientist James Hansen had this to say:

The variation of the global-mean annual-mean surface air temperature during the 100-year control run is shown in Figure 1. The global mean temperature at the end of the run is very similar to that at the beginning, but there is substantial unforced variability on all time scales that can be examined, that is, up to decadal time scales. Note that an unforced change in global temperature of about 0.4°C (0.3°C, if the curve is smoothed with a 5-year running mean) occurred in one 20-year period (years 50-70). The standard deviation about the 100-year mean is 0.11°C. This unforced variability of global temperature in the model is only slightly smaller than the observed variability of global surface air temperature in the past century, as discussed in section 5. The conclusion that unforced (and unpredictable) climate variability may account for a large portion of climate change has been stressed by many researchers; for example, Lorenz [1968], Hasselmann [1976] and Robock [1978].

[Emphasis added].

And here is their Figure 1, the control run, from that paper:

Hansen et al 1988
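For readers who want to reproduce the kind of diagnostics quoted above – a standard deviation about the run mean, and a 5-year running mean – here is a sketch on a synthetic stand-in series (arbitrary AR(1) “red” noise, emphatically not the GISS model output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a 100-year control run of global-mean surface
# air temperature anomaly: AR(1) red noise with arbitrary parameters.
n_years = 100
anom = np.zeros(n_years)
for i in range(1, n_years):
    anom[i] = 0.6 * anom[i - 1] + rng.normal(0.0, 0.08)

# Standard deviation about the 100-year mean (Hansen et al quote
# 0.11 degC for the actual model run)
print(round(float(anom.std()), 3))

# 5-year running mean, as used to smooth the curve in their Figure 1
running5 = np.convolve(anom, np.ones(5) / 5, mode="valid")
print(len(running5))   # 96 smoothed points
```

The point of the exercise is that even with zero forcing, red noise alone produces multi-decadal excursions that look like "climate change".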

In later articles we will look at some of the theories of Milankovitch cycles. Confusingly, many different theories, mostly inconsistent with each other, all go by the same name.

Articles in the Series

Part One – An introduction

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

References

Climatic Determinism, Edward Lorenz (1968)

Discontinuous auto-oscillations of the ocean thermohaline circulation and internal variability of the climate system, Kagan, Maslova & Sept (1994)

Global Climate Changes as Forecast by Goddard Institute for Space Studies Three-Dimensional Model, Hansen et al (1988)

Read Full Post »

In Part One we saw:

  • some trends based on real radiosonde measurements
  • some reasons why long term radiosonde measurements are problematic
  • examples of radiosonde measurement “artifacts” from country to country
  • the basis of reanalyses like NCEP/NCAR
  • an interesting comparison of reanalyses against surface pressure measurements
  • a comparison of reanalyses against one satellite measurement (SSMI)

But we only touched on the satellite data (shown in Trenberth, Fasullo & Smith in comparison to some reanalysis projects).

Wentz & Schabel (2000) reviewed water vapor, sea surface temperature and air temperature from various satellites. On water vapor they said:

..whereas the W [water vapor] data set is a relatively new product beginning in 1987 with the launch of the special sensor microwave imager (SSM/I), a multichannel microwave radiometer. Since 1987 four more SSM/I’s have been launched, providing an uninterrupted 12-year time series. Imaging radiometers before SSM/I were poorly calibrated, and as a result early water-vapour studies (7) were unable to address climate variability on interannual and decadal timescales.

The advantage of SSMI is that it measures the 22 GHz water vapor line. Unlike measurements in the IR around 6.7 μm (for example the HIRS instrument) which require some knowledge of temperature, the 22 GHz measurement is a direct reflection of water vapor concentration. The disadvantage of SSMI is that it only works over the ocean because of the low ocean emissivity (but variable land emissivity). And SSMI does not provide any vertical resolution of water vapor concentration, only the “total precipitable water vapor” (TPW) also known as “column integrated water vapor” (IWV).

The algorithm, verification and error analysis for the SSMI can be seen in Wentz’s 1997 JGR paper: A well-calibrated ocean algorithm for special sensor microwave / imager.
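For reference, IWV (or TPW) is defined by integrating specific humidity over the depth of the atmosphere: IWV = (1/g)∫q dp, where 1 kg/m² is equivalent to 1 mm of precipitable water. A sketch with a made-up humidity profile (the numbers are illustrative, not from any of the papers discussed):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def iwv_from_profile(q, p):
    """IWV = (1/g) * integral of specific humidity q over pressure p.

    q : specific humidity (kg/kg) at each pressure level
    p : pressure levels in Pa
    Returns kg/m^2, numerically equal to mm of precipitable water.
    Uses the trapezoidal rule on the given levels.
    """
    dp = np.diff(p)
    q_mid = 0.5 * (q[:-1] + q[1:])
    return abs(float(np.sum(q_mid * dp))) / G

# Illustrative (made-up) moist-tropics profile, surface to 300 hPa
p = np.array([1000e2, 850e2, 700e2, 500e2, 300e2])   # Pa
q = np.array([0.018, 0.012, 0.007, 0.003, 0.0005])   # kg/kg

print(round(iwv_from_profile(q, p), 1))   # 51.2 (kg/m^2, i.e. ~51 mm)
```

This also makes clear why a "total column" measurement like SSMI carries no vertical information: many different profiles q(p) integrate to the same IWV.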

Here is Wentz & Schabel’s graph of IWV over time (shown as W in their figure):

From Wentz & Schabel (2000)

Figure 1 – Region captions added to each graph

They calculate, for the short period in question (1988-1998):

  • 1.9%/decade for 20°N – 60°N
  • 2.1%/decade for 20°S – 20°N
  • 1.0%/decade for 20°S – 60°S

Soden et al (2005) take the dataset a little further and compare it to model results:

From Soden et al (2005)

Figure 2

They note the global trend of 1.4 ± 0.78 %/decade.

As their paper is more about upper tropospheric water vapor they also evaluate the change in channel 12 of the HIRS instrument (High Resolution Infrared Radiometer Sounder):

The radiance channel centered at 6.7 μm (channel 12) is sensitive to water vapor integrated over a broad layer of the upper troposphere (200 to 500 hPa) and has been widely used for studies of upper tropospheric water vapor. Because clouds strongly attenuate the infrared radiation, we restrict our analysis to clear-sky radiances in which the upwelling radiation in channel 12 is not affected by clouds.

The change in radiance from channel 12 is approximately zero over the time period, which for technical reasons (see note 1) corresponds to roughly constant relative humidity in that region from the early 1980s to 2004. You can read the technical explanation in their paper, but as we are focusing on total water vapor (IWV) we will leave a discussion of UTWV for another day.

Updated Radiosonde Trends

Durre et al updated radiosonde trends in their 2009 paper. There is a lengthy extract from the paper in note 2 (end of article) to give insight into why radiosonde data cannot just be taken “as is”, and why a method has to be followed to identify and remove stations with documented or undocumented instrument changes.

Importantly they note, as with Ross & Elliott 2001:

..Even though the stations were located in many parts of the globe, only a handful of those that qualified for the computation of trends were located in the Southern Hemisphere. Consequently, the trend analysis itself was restricted to the Northern Hemisphere as in that of RE01..

Here are their time-based trends:

From Durre et al (2009)

Figure 3

And a map of trends:

From Durre et al (2009)

Figure 4

Note the sparse coverage of the oceans and also the land regions in Africa and Asia, except China.

And their table of results:

From Durre et al (2009)

Figure 5

A very interesting note on the effect of their removal of stations based on detection of instrument changes and other inhomogeneities:

Compared to trends based on unadjusted PW data (not shown), the trends in Table 2 are somewhat more positive. For the Northern Hemisphere as a whole, the unadjusted trend is 0.22 mm/decade, or 0.23 mm/decade less than the adjusted trend.

This tendency for the adjustments to yield larger increases in PW is consistent with the notion that improvements in humidity measurements and observing practices over time have introduced an artificial drying into the radiosonde record (e.g., RE01).

TOPEX Microwave

Brown et al (2007) evaluated data from the TOPEX Microwave Radiometer (TMR). This is included on the TOPEX/Poseidon oceanography satellite and is dedicated to measuring the integrated water vapor content of the atmosphere. TMR is nadir pointing and measures the radiometric brightness temperature at 18, 21 and 37 GHz. As with SSMI, it only provides data over the ocean.

For the period of operation of the satellite (1992 – 2005) they found a trend of 0.90 ± 0.06 mm/decade:

From Brown et al (2007)

Figure 6 – Click for a slightly larger view

And a map view:

From Brown et al (2007)

Figure 7

Paltridge et al (2009)

Paltridge, Arking & Pook (2009) – P09 – take a look at the NCEP/NCAR reanalysis project from 1973 – 2007. They chose 1973 as the start date for the reasons explained in Part One – Elliott & Gaffen have shown that pre-1973 data has too many problems. They focus on humidity data below 500 mbar, as measurements of humidity at higher altitudes and lower temperatures are more prone to radiosonde problems.

The NCEP/NCAR data shows positive trends below 850 mbar (=hPa) in all regions, negative trends above 850 mbar in the tropics and midlatitudes, and negative trends above 600 mbar in the northern midlatitudes.

Here are the water vapor trends vs height (pressure) for both relative humidity and specific humidity:

From Paltridge et al (2009)

Figure 8

And here is the map of trends:

From Paltridge et al (2009)

Figure 9

They comment on the “boundary layer” vs “free troposphere” issue.. In brief the boundary layer is that “well-mixed layer” close to the surface where the friction from the ground slows down the atmospheric winds and results in more turbulence and therefore a well-mixed layer of atmosphere. This is typically around 300m to 1000m high (there is no sharp “cut off”). At the ocean surface the atmosphere tends to be saturated (if the air is still) and so higher temperatures lead to higher specific humidities. (See Clouds and Water Vapor – Part Two if this is a new idea). Therefore, the boundary layer is uncontroversially expected to increase its water vapor content with temperature increases. It is the “free troposphere” or atmosphere above the boundary layer where the debate lies.
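The “higher temperatures lead to higher specific humidities” point follows from the Clausius–Clapeyron relation: saturation vapor pressure rises roughly 6–7% per kelvin at surface temperatures. A quick check using the Magnus approximation (a standard empirical formula, not taken from the papers discussed here):

```python
import math

def sat_vapor_pressure_hpa(t_celsius):
    """Saturation vapor pressure over water in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

e15 = sat_vapor_pressure_hpa(15.0)
e16 = sat_vapor_pressure_hpa(16.0)
pct_per_k = 100.0 * (e16 / e15 - 1.0)

print(round(e15, 1))        # ~17.0 hPa at 15 C
print(round(pct_per_k, 1))  # ~6.6 % increase per kelvin
```

This is why the boundary-layer response is uncontroversial: a saturated (or near-saturated) layer in contact with a warmer ocean surface must hold more water vapor. The free troposphere carries no such guarantee.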

They comment:

It is of course possible that the observed humidity trends from the NCEP data are simply the result of problems with the instrumentation and operation of the global radiosonde network from which the data are derived.

The potential for such problems needs to be examined in detail in an effort rather similar to the effort now devoted to abstracting real surface temperature trends from the face-value data from individual stations of the international meteorological networks.

In the meantime, it is important that the trends of water vapor shown by the NCEP data for the middle and upper troposphere should not be “written off” simply on the basis that they are not supported by climate models—or indeed on the basis that they are not supported by the few relevant satellite measurements.

There are still many problems associated with satellite retrieval of the humidity information pertaining to a particular level of the atmosphere— particularly in the upper troposphere. Basically, this is because an individual radiometric measurement is a complicated function not only of temperature and humidity (and perhaps of cloud cover because “cloud clearing” algorithms are not perfect), but is also a function of the vertical distribution of those variables over considerable depths of atmosphere. It is difficult to assign a trend in such measurements to an individual cause.

Since balloon data is the only alternative source of information on the past behavior of the middle and upper tropospheric humidity and since that behavior is the dominant control on water vapor feedback, it is important that as much information as possible be retrieved from within the “noise” of the potential errors.

So what has P09 added to the sum of knowledge? We can already see the NCEP/NCAR trends in Trends and variability in column-integrated atmospheric water vapor by Trenberth et al from 2005.

Did the authors just want to take the reanalysis out of the garage, drive it around the block a few times and park it out front where everyone can see it?

No, of course not!

– I hear all the NCEP/NCAR believers say.

One of our commenters asked me to comment on Paltridge’s reply to Dessler (which was in turn a response to Paltridge..), and linked to another blog article. It seems that even the author of that blog article is confused about NCEP/NCAR. This reanalysis project (as explained in Part One) is a model output, not a radiosonde dataset:

Humidity is in category B – ‘although there are observational data that directly affect the value of the variable, the model also has a very strong influence on the value ‘

And those who read Kalnay’s 1996 paper describing the project will see that, with the huge amount of data going into the model, the data wasn’t quality checked by human inspection on the way in. Various quality-control algorithms attempt to (automatically) remove “bad data”.

This is why we have reviewed Ross & Elliott (2001) and Durre et al (2009). These papers review the actual radiosonde data and find increasing trends in IWV. They also describe in a lot of detail the process they had to go through to produce a decent dataset. The authors of both papers explained that they could only produce a meaningful trend for the northern hemisphere – there is not enough quality data in the southern hemisphere to even attempt one.

And Durre et al note that when they use the complete dataset the trend is half that calculated with problematic data removed.

This is the essence of the problem with Paltridge et al (2009).

Why is Ross & Elliott (2001) not reviewed and compared? If Ross & Elliott found that Southern Hemisphere trends could not be calculated because of the sparsity of quality radiosonde data, why doesn’t P09 comment on that? Perhaps Ross & Elliott are wrong. But no comment from P09. (Durre et al found the same problem with SH data – probably published too late for P09, but not too late for the comments the authors have been making in 2010.)

In The Mass of the Atmosphere: A Constraint on Global Analyses, Trenberth & Smith pointed out clear problems with NCEP/NCAR vs ERA-40. Perhaps Trenberth and Smith are wrong. Or perhaps there is another way to understand these results. But no comment on this from P09.

P09 comment on the issues with satellite humidity retrieval for different layers of the atmosphere but no comment on the results from the microwave SSMI which has a totally different algorithm to retrieve IWV. And it is important to understand that they haven’t actually demonstrated a problem with satellite measurements. Let’s review their comment:

In the meantime, it is important that the trends of water vapor shown by the NCEP data for the middle and upper troposphere should not be “written off” simply on the basis that they are not supported by climate models—or indeed on the basis that they are not supported by the few relevant satellite measurements.

The reader of the paper wouldn’t know that Trenberth & Smith have demonstrated an actual reason for preferring ERA-40 (if any reanalysis is to be used).

The reader of the paper might understand “a few relevant satellite measurements” as meaning there wasn’t much data from satellites. If you review Figure 4 you can see that the quality radiosonde data is essentially mid-latitude northern hemisphere land. Satellites – that is, multiple satellites with different instruments at different frequencies – have covered the oceans far more comprehensively than radiosondes. Are the satellites all wrong?

The reader of the paper would think that the dataset has been apparently ditched because it doesn’t fit climate models.

This is probably the view of Paltridge, Arking & Pook. But they haven’t demonstrated it. They have just implied it.

Dessler & Davis (2010)

Dessler & Davis responded to P09. They plot graphs from 1979 to the present. The reason for starting at 1979 is that this is when satellite data was introduced. All of the reanalysis projects except NCEP/NCAR incorporated satellite humidity data. (NCEP/NCAR does incorporate satellite data for some other fields.)

Basically, when data from a new source is introduced, even if it is more accurate, it can introduce spurious trends – even in the opposite direction to the real trends. This was explained in Part One under the heading Comparing Reanalysis of Humidity. So trend analysis usually takes place over periods of consistent data sources.
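A toy example (invented numbers, purely illustrative) of how a data-source change can flip a trend: a series with a genuine slight decline acquires a constant offset when a “new instrument” arrives mid-record, and the fitted trend comes out positive.

```python
import numpy as np

years = np.arange(1973, 2008)

# A genuine slight DECLINE of 0.02 units/year throughout (made-up numbers)
true_values = 10.0 - 0.02 * (years - 1973)

# ...but from 1979 a "new data source" reports with a constant +1.0
# offset relative to the old one (an undetected inhomogeneity)
observed = true_values + np.where(years >= 1979, 1.0, 0.0)

true_trend = np.polyfit(years, true_values, 1)[0]
obs_trend = np.polyfit(years, observed, 1)[0]
print(round(float(true_trend), 3))  # -0.02
print(round(float(obs_trend), 3))   # 0.004 -- the step masquerades as a rise
```

No individual year is badly wrong, yet the least-squares trend over the full record has the wrong sign – which is exactly why homogeneity adjustments (or restriction to consistent-source periods) matter so much.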

This figure contrasts short term relationships between temperature and humidity with long term relationships:

From Dessler & Davis (2010)

Figure 10

If the blog I referenced earlier is anything to go by, the primary reason for producing this figure has been missed. Given that the blog article seemed not to comprehend that NCEP/NCAR is a reanalysis (= model output), that’s not so surprising.

Dessler & Davis said:

There is poorer agreement among the reanalyses, particularly compared to the excellent agreement for short‐term fluctuations. This makes sense: handling data inhomogeneities will introduce long‐term trends in the data but have less effect on short‐term trends. This is why long term trends from reanalyses tend to be looked at with suspicion [e.g., Paltridge et al., 2009; Thorne and Vose, 2010; Bengtsson et al., 2004].

[Emphasis added]

They are talking about artifacts of the model (NCEP/NCAR). In the short term the relationship between humidity and temperature agrees quite well among the different reanalyses. But in the longer term NCEP/NCAR doesn’t agree – demonstrating that it is likely introducing biases.

The alternative, as Dessler & Davis explain, is that there is somehow an explanation for a long term negative feedback (temperature and water vapor) with a short term positive feedback.

If you look around the blog world, or at, say, Professor Lindzen, you don’t find this. You find arguments about why short term feedback is negative. Not an argument that short term is positive and yet long term is negative.

I agree that many people say:  “I don’t know, it’s complicated, perhaps there is a long term negative feedback..” and I respect that point of view.

But in the blog article pointed to me by our commenter in Part One, the author said:

JGR let some decidedly unscientific things slip into that Dessler paper. One of the reasons provided is nothing more than a form of argument from ignorance: “there’s no theory that explains why the short term might be different to the long term”.

Why would any serious scientist admit that they don’t have the creativity or knowledge to come up with some reasons, and worse, why would they think we’d find that ignorance convincing?

..It’s not that difficult to think of reasons why it’s possible that humidity might rise in the short run, but then circulation patterns or other slower compensatory effects shift and the long run pattern is different. Indeed they didn’t even have to look further than the Paltridge paper they were supposedly trying to rebut (see Garth’s writing below). In any case, even if someone couldn’t think of a mechanism in a complex unknown system like our climate, that’s not “a reason” worth mentioning in a scientific paper.

The point that seems to have been missed is that this is not a reason to ditch the primary dataset, but instead a reason why NCEP/NCAR is probably flawed compared with all the other reanalyses. And compared with the primary dataset. And compared with multiple satellite datasets.

This is the issue with reanalyses. They introduce spurious biases. Bengtsson explained how (specifically for ERA-40). Trenberth & Smith have already demonstrated it for NCEP/NCAR. And now Dessler & Davis have simply pointed out another reason for taking that point of view.

The blog writer thinks that Dessler is trying to ditch the primary dataset because of an argument from ignorance. I can understand the confusion.

It is still confusion.

One last point to add is that Dessler & Davis also added the very latest in satellite water vapor data – the AIRS instrument from 2003. AIRS is a big step forward in satellite measurement of water vapor, a subject for another day.

AIRS also shows the same trends as the other reanalyses and different from NCEP/NCAR.

A Scenario

Before reaching the conclusion I want to throw a scenario out there. It is imaginary.

Suppose that there were two sources of data for temperature over the surface of the earth – temperature stations and satellite. Suppose the temperature stations were located mainly in mid-latitude northern hemisphere locations. Suppose that there were lots of problems with temperature stations – instrument changes & environmental changes close to the temperature stations (we will call these environmental changes “UHI”).

Suppose the people who had done the most work analyzing the datasets and trying to weed out the real temperature changes from the spurious ones had demonstrated that the temperature had decreased over northern hemisphere mid-latitudes. And that they had claimed that quality southern hemisphere data was too “thin on the ground” to really draw any conclusions from.

Suppose that satellite data from multiple instruments, each using different technology, had also demonstrated that temperatures were decreasing over the oceans.

Suppose that someone fed the data from the (mostly NH) land-based temperature stations – without any human intervention on the UHI and instrument changes – into a computer model.

And suppose this computer model said that temperatures were increasing.

Imagine it, for a minute. I think we can picture the response.

And yet, this is very much the situation we are confronted with on integrated water vapor (IWV). I have tried to think of a reason why so many people would be huge fans of this particular model output. I did think of one, but had to reject it immediately as being ridiculous.

I hope someone can explain why NCEP/NCAR deserves the fan club it has currently built up.

Conclusion

Radiosonde datasets, despite their problems, have been analyzed. The researchers have found positive water vapor trends for the northern hemisphere with these datasets. As far as I know, no one has used radiosonde datasets to find the opposite.

Radiosonde datasets provide excellent coverage for mid-latitude northern hemisphere land, and, with a few exceptions, poor coverage elsewhere.

Satellites, using IR and microwave, demonstrate increasing water vapor over the oceans for the shorter time periods in which they have been operating.

Reanalysis projects have taken in various data sources and, using models, have produced output values for IWV (total water vapor) with mixed results.

Reanalysis projects all have the benefit of convenience, but none are perfect. The dry mass of the atmosphere, which should be constant within noise errors unless a new theory comes along, demonstrates that NCEP/NCAR is worse than ERA-40.

ERA-40 demonstrates an increasing IWV trend. NCEP/NCAR demonstrates a decreasing one.

Some people have taken NCEP/NCAR for a drive around the block and parked it in front of their house and many people have wandered down the street to admire it. But it’s not the data. It’s a model.

Perhaps Paltridge, Arking or Pook can explain why NCEP/NCAR is a quality dataset. Unfortunately, their paper doesn’t demonstrate it.

It seems that some people are really happy if one model output or one dataset or one paper says something different from what 5 or 10 or 100 others are saying. If that makes you, the reader, happy, then at least the world has fewer deaths from stress.

In any field of science there are outliers.

The question on this blog at least, is what can be proven, what can be demonstrated and what evidence lies behind any given claim. From this blog’s perspective, the fact that outliers exist isn’t really very interesting. It is only interesting to find out if in fact they have merit.

In the world of historical climate datasets nothing is perfect. It seems pretty clear that integrated water vapor has been increasing over the last 20-30 years. But without satellites, even though we have a long history of radiosonde data, we have quite a limited dataset geographically.

If we can only use radiosonde data perhaps we can just say that water vapor has been increasing over northern hemisphere mid-latitude land for nearly 40 years. If we can use satellite as well, perhaps we can say that water vapor has been increasing everywhere for over 20 years.

If we can use the output from reanalysis models and do a lucky dip perhaps we can get a different answer.

And if someone comes along, analyzes the real data and provides a new perspective then we can all have another review.

References

On the Utility of Radiosonde Humidity Archives for Climate Studies, Elliot & Gaffen, Bulletin of the American Meteorological Society (1991)

Relationships between Tropospheric Water Vapor and Surface Temperature as Observed by Radiosondes, Gaffen, Elliott & Robock, Geophysical Research Letters (1992)

Column Water Vapor Content in Clear and Cloudy Skies, Gaffen & Elliott, Journal of Climate (1993)

On Detecting Long Term Changes in Atmospheric Moisture, Elliot, Climate Change (1995)

Tropospheric Water Vapor Climatology and Trends over North America, 1973-1993, Ross & Elliot, Journal of Climate (1996)

An assessment of satellite and radiosonde climatologies of upper-tropospheric water vapor, Soden & Lanzante, Journal of Climate (1996)

The NCEP/NCAR 40-year Reanalysis Project, Kalnay et al, Bulletin of the American Meteorological Society (1996)

Precise climate monitoring using complementary satellite data sets, Wentz & Schabel, Nature (2000)

Radiosonde-Based Northern Hemisphere Tropospheric Water Vapor Trends, Ross & Elliott, Journal of Climate (2001)

An analysis of satellite, radiosonde, and lidar observations of upper tropospheric water vapor from the Atmospheric Radiation Measurement Program, Soden et al, Journal of Geophysical Research (2005)

The Radiative Signature of Upper Tropospheric Moistening, Soden et al, Science (2005)

The Mass of the Atmosphere: A Constraint on Global Analyses, Trenberth & Smith, Journal of Climate (2005)

Trends and variability in column-integrated atmospheric water vapor, Trenberth et al, Climate Dynamics (2005)

Can climate trends be calculated from reanalysis data? Bengtsson et al, Journal of Geophysical Research (2005)

Ocean Water Vapor and Cloud Burden Trends Derived from the Topex Microwave Radiometer, Brown et al, Geoscience and Remote Sensing Symposium, 2007. IGARSS 2007. IEEE International (2007)

Radiosonde-based trends in precipitable water over the Northern Hemisphere: An update, Durre et al, Journal of Geophysical Research (2009)

Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data, Paltridge et al, Theoretical Applied Climatology (2009)

Trends in tropospheric humidity from reanalysis systems, Dessler & Davis, Journal of Geophysical Research (2010)

Notes

Note 1: The radiance measurement in this channel is a result of both the temperature of the atmosphere and the amount of water vapor. If temperature increases radiance increases. If water vapor increases it attenuates the radiance. See the slightly more detailed explanation in their paper.

Note 2: Here is a lengthy extract from Durre et al (2009), partly because it’s not available for free, and especially to give an idea of the issues arising from trying to extract long-term climatology from radiosonde data and, therefore, the careful approach that needs to be taken.

Emphasis added in each case:

From the IGRA+RE01 data, stations were chosen on the basis of two sets of requirements: (1) criteria that qualified them for use in the homogenization process and (2) temporal completeness requirements for the trend analysis.

In order to be a candidate for homogenization, a 0000 UTC or 1200 UTC time series needed to both contain at least two monthly means in each of the 12 calendar months during 1973–2006 and have at least five qualifying neighbors (see section 2.2). Once adjusted, each time series was tested against temporal completeness requirements analogous to those used by RE01; it was considered sufficiently complete for the calculation of a trend if it contained no more than 60 missing months, and no data gap was longer than 36 consecutive months.

Approximately 700 stations were processed through the pairwise homogenization algorithm (hereinafter abbreviated as PHA) at each of the nominal observation times. Even though the stations were located in many parts of the globe, only a handful of those that qualified for the computation of trends were located in the Southern Hemisphere.

Consequently, the trend analysis itself was restricted to the Northern Hemisphere as in that of RE01. The 305 Northern Hemisphere stations for 0000 UTC and 280 for 1200 UTC that fulfilled the completeness requirements covered mostly North America, Greenland, Europe, Russia, China, and Japan.

Compared to RE01, the number of stations for which trends were computed increased by more than 100, and coverage was enhanced over Greenland, Japan, and parts of interior Asia. The larger number of qualifying stations was the result of our ability to include stations that were sufficiently complete but contained significant inhomogeneities that required adjustment.

Considering that information on these types of changes tends to be incomplete for the historical record, the successful adjustment for inhomogeneities requires an objective technique that not only uses any available metadata, but also identifies undocumented change points [Gaffen et al., 2000; Durre et al., 2005]. The PHA of MW09 has these capabilities and thus was used here. Although originally developed for homogenizing time series of monthly mean surface temperature, this neighbor-based procedure was designed such that it can be applied to other variables, recognizing that its effectiveness depends on the relative magnitudes of change points compared to the spatial and temporal variability of the variable.

As can be seen from Table 1, change points were identified in 56% of the 0000 UTC and 52% of the 1200 UTC records, for a total of 509 change points in 317 time series.

Of these, 42% occurred around the time of a known metadata event, while the remaining 58% were considered to be ‘‘undocumented’’ relative to the IGRA station history information. On the basis of the visual inspection, it appears that the PHA has a 96% success rate at detecting obvious discontinuities. The algorithm can be effective even when a particular step change is present at the target and a number of its neighbors simultaneously.

In Japan, for instance, a significant drop in PW associated with a change between Meisei radiosondes around 1981 (Figure 1, top) was detected in 16 out of 17 cases, thanks to the inclusion of stations from adjacent countries in the pairwise comparisons. Furthermore, when an adjustment is made around the time of a documented change in radiosonde type, its sign tends to agree with that expected from the known biases of the relevant instruments. For example, the decrease in PW at Yap in 1995 (Figure 1, middle) is consistent with the artificial drying expected from the change from a VIZ B to a Vaisala RS80–56 radiosonde that is known to have occurred at this location and time [Elliott et al., 2002; Wang and Zhang, 2008].
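The temporal completeness criteria quoted above can be sketched as a simple filter. The 60-missing-month and 36-consecutive-month thresholds are from the paper; the function name and the data representation (a list with `None` for missing months) are invented for illustration:

```python
def sufficiently_complete(monthly_values, max_missing=60, max_gap=36):
    """The RE01-style completeness test quoted above: a series qualifies
    if it has at most `max_missing` missing months and no gap longer
    than `max_gap` consecutive months. Missing months are None."""
    missing = [v is None for v in monthly_values]
    if sum(missing) > max_missing:
        return False
    run = longest = 0
    for gap in missing:
        run = run + 1 if gap else 0
        longest = max(longest, run)
    return longest <= max_gap

# 1973-2006 is 408 months; the values here are invented placeholders
series = [10.0] * 408
ok = sufficiently_complete(series)       # complete record qualifies
series[100:140] = [None] * 40            # a single 40-month outage
bad = sufficiently_complete(series)      # gap exceeds 36 months
print(ok, bad)  # True False
```

Note that a station can have fewer than 60 missing months in total and still fail, as here, because they are bunched into one long gap.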


Water vapor trends are a big subject, so this article is not a comprehensive review – there are a few hundred papers on the subject. However, as most people outside climate science encounter the subject through blogs where only a few papers have been highlighted, perhaps it will help to provide some additional perspective.

Think of it as an article that opens up some aspects of the subject.

And I recommend reading a few of the papers in the References section below. Most are linked to a free copy of the paper.

Mostly what we will look at in this article is “total precipitable water vapor” (TPW), also known as “column integrated water vapor” (IWV).

What is this exactly? If we took a 1 m² area at the surface of the earth and then condensed the water vapor all the way up through the atmosphere, what height would it fill in a 1 m² tub?

The average depth (in this tub) from all around the world would be about 2.5 cm. Near the equator the amount would be 5 cm and near the poles it would be 0.5 cm.

Averaged globally, about half of this is between sea level and 850 mbar (around 1.5 km above sea level), and only about 5% is above 500 mbar (around 5-6 km above sea level).
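As a sketch of the definition, precipitable water can be computed from a profile of specific humidity by integrating over pressure, PW = (1/ρw·g) ∫ q dp. The function below is a minimal illustration; the humidity profile values are invented, not real soundings:

```python
import numpy as np

G = 9.81        # gravitational acceleration, m/s^2
RHO_W = 1000.0  # density of liquid water, kg/m^3

def precipitable_water_cm(pressure_hpa, specific_humidity):
    """Depth (cm) of liquid water condensed from the whole column:
    PW = (1 / (rho_w * g)) * integral of q dp, by the trapezoidal rule.

    pressure_hpa: levels from the surface upward, hPa
    specific_humidity: kg of water vapor per kg of moist air, per level
    """
    p = np.asarray(pressure_hpa, dtype=float) * 100.0  # hPa -> Pa
    q = np.asarray(specific_humidity, dtype=float)
    # trapezoidal integration; abs() because pressure decreases upward
    pw_m = abs(np.sum((q[:-1] + q[1:]) / 2.0 * np.diff(p))) / (RHO_W * G)
    return pw_m * 100.0  # m -> cm

# A rough mid-latitude profile; the humidity values are invented
levels = [1000, 925, 850, 700, 500, 300]           # hPa
q = [0.010, 0.008, 0.006, 0.003, 0.001, 0.0001]    # kg/kg
print(round(precipitable_water_cm(levels, q), 2))  # about 2.4 cm
```

This toy profile comes out close to the 2.5 cm global-mean figure above, with most of the contribution from the lowest levels.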

Where Does the Data Come From?

How do we find IWV (integrated water vapor)?

  • Radiosondes
  • Satellites

Frequent radiosonde launches started after the Second World War – prior to that, knowledge of water vapor profiles through the atmosphere was very limited.

Satellite studies of water vapor did not start until the late 1970’s.

Unfortunately for climate studies, radiosondes were designed for weather forecasting and so long term trends were not a factor in the overall system design.

Radiosondes were mostly launched over land and are predominantly from the northern hemisphere.

Given that water vapor response to climate is believed to be mostly from the ocean (the source of water vapor), not having significant measurements over the ocean until satellites in the late 1970’s is a major problem.

There is one more answer that could be added to the above list:

  • Reanalyses

As most people might suspect from the name, a reanalysis isn’t a data source. We will take a look at them a little later.

Quick List

Pros and Cons in brief:

Radiosonde Pluses:

  • Long history
  • Good vertical resolution
  • Can measure below clouds

Radiosonde Minuses:

  • Geographically concentrated over northern hemisphere land
  • Don’t measure low temperature or low humidity reliably
  • Changes to radiosonde sensors and radiosonde algorithms have subtly (or obviously) changed the measured values

Satellite Pluses:

  • Global coverage
  • Consistency of measurement globally and temporally
  • Changes in satellite sensors can be more easily checked with inter-comparison tests

Satellite Minuses:

  • Shorter history (since late 1970’s)
  • Vertical resolution of a few kms rather than hundreds of meters
  • Can’t measure under clouds (limit depends on whether infrared or microwave is used)
  • Requires knowledge of temperature profile to convert measured radiances to humidity

Radiosonde Measurements

Three names that come up a lot in papers on radiosonde measurements are Gaffen, Elliott and Ross. Usually pairing up, they have provided some excellent work on radiosonde data and on measurement issues with radiosondes.

From Radiosonde-based Northern Hemisphere Tropospheric Water Vapor Trends, Ross & Elliott (2001):

All the above trend studies considered the homogeneity of the time series in the selection of stations and the choice of data period. Homogeneity of a record can be affected by changes in instrumentation or observing practice. For example, since relative humidity typically decreases with height through the atmosphere, a fast responding humidity sensor would report a lower relative humidity than one with a greater lag in response.

Thus, the change to faster-response humidity sensors at many stations over the last 20 years could produce an apparent, though artificial, drying over time..

Then they have a section discussing various data homogeneity issues, which includes this graphic showing the challenge of identifying instrument changes which affect measurements:

From Ross & Elliott (2001)

Figure 1

They comment:

These examples show that the combination of historical and statistical information can identify some known instrument changes. However, we caution that the separation of artificial (e.g., instrument changes) and natural variability is inevitably somewhat subjective. For instance, the same instrument change at one station may not show as large an effect at another location or time of day..

Furthermore, the ability of the statistical method to detect abrupt changes depends on the variability of the record, so that the same effect of an instrument change could be obscured in a very noisy record. In this case, the same change detected at one station may not be detected at another station containing more variability.

Here are their results from 1973-1995 in geographical form. Triangles are positive trends, circles are negative trends. You also get to see the distribution of radiosondes, as each marker indicates one station:

Figure 2

And their summary of time-based trends for each region:

Figure 3

In their summary they make some interesting comments:

We found that a global estimate could not be made because reliable records from the Southern Hemisphere were too sparse; thus we confined our analysis to the Northern Hemisphere. Even there, the analysis was limited by continual changes in instrumentation, albeit improvements, so we were left with relatively few records of total precipitable water over the era of radiosonde observations that were usable.

Emphasis added.

Well, I recommend that readers take the time to read the whole paper for themselves to understand the quality of work that has been done – and learn more about the issues with the available data.

What is Special about 1973?

In their 1991 paper, Elliot and Gaffen showed that pre-1973 radiosonde measurements came with many more problems than post-1973 ones.

From Elliott & Gaffen (1991)

Figure 4 – Click for larger view

Note that the above is just for the US radiosonde network.

 Our findings suggest caution is appropriate when using the humidity archives or interpreting existing water vapor climatologies so that changes in climate not be confounded by non-climate changes.

And one extract to give a flavor of the whole paper:

The introduction of the new hygristor in 1980 necessitated a new algorithm.. However, the new algorithm also eliminated the possibility of reports of humidities greater than 100% but ensured that humidities of 100% cannot be reported in cold temperatures. The overall effect of these changes is difficult to ascertain. The new algorithm should have led to higher reported humidities compared to the older algorithm, but the elimination of reports of very high values at cold temperatures would act in the opposite sense.

And a nice example of another change in radiosonde measurement and reporting practice. The change below is just an artifact of low humidity values being reported after a certain date:

From Elliott & Gaffen (1991)

Figure 5

As the worst cases came before 1973, most researchers subsequently reporting on water vapor trends have tended to stick to post-1973 (or report on that separately and add caveats to pre-1973 trends).

But it is important to understand that issues with radiosonde measurements are not confined to pre-1973.

Here are a few more comments, this time from Elliott in his 1995 paper:

Most (but not all) of these changes represent improvements in sensors or other practices and so are to be welcomed. Nevertheless they make it difficult to separate climate changes from changes in the measurement programs..

Since then, there have been several generations of sensors and now sensors have much faster response times. Whatever the improvements for weather forecasting, they do leave the climatologist with problems. Because relative humidity generally decreases with height slower sensors would indicate a higher humidity at a given height than today’s versions (Elliott et al., 1994).

This effect would be particularly noticeable at low temperatures where the differences in lag are greatest. A study by Soden and Lanzante (submitted) finds a moist bias in upper troposphere radiosondes using slower responding humidity sensors relative to more rapid sensors, which supports this conjecture. Such improvements would lead the unwary to conclude that some part of the atmosphere had dried over the years.

And Gaffen, Elliott & Robock (1992) reported that in analyzing data from 50 stations from 1973-1990 they found instrument changes that created “inhomogeneities in the records of about half the stations”.
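The slow-sensor moist bias described in these extracts can be illustrated with a toy simulation. Everything here is invented for illustration – the linear humidity profile, the ascent rate and the two time constants – but the mechanism is the one Elliott describes: a lagging sensor carries moister near-surface readings up with it.

```python
import numpy as np

def sensor_profile(tau_s, ascent_rate=5.0, dz=10.0, top=10000.0):
    """First-order-lag humidity sensor on an ascending radiosonde.

    True RH falls linearly from 80% at the surface to 10% at `top`
    (an invented profile). A slower sensor (larger tau_s) lags the
    falling humidity and so reads moister aloft.
    """
    z = np.arange(0.0, top + dz, dz)
    rh_true = 80.0 - 70.0 * z / top
    dt = dz / ascent_rate                      # seconds per height step
    rh_meas = np.empty_like(rh_true)
    rh_meas[0] = rh_true[0]
    for i in range(1, len(z)):
        # first-order response: d(RH)/dt = (true - measured) / tau
        rh_meas[i] = rh_meas[i - 1] + dt * (rh_true[i] - rh_meas[i - 1]) / tau_s
    return z, rh_true, rh_meas

# Compare a slow (60 s time constant) and a fast (5 s) sensor at 5 km
z, rh_true, slow = sensor_profile(60.0)
_, _, fast = sensor_profile(5.0)
i = int(np.argmin(np.abs(z - 5000.0)))
print(round(slow[i] - rh_true[i], 1), round(fast[i] - rh_true[i], 1))
# the slow sensor reads about 2% RH too moist, the fast one about 0.1%
```

Switching station instruments from the slow to the fast sensor would therefore look like a drying of the upper air, even with no climate change at all.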

Satellite Demonstration

Different countries tend to use different radiosondes, have different algorithms and have different reporting practices in place.

The following comparison is of upper tropospheric water vapor. As an aside, upper tropospheric water vapor gets particular attention because it disproportionately affects top of atmosphere radiation – and therefore the radiation balance of the climate.

From Soden & Lanzante (1996), the data below, of the difference between satellite and radiosonde measurements, identifies a significant problem:

Soden & Lanzante (1996)

Figure 6

Since the same satellite is used in the comparison at all radiosonde locations, the satellite measurements serve as a fixed but not absolute reference. Thus we can infer that radiosonde values over the former Soviet Union tend to be systematically moister than the satellite measurements, that are in turn systematically moister than radiosonde values over western Europe.

However, it is not obvious from these data which of the three sets of measurements is correct in an absolute sense. That is, all three measurements could be in error with respect to the actual atmosphere..

..However, such a satellite [calibration] error would introduce a systematic bias at all locations and would not be regionally dependent like the bias shown in fig. 3 [=figure 6].

They go on to identify the radiosonde sensor used in different locations as the likely culprit. Yet, as various scientists comment in their papers, countries take on a new radiosonde type piecemeal, sometimes having a “competitive supply” situation where 70% comes from one vendor and 30% from another. Other times radiosonde sensors are changed across a region over a period of a few years. Inter-comparisons are done, but inadequately.

Soden and Lanzante also comment on spatial coverage:

Over data-sparse regions such as the tropics, the limited spatial coverage can introduce systematic errors of 10-20% in terms of the relative humidity. This problem is particularly severe in the eastern tropical Pacific, which is largely void of any radiosonde stations yet is especially critical for monitoring interannual variability (e.g. ENSO).

Before we move onto reanalyses, a summing up on radiosondes from the cautious William P. Elliot (1995):

Thus there is some observational evidence for increases in moisture content in the troposphere and perhaps in the stratosphere over the last 2 decades. Because of limitations of the data sources and the relatively short record length, further observations and careful treatment of existing data will be needed to confirm a global increase.

Reanalysis – or Filling in the Blanks

Weather forecasting and climate modelling are a form of finite element analysis (see Wikipedia). Essentially in FEA, some kind of grid is created – like this one for a pump impeller:

Stress analysis in an impeller

Figure 7

– and the relevant equations can be solved for each boundary or each element. It’s a numerical solution to a problem that can’t be solved analytically.

Weather forecasting and climate are about as tough as numerical problems come. The atmosphere is divided up into a grid, and in each grid cell we need a value for temperature, pressure, humidity and many other variables.

To calculate what the weather will be like over the next week, a value needs to be placed into each and every grid cell. And just one value. If a cell has no value the program can’t run, and there’s nowhere to put two values.

By this massive over-simplification, hopefully you will be able to appreciate what a reanalysis does. If no data is available, it has to be created. That’s not so terrible, so long as you realize it:

Figure 8

This is a simple example where the values represent temperatures in °C as we go up through the atmosphere. The first problem is that there is a missing value. It’s not so difficult to see that some formula can be created which will give a realistic value for this missing value. Perhaps the average of all the values surrounding it? Perhaps a similar calculation which includes values further away, but with less weighting.

With some more meteorological knowledge we might develop a more sophisticated algorithm based on the expected physics.

The second problem is that we have an anomaly. Clearly the -50°C is not correct. So there needs to be an algorithm which “fixes” it. Exactly what fix to use presents the problem.

If data becomes sparser then the problems get starker. How do we fill in and correct these values?

Figure 9

It’s not at all impossible. It is done with a model. Perhaps we know surface temperature and the typical temperature profile (“lapse rate”) through the atmosphere. So the model fills in the blanks with “typical climatology” or “basic physics”.
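A minimal sketch of the infilling idea, using simple neighbour-averaging and a crude outlier test. Real assimilation systems use model physics and far more sophisticated quality control; the grid values below are invented:

```python
import numpy as np

def fill_missing(grid):
    """Fill NaN cells with the mean of their available 4-neighbours.

    A toy version of what an assimilation system does far more
    carefully - and with model physics, not simple averaging.
    """
    filled = grid.copy()
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            if np.isnan(grid[i, j]):
                vals = [grid[i + di, j + dj]
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= i + di < rows and 0 <= j + dj < cols
                        and not np.isnan(grid[i + di, j + dj])]
                filled[i, j] = np.mean(vals)
    return filled

# Temperatures (degrees C): one missing value and one obvious anomaly,
# as in the toy grids above
t = np.array([[15.0, 14.0, 15.0],
              [14.0, np.nan, 13.0],
              [-50.0, 12.0, 11.0]])

# Crude quality control: flag anything far from the grid median as missing
t[np.abs(t - np.nanmedian(t)) > 20.0] = np.nan
filled = fill_missing(t)
print(filled)
```

The -50°C anomaly is flagged and then replaced by a plausible-looking number, just like the missing value. Plausible-looking – but still invented data.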

But it is invented data. Not real data.

Even real data is subject to being changed by the model..

NCEP/NCAR Reanalysis Project

There are a number of reanalysis projects. One is the NCEP/NCAR project (NCEP = National Centers for Environmental Prediction, NCAR = National Center for Atmospheric Research).

Kalnay (1996) explains:

The basic idea of the reanalysis project is to use a frozen state-of-the-art analysis/forecast system and perform data assimilation using past data, from 1957 to the present (reanalysis).

The NCEP/NCAR 40-year reanalysis project should be a research quality dataset suitable for many uses, including weather and short-term climate research.

An important consideration is explained:

An important question that has repeatedly arisen is how to handle the inevitable changes in the observing system, especially the availability of new satellite data, which will undoubtedly have an impact on the perceived climate of the reanalysis. Basically the choices are a) to select a subset of the observations that remains stable throughout the 40-year period of the reanalysis, or b) to use all the available data at a given time.

Choice a) would lead to a reanalysis with the most stable climate, and choice b) to an analysis that is as accurate as possible throughout the 40 years. With the guidance of the advisory panel, we have chosen b), that is, to make use of the most data available at any given time.

What are the categories of output data?

  • A = analysis variable is strongly influenced by observed data and hence it is in the most reliable class
  • B = although there are observational data that directly affect the value of the variable, the model also has a very strong influence on the value
  • C = there are no observations directly affecting the variable, so that it is derived solely from the model fields

Humidity is in category B.

Interested people can read Kalnay’s paper. Reanalysis products are very handy and widely used. Those with experience usually know what they are playing around with. Newcomers need to pay attention to the warning labels.

Comparing Reanalysis of Humidity

Bengtsson et al (2004) reviewed another reanalysis project, ERA-40. They provide a good example of how incorrect trends can be introduced (especially the 2nd paragraph):

A bias changing in time can thus introduce a fictitious trend without being eliminated by the data assimilation system. A fictitious trend can be generated by the introduction of new types of observations such as from satellites and by instrumental and processing changes in general. Fictitious trends could also result from increases in observational coverage since this will affect systematic model errors.

Assume, for example, that the assimilating model has a cold bias in the upper troposphere which is a common error in many general circulation models (GCM). As the number of observations increases the weight of the model in the analysis is reduced and the bias will correspondingly become smaller. This will then result in an artificial warming trend.

Bengtsson and his colleagues analyze tropospheric temperature, IWV and kinetic energy.

ERA-40 does have a positive trend in water vapor, something we will return to. The trend from ERA-40 for 1958-2001 is +0.41 mm/decade, and for 1979-2001 it is +0.36 mm/decade. They note that NCEP/NCAR has a negative trend of -0.24 mm/decade from 1958-2001 and -0.06 mm/decade for 1979-2001, but it isn’t a focus of their study.

They do an analysis which excludes satellite data and find a lower (but still positive) trend for IWV. They also question the magnitudes of tropospheric temperature trends and kinetic energy on similar grounds.

The point is essentially that the new data has created a bias in the reanalysis.
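Bengtsson’s mechanism can be demonstrated with a toy calculation: hold the true temperature fixed, give the assimilating model a constant cold bias, and shrink the model’s weight in the analysis as observations multiply over the years. The bias and weights below are invented numbers, chosen only to show the effect:

```python
import numpy as np

# True upper-tropospheric temperature held constant; the assimilating
# model runs 2 K too cold (invented bias, per Bengtsson's example)
true_temp = 250.0
model_temp = true_temp - 2.0

# As observation counts grow, the model's weight in the analysis shrinks,
# so the cold bias fades out of the analysis over the years
years = np.arange(1958, 2002)
model_weight = np.linspace(0.8, 0.2, len(years))
analysis = model_weight * model_temp + (1 - model_weight) * true_temp

# The analysis warms even though the true temperature never changed
trend_per_decade = np.polyfit(years, analysis, 1)[0] * 10
print(round(trend_per_decade, 2))  # about +0.28 K/decade, entirely artificial
```

Nothing in the real atmosphere changed; only the mix of model and observations did, and a trend appeared.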

Their conclusion, following various caveats about the scale of the study so far:

Returning finally to the question in the title of this study an affirmative answer cannot be given, as the indications are that in its present form the ERA40 analyses are not suitable for long-term climate trend calculations.

However, it is believed that there are ways forward as indicated in this study which in the longer term are likely to be successful. The study also stresses the difficulties in detecting long term trends in the atmosphere and major efforts along the lines indicated here are urgently needed.

So, onto Trends and variability in column-integrated atmospheric water vapor by Trenberth, Fasullo & Smith (2005). This paper is well worth reading in full.

For years before 1996, the Ross and Elliott radiosonde dataset is used for validation of European Centre for Medium-range Weather Forecasts (ECMWF) reanalyses ERA-40. Only the special sensor microwave imager (SSM/I) dataset from remote sensing systems (RSS) has credible means, variability and trends for the oceans, but it is available only for the post-1988 period.

Major problems are found in the means, variability and trends from 1988 to 2001 for both reanalyses from National Centers for Environmental Prediction (NCEP) and the ERA-40 reanalysis over the oceans, and for the NASA water vapor project (NVAP) dataset more generally. NCEP and ERA-40 values are reasonable over land where constrained by radiosondes.

Accordingly, users of these data should take great care in accepting results as real.

Here’s a comparison of Ross & Elliott (2001) [already shown above] with ERA-40:

From Trenberth et al (2005)

Figure 10 – Click for a larger image

Then they consider 1988-2001, the reason being that 1988 was when the SSMI (special sensor microwave imager) data over the oceans became available (more on the satellite data later).

From Trenberth et al (2005)

Figure 11

At this point we can see that ERA-40 agrees quite well with SSMI (over the oceans, the only place where SSMI operates), but NCEP/NCAR and another reanalysis product, NVAP, produce flat trends.

Now we will take a look at a very interesting paper: The Mass of the Atmosphere: A Constraint on Global Analyses, Trenberth & Smith (2005). Most readers will probably not be aware of this comparison and so it is of “extra” interest.

The total mass of the atmosphere is in fact a fundamental quantity for all atmospheric sciences. It varies in time because of changing constituents, the most notable of which is water vapor. The total mass is directly related to surface pressure while water vapor mixing ratio is measured independently.

Accordingly, there are two sources of information on the mean annual cycle of the total mass and the associated water vapor mass. One is from measurements of surface pressure over the globe; the other is from the measurements of water vapor in the atmosphere.

The main idea is that other atmospheric mass changes have a “noise level” effect on total mass, whereas water vapor has a significant effect. As measurement of surface pressure is a fundamental meteorological value, measured around the world continuously (or, at least, continually), we can calculate the total mass of the atmosphere with high accuracy. We can also – from measurements of IWV – calculate the total mass of water vapor “independently”.

Subtracting water vapor mass from total atmospheric measured mass should give us a constant – the “dry atmospheric pressure”. That’s the idea. So if we use the surface pressure and the water vapor values from various reanalysis products we might find out some interesting bits of data..
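The arithmetic of the constraint is simple: 1 mm of precipitable water is 1 kg/m² of vapor, which contributes g × 1 kg/m² ≈ 0.1 hPa to surface pressure. A sketch with invented global-mean values (not Trenberth & Smith’s actual numbers):

```python
G = 9.81  # gravitational acceleration, m/s^2

def dry_pressure_hpa(surface_pressure_hpa, iwv_mm):
    """Surface pressure minus the weight of the water vapor column.

    1 mm of precipitable water is 1 kg/m^2 of vapor, contributing
    g * 1 kg/m^2 = 9.81 Pa (about 0.1 hPa) to surface pressure.
    """
    return surface_pressure_hpa - G * iwv_mm / 100.0  # Pa -> hPa

# Invented global-mean values for two months of an imaginary reanalysis:
# total mass (surface pressure) rises with NH-summer moistening
jan = dry_pressure_hpa(985.00, 24.0)
jul = dry_pressure_hpa(985.24, 26.5)
print(round(jan, 2), round(jul, 2))
# a self-consistent reanalysis leaves the dry pressure nearly constant
```

If a reanalysis’s surface pressure and its water vapor don’t cancel like this, one of the two (or both) is wrong – which is exactly the test being applied to NCEP/NCAR and ERA-40.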

From Trenberth & Smith (2005)

Figure 12

The top graph clearly shows the annual cycle in total mass. The bottom graph shows the quantity that should be constant for each reanalysis – the dry air mass, obtained by removing the water vapor mass using that reanalysis’s own water vapor values.

Pre-1973 values are erratic in both NCEP and ERA-40. Post-1979, NCEP shows much more variability than ERA-40, though neither is perfect.

The focus of the paper is the mass of the atmosphere, but it is still recommended reading.

Here is the geographical distribution of IWV trends and the differences between ERA-40 and the other datasets (note that only the first panel shows trends; the subsequent panels show differences between datasets):

Trenberth et al (2005)

Figure 13 – Click for a larger image

The authors comment:

The NCEP trends are more negative than others in most places, although the patterns appear related. Closer examination reveals that the main discrepancies are over the oceans. There is quite good agreement between ERA-40 and NCEP over most land areas except Africa, i.e. in areas where values are controlled by radiosondes.

There’s a lot more data analysis in the paper. Here are the 1988–2001 trends from the various sources, including ERA-40 and SSMI:

From Trenberth et al (2005)

Figure 14 – Click for a larger view

  • SSMI has a trend of +0.37 mm/decade.
  • ERA-40 has a trend of +0.70 mm/decade over the oceans.
  • NCEP has a trend of −0.1 mm/decade over the oceans.
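As an aside on how numbers like these are produced: a trend of “+0.37 mm/decade” is just the slope of a least-squares fit to the monthly IWV anomaly series. A minimal sketch with synthetic data (the noise level and fitting details here are assumptions for illustration, not taken from the paper):

```python
# Sketch: recovering a trend in mm/decade from a monthly IWV anomaly series.
# Synthetic data with a known trend; a real analysis uses area-weighted anomalies.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(14 * 12)        # 1988-2001: 14 years of monthly values
t_decades = months / 120.0         # time axis in decades (120 months/decade)
true_trend = 0.37                  # mm/decade, the SSMI value quoted above
iwv_anom = true_trend * t_decades + rng.normal(0.0, 0.3, months.size)

# Ordinary least-squares fit; polyfit returns [slope, intercept] for degree 1
slope, intercept = np.polyfit(t_decades, iwv_anom, 1)
print(f"fitted trend: {slope:.2f} mm/decade")
```

With only 14 years of data and realistic month-to-month noise, the fitted slope scatters noticeably around the true value – one reason why short-record trends from different datasets (SSMI, ERA-40, NCEP) can disagree as much as they do above.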

To be Continued..

As this article is already pretty long, it will be continued in Part Two, which will include Paltridge et al (2009), Dessler & Davis (2010) and some satellite measurements and papers.

Update – Part Two is published

References

On the Utility of Radiosonde Humidity Archives for Climate Studies, Elliott & Gaffen, Bulletin of the American Meteorological Society (1991)

Relationships between Tropospheric Water Vapor and Surface Temperature as Observed by Radiosondes, Gaffen, Elliott & Robock, Geophysical Research Letters (1992)

Column Water Vapor Content in Clear and Cloudy Skies, Gaffen & Elliott, Journal of Climate (1993)

On Detecting Long Term Changes in Atmospheric Moisture, Elliott, Climatic Change (1995)

Tropospheric Water Vapor Climatology and Trends over North America, 1973-1993, Ross & Elliot, Journal of Climate (1996)

An assessment of satellite and radiosonde climatologies of upper-tropospheric water vapor, Soden & Lanzante, Journal of Climate (1996)

The NCEP/NCAR 40-year Reanalysis Project, Kalnay et al, Bulletin of the American Meteorological Society (1996)

Radiosonde-Based Northern Hemisphere Tropospheric Water Vapor Trends, Ross & Elliott, Journal of Climate (2001)

An analysis of satellite, radiosonde, and lidar observations of upper tropospheric water vapor from the Atmospheric Radiation Measurement Program, Soden et al, Journal of Geophysical Research (2005)

The Radiative Signature of Upper Tropospheric Moistening, Soden et al, Science (2005)

The Mass of the Atmosphere: A Constraint on Global Analyses, Trenberth & Smith, Journal of Climate (2005)

Trends and variability in column-integrated atmospheric water vapor, Trenberth et al, Climate Dynamics (2005)

Can climate trends be calculated from reanalysis data? Bengtsson et al, Journal of Geophysical Research (2005)

Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data, Paltridge et al, Theoretical and Applied Climatology (2009)

Trends in tropospheric humidity from reanalysis systems, Dessler & Davis, Journal of Geophysical Research (2010)
