
Archive for the ‘Climate History’ Category

In one stereotypical view of climate, the climate state has some variability over a 30 year period – we could call this multi-decadal variability “noise” – but it is otherwise fixed by the external conditions, the “external forcings”.

This doesn’t really match up with climate history, but climate models have mostly struggled to do much more than reproduce the stereotyped view. See Natural Variability and Chaos – Four – The Thirty Year Myth for a different perspective on (only) the timescale.

In this stereotypical view, the only reason why "long term" statistics (= 30-year statistics) can change is because of "external forcing". Otherwise, where does the "extra energy" come from? (We will examine this particular idea in a future article.)

One of our commenters recently highlighted a paper from Drijfhout et al (2013) – Spontaneous abrupt climate change due to an atmospheric blocking–sea-ice–ocean feedback in an unforced climate model simulation.

Here is how the paper introduces the subject:

Abrupt climate change is abundant in geological records, but climate models rarely have been able to simulate such events in response to realistic forcing.

Here we report on a spontaneous abrupt cooling event, lasting for more than a century, with a temperature anomaly similar to that of the Little Ice Age. The event was simulated in the preindustrial control run of a high-resolution climate model, without imposing external perturbations.

This is interesting and instructive on many levels, so let's take a look. In later articles we will look at the evidence in climate history for "abrupt" events; for now, note that Dansgaard–Oeschger (DO) events are the originally identified form of abrupt change.

The distinction between "abrupt" change and change that is not "abrupt" is an artificial one; it is more a reflection of the historical order in which we discovered "slow" and "abrupt" change.

From the Significance inset box in the paper:

There is a long-standing debate about whether climate models are able to simulate large, abrupt events that characterized past climates. Here, we document a large, spontaneously occurring cold event in a preindustrial control run of a new climate model.

The event is comparable to the Little Ice Age both in amplitude and duration; it is abrupt in its onset and termination, and it is characterized by a long period in which the atmospheric circulation over the North Atlantic is locked into a state with enhanced blocking.

To simulate this type of abrupt climate change, climate models should possess sufficient resolution to correctly represent atmospheric blocking and a sufficiently sensitive sea-ice model.

Here is their graph of the time-series of temperature (left), and the geographical anomaly (right) expressed as the change during the 100-year event against the background of years 200-400:

From Drijfhout et al 2013

Figure 1 – Click to expand

In their summary they state:

The lesson learned from this study is that the climate system is capable of generating large, abrupt climate excursions without externally imposed perturbations. Also, because such episodic events occur spontaneously, they may have limited predictability.

Before we look at the “causes” – the climate mechanisms – of this event, let’s briefly look at the climate model.

Their coupled GCM has an atmospheric resolution of just over 1º x 1º with 62 vertical levels, and the ocean has a resolution of 1º in the extra-tropics, increasing to 0.3º near the equator. The ocean has 42 vertical levels, with the top 200m of the ocean represented by 20 equally spaced 10m levels.

The GHGs and aerosols are set at pre-industrial 1860 values and don’t change over the 1,125 year simulation. There are no “flux adjustments” (no need for artificial momentum and energy additions to keep the model stable as with many older models).

See note 1 for a fuller description of the model, and the paper in the references for the complete details.

The simulated event itself:

After 450 y, an abrupt cooling event occurred, with a clear signal in the Atlantic multidecadal oscillation (AMO). In the instrumental record, the amplitude of the AMO since the 1850s is about 0.4 °C, its SD 0.2 °C. During the event simulated here, the AMO index dropped by 0.8 °C for about a century..

How did this abrupt change take place?

The main mechanism was a change in the Atlantic Meridional Overturning Circulation (AMOC), also known as the thermohaline circulation. The AMOC provides a nice example of the sensitivity of climate. It brings warmer water from the tropics into higher latitudes. A necessary driver of this process is the intensity of deep convection at high latitudes (sinking of dense water), which in turn depends on two factors – temperature and salinity. More accurately, it depends on the competing anomalies of temperature and salinity.

To shut down deep convection, the density of the surface water must decrease. In the temperature range of 7–12 °C, typical for the Labrador Sea, the SST anomaly in degrees Celsius has to be roughly 5 times the sea surface salinity (SSS) anomaly in practical salinity units for density compensation to occur. The SST anomaly was only about twice that of the SSS anomaly; the density anomaly was therefore mostly determined by the salinity anomaly.
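To see where that factor of roughly 5 comes from, here is a minimal sketch (mine, not from the paper) using a linearised equation of state. The thermal expansion and haline contraction coefficients are representative values I have assumed for cold subpolar seawater, and the anomaly values at the end are purely illustrative.

```python
# Linearised seawater density: d_rho ~= rho0 * (-alpha*dT + beta*dS)
# Coefficients below are approximate values assumed for ~7-12 degC subpolar water.
alpha = 1.7e-4   # /degC, thermal expansion coefficient (assumed, approximate)
beta = 7.8e-4    # /psu, haline contraction coefficient (assumed, approximate)
rho0 = 1027.0    # kg/m3, reference density

def density_anomaly(dT, dS):
    """Density change for a temperature anomaly dT and salinity anomaly dS."""
    return rho0 * (-alpha * dT + beta * dS)

# Ratio of temperature to salinity anomaly needed for exact density
# compensation - roughly 5 degC per psu, as quoted in the paper:
print(beta / alpha)                        # ~4.6

# Illustrative (made-up) anomalies with the SST anomaly ~2x the SSS anomaly:
print(density_anomaly(dT=-0.4, dS=-0.2))   # negative: salinity wins and the
                                           # surface water gets lighter
```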

In the figure below we see (left) the AMOC time series at two locations with the reduction during the cold century, and (right) the anomaly by depth and latitude for the “cold century” vs the climatology for years 200-400:

From Drijfhout et al 2013

Figure 2 – Click to expand

What caused the lower salinities? It was more sea ice, melting in the right location. The excess sea ice was caused by positive feedback between atmospheric and ocean conditions “locking in” a particular pattern. The paper has a detailed explanation with graphics of the pressure anomalies which is hard to reduce to anything more succinct, apart from their abstract:

Initial cooling started with a period of enhanced atmospheric blocking over the eastern subpolar gyre.

In response, a southward progression of the sea-ice margin occurred, and the sea-level pressure anomaly was locked to the sea-ice margin through thermal forcing. The cold-core high steered more cold air to the area, reinforcing the sea-ice concentration anomaly east of Greenland.

The sea-ice surplus was carried southward by ocean currents around the tip of Greenland. South of 70°N, sea ice already started melting and the associated freshwater anomaly was carried to the Labrador Sea, shutting off deep convection. There, surface waters were exposed longer to atmospheric cooling and sea surface temperature dropped, causing an even larger thermally forced high above the Labrador Sea.

Conclusion

It is fascinating to see a climate model reproducing an example of abrupt climate change. There are a few contexts to suggest for this result.

1. From the context of timescale we could ask how often these events take place, or what pre-conditions are necessary. The only way to gather meaningful statistics is with large ensembles of long runs – perhaps thousands of "perturbed physics" runs, each 100,000 years long. This is far beyond current processing power. I picked some arbitrary numbers – until the statistics start to converge and match what we see from paleoclimatology studies, we don't know whether we have covered the "terrain".

Or perhaps only five runs of 1,000 years are needed to completely solve the problem (I’m kidding).

2. From the context of resolution – as we achieve higher resolution in models we may find new phenomena emerging in climate models that did not appear before. For example, in ice age studies, coarser climate models could not achieve “perennial snow cover” at high latitudes (as a pre-condition for ice age inception), but higher resolution climate models have achieved this first step. (See Ghosts of Climates Past – Part Seven – GCM I & Part Eight – GCM II).

As a comparison on resolution, the 2,000 year El Nino study we saw in Part Six of this series had an atmospheric resolution of 2.5º x 2.0º with 24 levels.

However, we might also find that as the resolution progressively increases (with the inevitable march of processing power) phenomena that appear at one resolution disappear at yet higher resolutions. This is an opinion, but if you ask people who have experience with computational fluid dynamics I expect they will say this would not be surprising.

3. Other models might reach similar or higher resolution and never get this kind of result, demonstrating a flaw in the EC-Earth model that allowed this "Little Ice Age" result to occur. Or the reverse.

As the authors say:

As a result, only coupled climate models that are capable of realistically simulating atmospheric blocking in relation to sea-ice variations feature the enhanced sensitivity to internal fluctuations that may temporarily drive the climate system to a state that is far beyond its standard range of natural variability.

References

Spontaneous abrupt climate change due to an atmospheric blocking–sea-ice–ocean feedback in an unforced climate model simulation, Sybren Drijfhout, Emily Gleeson, Henk A. Dijkstra & Valerie Livina, PNAS (2013) – free paper

EC-Earth V2.2: description and validation of a new seamless earth system prediction model, W. Hazeleger et al, Climate dynamics (2012) – free paper

Notes

Note 1: From the Supporting Information from their paper:

Climate Model and Numerical Simulation. The climate model used in this study is version 2.2 of the EC-Earth earth system model [see references] whose atmospheric component is based on cycle 31r1 of the European Centre for Medium-range Weather Forecasts (ECMWF) Integrated Forecasting System.

The atmospheric component runs at T159 horizontal spectral resolution (roughly 1.125°) and has 62 vertical levels. In the vertical a terrain-following mixed σ/pressure coordinate is used.

The Nucleus for European Modeling of the Ocean (NEMO), version V2, running in a tripolar configuration with a horizontal resolution of nominally 1° and equatorial refinement to 0.3° (2) is used for the ocean component of EC-Earth.

Vertical mixing is achieved by a turbulent kinetic energy scheme. The vertical z coordinate features a partial step implementation, and a bottom boundary scheme mixes dense water down bottom slopes. Tracer advection is accomplished by a positive definite scheme, which does not produce spurious negative values.

The model does not resolve eddies, but eddy-induced tracer advection is parameterized (3). The ocean is divided into 42 vertical levels, spaced by ∼10 m in the upper 200 m, and thereafter increasing with depth. NEMO incorporates the Louvain-la-Neuve sea-ice model LIM2 (4), which uses the same grid as the ocean model. LIM2 treats sea ice as a 2D viscous-plastic continuum that transmits stresses between the ocean and atmosphere. Thermodynamically it consists of a snow and an ice layer.

Heat storage, heat conduction, snow–ice transformation, nonuniform snow and ice distributions, and albedo are accounted for by subgrid-scale parameterizations.

The ocean, ice, land, and atmosphere are coupled through the Ocean, Atmosphere, Sea Ice, Soil 3 coupler (5). No flux adjustments are applied to the model, resulting in a physical consistency between surface fluxes and meridional transports.

The present preindustrial (PI) run was conducted by Met Éireann and comprised 1,125 y. The ocean was initialized from the World Ocean Atlas 2001 climatology (6). The atmosphere used the 40-year ECMWF Re-Analysis of January 1, 1979, as the initial state with permanent PI (1850) greenhouse gas (280 ppm) and aerosol concentrations.


In Part Three – Attribution & Fingerprints we looked at an early paper in this field, from 1996. I was led there by following back through many papers referenced from AR5 Chapter 10. The lead author of that paper, Gabriele Hegerl, has made a significant contribution to the 3rd, 4th and 5th IPCC reports on attribution.

We saw in Part Three that this particular paper ascribed a probability:

We find that the latest observed 30-year trend pattern of near-surface temperature change can be distinguished from all estimates of natural climate variability with an estimated risk of less than 2.5% if the optimal fingerprint is applied.

That paper did note that the greatest uncertainty was in understanding the magnitude of natural variability. This is an essential element of attribution.

It wasn't explicitly stated whether the 97.5% confidence was conditional on the premise that natural variability was accurately understood in 1996. I believe that this was the premise. I don't know what confidence would have been ascribed to the attribution study if uncertainty over natural variability had been included.

IPCC AR5

In this article we will look at the IPCC 5th report, AR5, and see how this field has progressed, specifically in regard to the understanding of natural variability. Chapter 10 covers Detection and Attribution of Climate Change.

From p.881 (the page numbers are from the start of the whole report, chapter 10 has just over 60 pages plus references):

Since the AR4, detection and attribution studies have been carried out using new model simulations with more realistic forcings, and new observational data sets with improved representation of uncertainty (Christidis et al., 2010; Jones et al., 2011, 2013; Gillett et al., 2012, 2013; Stott and Jones, 2012; Knutson et al., 2013; Ribes and Terray, 2013).

Let’s have a look at these papers (see note 1 on CMIP3 & CMIP5).

I had trouble understanding AR5 Chapter 10 because there was no explicit discussion of natural variability. The papers referenced (usually) have their own section on natural variability, but chapter 10 doesn’t actually cover it.

I emailed Geert Jan van Oldenborgh to ask for help. He is the author of one paper we will briefly look at here – his paper was very interesting and he had a video segment explaining his paper. He suggested the problem was more about communication because natural variability was covered in chapter 9 on models. He had written a section in chapter 11 that he pointed me towards, so this article became something that tried to grasp the essence of three chapters (9 – 11), over 200 pages of reports and several pallet loads of papers.

So I’m not sure I can do the synthesis justice, but what I will endeavor to do in this article is demonstrate the minimal focus (in IPCC AR5) on how well models represent natural variability.

That subject deserves a lot more attention, so this article will be less about what natural variability is, and more about how little focus it gets in AR5. I only arrived here because I was determined to understand “fingerprints” and especially the rationale behind the certainties ascribed.

Subsequent articles will continue the discussion on natural variability.

Knutson et al 2013

The models [CMIP5] are found to provide plausible representations of internal climate variability, although there is room for improvement..

..The modeled internal climate variability from long control runs is used to determine whether observed and simulated trends are consistent or inconsistent. In other words, we assess whether observed and simulated forced trends are more extreme than those that might be expected from random sampling of internal climate variability.

Later

The model control runs exhibit long-term drifts. The magnitudes of these drifts tend to be larger in the CMIP3 control runs than in the CMIP5 control runs, although there are exceptions. We assume that these drifts are due to the models not being in equilibrium with the control run forcing, and we remove the drifts by a linear trend analysis (depicted by the orange straight lines in Fig. 1). In some CMIP3 cases, the drift initially proceeds at one rate, but then the trend becomes smaller for the remainder of the run. We approximate the drift in these cases by two separate linear trend segments, which are identified in the figure by the short vertical orange line segments. These long-term drift trends are removed to produce the drift corrected series.

[Emphasis added].
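The drift correction described above amounts to a simple least-squares detrend of the control-run diagnostic. Here is a minimal sketch of the idea (my own illustration, not code from Knutson et al), with the key assumption spelled out in the comment:

```python
import numpy as np

def remove_linear_drift(t, series):
    """Fit a straight line to a control-run diagnostic and subtract it,
    assuming the linear trend is model drift rather than internal
    variability (the assumption questioned in the next quote)."""
    slope, intercept = np.polyfit(t, series, deg=1)
    return series - (slope * t + intercept)

# Illustrative use on a synthetic 500-year annual-mean series:
t = np.arange(500.0)
rng = np.random.default_rng(0)
control = 0.001 * t + rng.normal(0.0, 0.1, t.size)   # drift + "variability"
detrended = remove_linear_drift(t, control)
```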

Another paper suggests this assumption might not be correct. Here is Jones, Stott and Christidis (2013) – "piControl" refers to the pre-industrial control simulations used to estimate internal (natural) variability:

Often a model simulation with no changes in external forcing (piControl) will have a drift in the climate diagnostics due to various flux imbalances in the model [Gupta et al., 2012]. Some studies attempt to account for possible model climate drifts, for instance Figure 9.5 in Hegerl et al. [2007] did not include transient simulations of the 20th century if the long-term trend of the piControl was greater in magnitude than 0.2 K/century (Appendix 9.C in Hegerl et al. [2007]).

Another technique is to remove the trend, from the transient simulations, deduced from a parallel section of piControl [e.g., Knutson et al., 2006]. However whether one should always remove the piControl trend, and how to do it in practice, is not a trivial issue [Taylor et al., 2012; Gupta et al., 2012]..

..We choose not to remove the trend from the piControl from parallel simulations of the same model in this study due to the impact it would have on long-term variability, i.e., the possibility that part of the trend in the piControl may be long-term internal variability that may or may not happen in a parallel experiment when additional forcing has been applied.

Here are further comments from Knutson et al 2013:

Five of the 24 CMIP3 models, identified by “(-)” in Fig. 1, were not used, or practically not used, beyond Fig. 1 in our analysis. For instance, the IAP_fgoals1.0.g model has a strong discontinuity near year 200 of the control run. We judge this as likely an artifact due to some problem with the model simulation, and we therefore chose to exclude this model from further analysis

From Knutson et al 2013

Figure 1

Perhaps this is correct. Or perhaps the jump in simulated temperature is the climate model capturing natural climate variability.

The authors do comment:

As noted by Wittenberg (2009) and Vecchi and Wittenberg (2010), long-running control runs suggest that internally generated SST variability, at least in the ENSO region, can vary substantially between different 100-yr periods (approximately the length of record used here for observations), which again emphasizes the caution that must be placed on comparisons of modeled vs. observed internal variability based on records of relatively limited duration.

The first paper referenced, Wittenberg 2009, was the paper we looked at in Part Six – El Nino.

So is the "caution" that comes from that study included in the probability assigned to our models' ability to simulate natural variability?

In reality, questions about internal variability are not really discussed. Trends are removed; models with discontinuities are treated as artifacts. What is left? This paper essentially takes the modeling output from the CMIP3 and CMIP5 archives (with and without GHG forcing) as a given and applies some tests.

Ribes & Terray 2013

This was a “Part II” paper and they said:

We use the same estimates of internal variability as in Ribes et al. 2013 [the “Part I”].

These are based on intra-ensemble variability from the above CMIP5 experiments as well as pre-industrial simulations from both the CMIP3 and CMIP5 archives, leading to a much larger sample than previously used (see Ribes et al. 2013 for details about ensembles). We then implicitly assume that the multi-model internal variability estimate is reliable.

[Emphasis added]. The Part I paper said:

An estimate of internal climate variability is required in detection and attribution analysis, for both optimal estimation of the scaling factors and uncertainty analysis.

Estimates of internal variability are usually based on climate simulations, which may be control simulations (i.e. in the present case, simulations with no variations in external forcings), or ensembles of simulations with the same prescribed external forcings.

In the latter case, m – 1 independent realisations of pure internal variability may be obtained by subtracting the ensemble mean from each member (assuming again additivity of the responses) and rescaling the result by a factor √(m/(m-1)) , where m denotes the number of members in the ensemble.

Note that estimation of internal variability usually means estimation of the covariance matrix of a spatio-temporal climate-vector, the dimension of this matrix potentially being high. We choose to use a multi-model estimate of internal climate variability, derived from a large ensemble of climate models and simulations. This multi-model estimate is subject to lower sampling variability and better represents the effects of model uncertainty on the estimate of internal variability than individual model estimates. We then simultaneously consider control simulations from the CMIP3 and CMIP5 archives, and ensembles of historical simulations (including simulations with individual sets of forcings) from the CMIP5 archive.

All control simulations longer than 220 years (i.e. twice the length of our study period) and all ensembles (at least 2 members) are used. The overall drift of control simulations is removed by subtracting a linear trend over the full period.. We then implicitly assume that this multi- model internal variability estimate is reliable.

[Emphasis added]. So there are two approaches to evaluating internal variability – one uses GCM runs with no GHG forcing; the other uses the variation between different runs of the same model (with GHG forcing) to estimate natural variability. Drift is removed as "an error".
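As a concrete sketch of the second approach (my own illustration, not the authors' code): with m runs sharing identical forcing, the ensemble mean estimates the forced response, each member minus that mean is a sample of internal variability, and the √(m/(m−1)) factor restores the variance removed along with the mean.

```python
import numpy as np

def internal_variability_samples(ensemble):
    """ensemble: array of shape (m, n_time), m runs with identical forcing.
    Returns m estimates of pure internal variability, rescaled so their
    variance is not biased low by subtracting the ensemble mean."""
    m = ensemble.shape[0]
    anomalies = ensemble - ensemble.mean(axis=0, keepdims=True)
    return np.sqrt(m / (m - 1)) * anomalies

# Illustrative use with a synthetic 5-member ensemble of 160-year series:
rng = np.random.default_rng(1)
forced = np.linspace(0.0, 0.8, 160)                   # common forced signal
runs = forced + rng.normal(0.0, 0.15, size=(5, 160))  # plus internal noise
noise_estimates = internal_variability_samples(runs)
```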

Chapter 10 on Spatial Trends

The IPCC report also reviews the spatial simulations compared with spatial observations, p. 880:

Figure 10.2a shows the pattern of annual mean surface temperature trends observed over the period 1901–2010, based on Hadley Centre/Climatic Research Unit gridded surface temperature data set 4 (HadCRUT4). Warming has been observed at almost all locations with sufficient observations available since 1901.

Rates of warming are generally higher over land areas compared to oceans, as is also apparent over the 1951–2010 period (Figure 10.2c), which simulations indicate is due mainly to differences in local feedbacks and a net anomalous heat transport from oceans to land under GHG forcing, rather than differences in thermal inertia (e.g., Boer, 2011). Figure 10.2e demonstrates that a similar pattern of warming is simulated in the CMIP5 simulations with natural and anthropogenic forcing over the 1901–2010 period. Over most regions, observed trends fall between the 5th and 95th percentiles of simulated trends, and van Oldenborgh et al. (2013) find that over the 1950–2011 period the pattern of observed grid cell trends agrees with CMIP5 simulated trends to within a combination of model spread and internal variability..

van Oldenborgh et al (2013)

Let’s take a look at van Oldenborgh et al (2013).

There’s a nice video of (I assume) the lead author talking about the paper and comparing the probabilistic approach used in weather forecasts with that of climate models (see Ensemble Forecasting). I recommend the video for a good introduction to the topic of ensemble forecasting.

With weather forecasting the probability comes from running ensembles of weather models and seeing, for example, how many simulations predict rain vs how many do not. The proportion is the probability of rain. With weather forecasting we can continually review how well the probabilities given by ensembles match the reality. Over time we will build up a set of statistics of “probability of rain” and compare with the frequency of actual rainfall. It’s pretty easy to see if the models are over-confident or under-confident.

Here is what the authors say about the problem and how they approached it:

The ensemble is considered to be an estimate of the probability density function (PDF) of a climate forecast. This is the method used in weather and seasonal forecasting (Palmer et al 2008). Just like in these fields it is vital to verify that the resulting forecasts are reliable in the definition that the forecast probability should be equal to the observed probability (Joliffe and Stephenson 2011).

If outcomes in the tail of the PDF occur more (less) frequently than forecast the system is overconfident (underconfident): the ensemble spread is not large enough (too large).

In contrast to weather and seasonal forecasts, there is no set of hindcasts to ascertain the reliability of past climate trends per region. We therefore perform the verification study spatially, comparing the forecast and observed trends over the Earth. Climate change is now so strong that the effects can be observed locally in many regions of the world, making a verification study on the trends feasible. Spatial reliability does not imply temporal reliability, but unreliability does imply that at least in some areas the forecasts are unreliable in time as well. In the remainder of this letter we use the word ‘reliability’ to indicate spatial reliability.

[Emphasis added]. The paper first shows the result for one location, the Netherlands, with the spread of model results vs the actual result from 1950-2011:

From van Oldenborgh et al 2013

Figure 2

We can see that the models are overall mostly below the observation. But this is one data point. So if we compared all of the datapoints – and this is on a grid of 2.5º – how do the model spreads compare with the results? Are observations above 95% of the model results only 5% of the time? Or more than 5% of the time? And are observations below 5% of the model results only 5% of the time?
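In outline, the check looks something like the sketch below (my own hedged illustration of the idea, not the authors' code): for each grid cell, place the observed trend within the ensemble of modeled trends and count how often it lands in the lower or upper 5% tail.

```python
import numpy as np

def tail_frequencies(obs_trends, model_trends, tail=0.05):
    """obs_trends: (n_cells,) observed trend per grid cell.
    model_trends: (n_cells, n_members) trends from the model ensemble.
    Returns the fraction of cells where the observation lies below the
    5th / above the 95th percentile of the ensemble. For a spatially
    reliable ensemble each fraction should be close to `tail`."""
    lo = np.quantile(model_trends, tail, axis=1)
    hi = np.quantile(model_trends, 1.0 - tail, axis=1)
    return np.mean(obs_trends < lo), np.mean(obs_trends > hi)

# Illustrative use with synthetic (made-up) trends - an "overconfident" case
# where the observations are more spread out than the ensemble:
rng = np.random.default_rng(2)
model = rng.normal(0.10, 0.05, size=(1000, 40))   # 1000 cells, 40 members
obs = rng.normal(0.12, 0.07, size=1000)
print(tail_frequencies(obs, model))               # both fractions exceed 0.05
```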

We can see that the frequency of observations in the bottom 5% of model results is about 13% and the frequency of observations in the top 5% of model results is about 20%. Therefore the models are “overconfident” in spatial representation of the last 60 year trends:

From van Oldenborgh et al 2013

Figure 3

We investigated the reliability of trends in the CMIP5 multi-model ensemble prepared for the IPCC AR5. In agreement with earlier studies using the older CMIP3 ensemble, the temperature trends are found to be locally reliable. However, this is due to the differing global mean climate response rather than a correct representation of the spatial variability of the climate change signal up to now: when normalized by the global mean temperature the ensemble is overconfident. This agrees with results of Sakaguchi et al (2012) that the spatial variability in the pattern of warming is too small. The precipitation trends are also overconfident. There are large areas where trends in both observational dataset are (almost) outside the CMIP5 ensemble, leading us to conclude that this is unlikely due to faulty observations.

It’s probably important to note that the author comments in the video “on the larger scale the models are not doing so badly”.

It’s an interesting paper. I’m not clear whether the brief note in AR5 reflects the paper’s conclusions.

Jones et al 2013

It was reassuring to finally find a statement that confirmed what seemed obvious from the “omissions”:

A basic assumption of the optimal detection analysis is that the estimate of internal variability used is comparable with the real world’s internal variability.

Surely I can’t be the only one reading Chapter 10 and trying to understand the assumptions built into the “with 95% confidence” result. If Chapter 10 is only aimed at climate scientists who work in the field of attribution and detection it is probably fine not to actually mention this minor detail in the tight constraints of only 60 pages.

But if Chapter 10 is aimed at a wider audience it seems a little remiss not to bring it up in the chapter itself.

I probably missed the stated caveat in chapter 10’s executive summary or introduction.

The authors continue:

As the observations are influenced by external forcing, and we do not have a non-externally forced alternative reality to use to test this assumption, an alternative common method is to compare the power spectral density (PSD) of the observations with the model simulations that include external forcings.

We have already seen that overall the CMIP5 and CMIP3 model variability compares favorably across different periodicities with HadCRUT4-observed variability (Figure 5). Figure S11 (in the supporting information) includes the PSDs for each of the eight models (BCC-CSM1-1, CNRM-CM5, CSIRO- Mk3-6-0, CanESM2, GISS-E2-H, GISS-E2-R, HadGEM2- ES and NorESM1-M) that can be examined in the detection analysis.

Variability for the historical experiment in most of the models compares favorably with HadCRUT4 over the range of periodicities, except for HadGEM2-ES whose very long period variability is lower due to the lower overall trend than observed and for CanESM2 and bcc-cm1-1 whose decadal and higher period variability are larger than observed.

While not a strict test, Figure S11 suggests that the models have an adequate representation of internal variability—at least on the global mean level. In addition, we use the residual test from the regression to test whether there are any gross failings in the models representation of internal variability.

Figure S11 is in the supplementary section of the paper:

From Jones et al 2013, figure S11

Figure 4

From what I can see, this demonstrates that the spectrum of the models’ internal variability (“historicalNat”) is different from the spectrum of the models’ forced response with GHG changes (“historical”).

It feels like my quantum mechanics classes all over again. I’m probably missing something obvious, and hopefully knowledgeable readers can explain.
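For readers who want to see the mechanics, here is a hedged sketch (mine, not the method from Jones et al) of the kind of power spectral density comparison being described, using Welch's method on annual-mean temperature anomaly series:

```python
import numpy as np
from scipy.signal import welch

def annual_psd(series, nperseg=64):
    """PSD of an annual-mean temperature anomaly series.
    Sampling frequency is 1/yr, so frequencies are in cycles per year."""
    freqs, psd = welch(series - series.mean(), fs=1.0, nperseg=nperseg)
    return freqs, psd

# Synthetic stand-ins (made up) for an observed series and a model run:
rng = np.random.default_rng(3)

def red_noise(n, phi=0.6, sd=0.1):
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0.0, sd)
    return x

years = 160
obs_like = 0.005 * np.arange(years) + red_noise(years)
model_like = 0.006 * np.arange(years) + red_noise(years)
f_obs, p_obs = annual_psd(obs_like)
f_mod, p_mod = annual_psd(model_like)
# Comparing p_obs with p_mod, frequency band by frequency band, is the
# essence of the test described in the quote above.
```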

Chapter 9 of AR5 – Climate Models’ Representation of Internal Variability

Chapter 9, reviewing models, stretches to over 80 pages. The section on internal variability is section 9.5.1:

However, the ability to simulate climate variability, both unforced internal variability and forced variability (e.g., diurnal and seasonal cycles) is also important. This has implications for the signal-to-noise estimates inherent in climate change detection and attribution studies where low-frequency climate variability must be estimated, at least in part, from long control integrations of climate models (Section 10.2).

Section 9.5.3:

In addition to the annual, intra-seasonal and diurnal cycles described above, a number of other modes of variability arise on multi-annual to multi-decadal time scales (see also Box 2.5). Most of these modes have a particular regional manifestation whose amplitude can be larger than that of human-induced climate change. The observational record is usually too short to fully evaluate the representation of variability in models and this motivates the use of reanalysis or proxies, even though these have their own limitations.

Figure 9.33a shows simulated internal variability of mean surface temperature from CMIP5 pre-industrial control simulations. Model spread is largest in the tropics and mid to high latitudes (Jones et al., 2012), where variability is also large; however, compared to CMIP3, the spread is smaller in the tropics owing to improved representation of ENSO variability (Jones et al., 2012). The power spectral density of global mean temperature variance in the historical simulations is shown in Figure 9.33b and is generally consistent with the observational estimates. At longer time scales, the spectra estimated from last millennium simulations, performed with a subset of the CMIP5 models, can be assessed by comparison with different NH temperature proxy records (Figure 9.33c; see Chapter 5 for details). The CMIP5 millennium simulations include natural and anthropogenic forcings (solar, volcanic, GHGs, land use) (Schmidt et al., 2012).

Significant differences between unforced and forced simulations are seen for time scales larger than 50 years, indicating the importance of forced variability at these time scales (Fernandez-Donado et al., 2013). It should be noted that a few models exhibit slow background climate drift which increases the spread in variance estimates at multi-century time scales.

Nevertheless, the lines of evidence above suggest with high confidence that models reproduce global and NH temperature variability on a wide range of time scales.

[Emphasis added]. Here is fig 9.33:

From IPCC AR5 Chapter 9

Figure 5 – Click to Expand

The bottom graph shows the spectra of the last 1,000 years – black line is observations (reconstructed from proxies), dashed lines are without GHG forcings, and solid lines are with GHG forcings.

In later articles we will review this in more detail.

Conclusion

The IPCC report on attribution is very interesting. Most attribution studies compare observations of the last 100 – 150 years with model simulations using anthropogenic GHG changes and model simulations without (note 3).

The results show a much better match for the case of the anthropogenic forcing.

The primary method is with global mean surface temperature, with more recent studies also comparing the spatial breakdown. We saw one such comparison with van Oldenborgh et al (2013). Jones et al (2013) also reviews spatial matching, finding a better fit (of models & observations) for the last half of the 20th century than the first half. (As with van Oldenborgh’s paper, the % match outside 90% of model results was greater than 10%).

My question as I first read Chapter 10 was how was the high confidence attained and what is a fingerprint?

I was led back, by following the chain of references, to one of the early papers on the topic (1996) that also had similar high confidence. (We saw this in Part Three). It was intriguing that such confidence could be attained with just a few “no forcing” model runs as comparison, all of which needed “flux adjustment”. Current models need much less, or often zero, flux adjustment.

In later papers reviewed in AR5, “no forcing” model simulations that show temperature trends or jumps are often removed or adjusted.

I’m not trying to suggest that “no forcing” GCM simulations of the last 150 years have anything like the temperature changes we have observed. They don’t.

But I was trying to understand what assumptions and premises were involved in attribution. Chapter 10 of AR5 has been valuable in suggesting references to read, but poor at laying out the assumptions and premises of attribution studies.

For clarity, as I stated in Part Three:

..as regular readers know I am fully convinced that the increases in CO2, CH4 and other GHGs over the past 100 years or more can be very well quantified into "radiative forcing" and am 100% in agreement with the IPCC's summary of the work of atmospheric physics over the last 50 years on this topic. That is, the increases in GHGs have led to something like a "radiative forcing" of 2.8 W/m²..

..Therefore, it’s “very likely” that the increases in GHGs over the last 100 years have contributed significantly to the temperature changes that we have seen.

So what’s my point?

Chapter 10 of the IPCC report fails to highlight the important assumptions in the attribution studies. Chapter 9 of the IPCC report has a section on centennial/millennial natural variability with a “high confidence” conclusion that comes with little evidence and appears to be based on a cursory comparison of the spectral results of the last 1,000 years proxy results with the CMIP5 modeling studies.

In chapter 10, the executive summary states:

..given that observed warming since 1951 is very large compared to climate model estimates of internal variability (Section 10.3.1.1.2), which are assessed to be adequate at global scale (Section 9.5.3.1), we conclude that it is virtually certain [99-100%] that internal variability alone cannot account for the observed global warming since 1951.

[Emphasis added]. I agree, and I don’t think anyone who understands radiative forcing and climate basics would disagree. To claim otherwise would be as ridiculous as, for example, claiming that tiny changes in solar insolation from eccentricity modifications over 100 kyrs cause the end of ice ages, whereas large temperature changes during these ice ages have no effect (see note 2).

The executive summary also says:

It is extremely likely [95–100%] that human activities caused more than half of the observed increase in GMST from 1951 to 2010.

The idea is plausible, but the confidence level is dependent on a premise that is claimed via one graph (fig 9.33) of the spectrum of the last 1,000 years. High confidence (“that models reproduce global and NH temperature variability on a wide range of time scales”) is just an opinion.

It’s crystal clear, by inspection of CMIP3 and CMIP5 model results, that models with anthropogenic forcing match the last 150 years of temperature changes much better than models held at constant pre-industrial forcing.

I believe natural variability is a difficult subject which needs a lot more than a cursory graph of the spectrum of the last 1,000 years to even achieve low confidence in our understanding.

Chapters 9 & 10 of AR5 haven’t investigated “natural variability” at all. For interest, some skeptic opinions are given in note 4.

I propose an alternative summary for Chapter 10 of AR5:

It is extremely likely [95–100%] that human activities caused more than half of the observed increase in GMST from 1951 to 2010, but this assessment is subject to considerable uncertainties.

References

Multi-model assessment of regional surface temperature trends, TR Knutson, F Zeng & AT Wittenberg, Journal of Climate (2013) – free paper

Attribution of observed historical near surface temperature variations to anthropogenic and natural causes using CMIP5 simulations, Gareth S Jones, Peter A Stott & Nikolaos Christidis, Journal of Geophysical Research Atmospheres (2013) – paywall paper

Application of regularised optimal fingerprinting to attribution. Part II: application to global near-surface temperature, Aurélien Ribes & Laurent Terray, Climate Dynamics (2013) – free paper

Application of regularised optimal fingerprinting to attribution. Part I: method, properties and idealised analysis, Aurélien Ribes, Serge Planton & Laurent Terray, Climate Dynamics (2013) – free paper

Reliability of regional climate model trends, GJ van Oldenborgh, FJ Doblas Reyes, SS Drijfhout & E Hawkins, Environmental Research Letters (2013) – free paper

Notes

Note 1: CMIP = Coupled Model Intercomparison Project. CMIP3 was for AR4 and CMIP5 was for AR5.

Read about CMIP5:

At a September 2008 meeting involving 20 climate modeling groups from around the world, the WCRP’s Working Group on Coupled Modelling (WGCM), with input from the IGBP AIMES project, agreed to promote a new set of coordinated climate model experiments. These experiments comprise the fifth phase of the Coupled Model Intercomparison Project (CMIP5). CMIP5 will notably provide a multi-model context for

1) assessing the mechanisms responsible for model differences in poorly understood feedbacks associated with the carbon cycle and with clouds

2) examining climate “predictability” and exploring the ability of models to predict climate on decadal time scales, and, more generally

3) determining why similarly forced models produce a range of responses…

From the website link above you can read more. CMIP5 is a substantial undertaking, with massive output of data from the latest climate models. Anyone can access this data, similar to CMIP3. Here is the Getting Started page.

And CMIP3:

In response to a proposed activity of the World Climate Research Programme (WCRP) Working Group on Coupled Modelling (WGCM), PCMDI volunteered to collect model output contributed by leading modeling centers around the world. Climate model output from simulations of the past, present and future climate was collected by PCMDI mostly during the years 2005 and 2006, and this archived data constitutes phase 3 of the Coupled Model Intercomparison Project (CMIP3). In part, the WGCM organized this activity to enable those outside the major modeling centers to perform research of relevance to climate scientists preparing the Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC). The IPCC was established by the World Meteorological Organization and the United Nations Environmental Program to assess scientific information on climate change. The IPCC publishes reports that summarize the state of the science.

This unprecedented collection of recent model output is officially known as the “WCRP CMIP3 multi-model dataset.” It is meant to serve IPCC’s Working Group 1, which focuses on the physical climate system — atmosphere, land surface, ocean and sea ice — and the choice of variables archived at the PCMDI reflects this focus. A more comprehensive set of output for a given model may be available from the modeling center that produced it.

With the consent of participating climate modelling groups, the WGCM has declared the CMIP3 multi-model dataset open and free for non-commercial purposes. After registering and agreeing to the “terms of use,” anyone can now obtain model output via the ESG data portal, ftp, or the OPeNDAP server.

As of July 2009, over 36 terabytes of data were in the archive and over 536 terabytes of data had been downloaded among the more than 2500 registered users

Note 2: This idea is explained in Ghosts of Climates Past -Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes, see especially the section under the heading: Why Theory B is Unsupportable.

Note 3: Some studies use just fixed pre-industrial values, and others compare “natural forcings” with “no forcings”.

“Natural forcings” = radiative changes due to solar insolation variations (which are not known with much confidence) and aerosols from volcanos. “No forcings” is simply fixed pre-industrial values.

Note 4: Chapter 11 (of AR5), p.982:

For the remaining projections in this chapter the spread among the CMIP5 models is used as a simple, but crude, measure of uncertainty. The extent of agreement between the CMIP5 projections provides rough guidance about the likelihood of a particular outcome. But—as partly illustrated by the discussion above—it must be kept firmly in mind that the real world could fall outside of the range spanned by these particular models. See Section 11.3.6 for further discussion.

And p. 1004:

It is possible that the real world might follow a path outside (above or below) the range projected by the CMIP5 models. Such an eventuality could arise if there are processes operating in the real world that are missing from, or inadequately represented in, the models. Two main possibilities must be considered: (1) Future radiative and other forcings may diverge from the RCP4.5 scenario and, more generally, could fall outside the range of all the RCP scenarios; (2) The response of the real climate system to radiative and other forcing may differ from that projected by the CMIP5 models. A third possibility is that internal fluctuations in the real climate system are inadequately simulated in the models. The fidelity of the CMIP5 models in simulating internal climate variability is discussed in Chapter 9..

..The response of the climate system to radiative and other forcing is influenced by a very wide range of processes, not all of which are adequately simulated in the CMIP5 models (Chapter 9). Of particular concern for projections are mechanisms that could lead to major ‘surprises’ such as an abrupt or rapid change that affects global-to-continental scale climate.

Several such mechanisms are discussed in this assessment report; these include: rapid changes in the Arctic (Section 11.3.4 and Chapter 12), rapid changes in the ocean’s overturning circulation (Chapter 12), rapid change of ice sheets (Chapter 13) and rapid changes in regional monsoon systems and hydrological climate (Chapter 14). Additional mechanisms may also exist as synthesized in Chapter 12. These mechanisms have the potential to influence climate in the near term as well as in the long term, albeit the likelihood of substantial impacts increases with global warming and is generally lower for the near term.

And p. 1009 (note that we looked at Rowlands et al 2012 in Part Five – Why Should Observations match Models?):

The CMIP3 and CMIP5 projections are ensembles of opportunity, and it is explicitly recognized that there are sources of uncertainty not simulated by the models. Evidence of this can be seen by comparing the Rowlands et al. (2012) projections for the A1B scenario, which were obtained using a very large ensemble in which the physics parameterizations were perturbed in a single climate model, with the corresponding raw multi-model CMIP3 projections. The former exhibit a substantially larger likely range than the latter. A pragmatic approach to addressing this issue, which was used in the AR4 and is also used in Chapter 12, is to consider the 5 to 95% CMIP3/5 range as a ‘likely’ rather than ‘very likely’ range.

Replacing ‘very likely’ = 90–100% with ‘likely 66–100%’ is a good start. How does this recast chapter 10?

And Chapter 1 of AR5, p. 138:

Model spread is often used as a measure of climate response uncertainty, but such a measure is crude as it takes no account of factors such as model quality (Chapter 9) or model independence (e.g., Masson and Knutti, 2011; Pennell and Reichler, 2011), and not all variables of interest are adequately simulated by global climate models..

..Climate varies naturally on nearly all time and space scales, and quantifying precisely the nature of this variability is challenging, and is characterized by considerable uncertainty.

[Emphasis added in all bold sections above]


In (still) writing what was to be Part Six (Attribution in AR5 from the IPCC), I was working through Knutson et al 2013, one of the papers referenced by AR5. That paper in turn referenced Are historical records sufficient to constrain ENSO simulations? [link corrected] by Andrew Wittenberg (2009). This is a very interesting paper and I was glad to find it because it illustrates some of the points we have been looking at.

It’s an easy paper to read (and free) and so I recommend reading the whole paper.

The paper uses NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL) CM2.1 global coupled atmosphere/ocean/land/ice GCM (see note 1 for reference and description):

CM2.1 played a prominent role in the third Coupled Model Intercomparison Project (CMIP3) and the Fourth Assessment of the Intergovernmental Panel on Climate Change (IPCC), and its tropical and ENSO simulations have consistently ranked among the world’s top GCMs [van Oldenborgh et al., 2005; Wittenberg et al., 2006; Guilyardi, 2006; Reichler and Kim, 2008].

The coupled pre-industrial control run is initialized as by Delworth et al. [2006], and then integrated for 2220 yr with fixed 1860 estimates of solar irradiance, land cover, and atmospheric composition; we focus here on just the last 2000 yr. This simulation required one full year to run on 60 processors at GFDL.

First of all we see the challenge for climate models – a reasonable-resolution coupled GCM running a single 2,000-year simulation consumed a full year of real time on 60 processors.

Wittenberg shows the results in the graph below. At the top is our observational record going back 140 years, then below are the simulation results of the SST variation in the El Nino region broken into 20 century-long segments.

From Wittenberg 2009

 Figure 1 – Click to Expand

What we see is that different centuries have very different results:

There are multidecadal epochs with hardly any variability (M5); epochs with intense, warm-skewed ENSO events spaced five or more years apart (M7); epochs with moderate, nearly sinusoidal ENSO events spaced three years apart (M2); and epochs that are highly irregular in amplitude and period (M6). Occasional epochs even mimic detailed temporal sequences of observed ENSO events; e.g., in both R2 and M6, there are decades of weak, biennial oscillations, followed by a large warm event, then several smaller events, another large warm event, and then a long quiet period. Although the model’s NINO3 SST variations are generally stronger than observed, there are long epochs (like M1) where the ENSO amplitude agrees well with observations (R1).

Wittenberg comments on the problem for climate modelers:

An unlucky modeler – who by chance had witnessed only M1-like variability throughout the first century of simulation – might have erroneously inferred that the model’s ENSO amplitude matched observations, when a longer simulation would have revealed a much stronger ENSO.

If the real-world ENSO is similarly modulated, then there is a more disturbing possibility. Had the research community been unlucky enough to observe an unrepresentative ENSO over the past 150 yr of measurements, then it might collectively have misjudged ENSO’s longer-term natural behavior. In that case, historically-observed statistics could be a poor guide for modelers, and observed trends in ENSO statistics might simply reflect natural variations..

..A 200 yr epoch of consistently strong variability (M3) can be followed, just one century later, by a 200 yr epoch of weak variability (M4). Documenting such extremes might thus require a 500+ yr record. Yet few modeling centers currently attempt simulations of that length when evaluating CGCMs under development – due to competing demands for high resolution, process completeness, and quick turnaround to permit exploration of model sensitivities.

Model developers thus might not even realize that a simulation manifested long-term ENSO modulation, until long after freezing the model development. Clearly this could hinder progress. An unlucky modeler – unaware of centennial ENSO modulation and misled by comparisons between short, unrepresentative model runs – might erroneously accept a degraded model or reject an improved model.

[Emphasis added].

Wittenberg shows the same data in the frequency domain and has presented the data in a way that illustrates the different perspective you might have depending upon your period of observation or period of model run. It’s worth taking the time to understand what is in these graphs:

From Wittenberg 2009

Figure 2 – Click to Expand

The first graph, 2a:

..time-mean spectra of the observations for epochs of length 20 yr – roughly the duration of observations from satellites and the Tropical Atmosphere Ocean (TAO) buoy array. The spectral power is fairly evenly divided between the seasonal cycle and the interannual ENSO band, the latter spanning a broad range of time scales between 1.3 to 8 yr.

So the different colored lines indicate the spectral power for each period. The black dashed line is the observed spectral power over the 140 year (observational) period. This dashed line is repeated in figure 2c.

The second graph, 2b shows the modeled results if we break up the 2000 years into 100 x 20-year periods.

The third graph, 2c, shows the modeled results broken up into 100-year periods. The probability number in the bottom right, 90%, is the likelihood of observations falling outside the range of the model results. In the paper's words, treating "the simulated subspectra [as] independent and identically distributed", the number "at bottom right is the probability that an interval so constructed would bracket the next subspectrum to emerge from the model".

So what this says, paraphrasing and over-simplifying: “we are 90% sure that the observations can’t be explained by the models”.
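To make the construction concrete, here is a hedged sketch (my own, not Wittenberg's code) of splitting a long control run into fixed-length epochs, computing a spectrum for each, and forming an empirical interval from the collection of subspectra:

```python
import numpy as np
from scipy.signal import welch

def epoch_spectra(series, epoch_len, fs=12.0):
    """Split a long series (e.g. monthly NINO3 SST) into non-overlapping
    epochs of epoch_len samples and return the PSD of each epoch."""
    n_epochs = len(series) // epoch_len
    spectra = []
    for k in range(n_epochs):
        seg = series[k * epoch_len:(k + 1) * epoch_len]
        freqs, psd = welch(seg - seg.mean(), fs=fs, nperseg=min(256, epoch_len))
        spectra.append(psd)
    return freqs, np.array(spectra)

def empirical_interval(spectra, level=0.90):
    """Percentile band across epochs at each frequency (e.g. 5th-95th)."""
    lo = np.percentile(spectra, 100 * (1 - level) / 2, axis=0)
    hi = np.percentile(spectra, 100 * (1 + level) / 2, axis=0)
    return lo, hi

# With a 2,000-yr monthly control run, epoch_len = 100*12 gives 20 centennial
# subspectra; an observed spectrum lying outside the empirical band at many
# frequencies is the situation discussed above.
```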

Of course, this independent and identically distributed assumption is not valid, but as we will hopefully get to in later articles in this series, most of these statistical assumptions – stationarity, gaussian statistics, AR(1) noise – are problematic for real-world non-linear systems.

To be clear, the paper’s author is demonstrating a problem in such a statistical approach.

Conclusion

Models are not reality. This is a simulation with the GFDL model. It doesn’t mean ENSO is like this. But it might be.

The paper illustrates a problem I highlighted in Part Five – observations are only one “realization” of possible outcomes. The last century or century and a half of surface observations could be an outlier. The last 30 years of satellite data could equally be an outlier. Even if our observational periods are not an outlier and are right there on the mean or median, matching climate models to observations may still greatly under-represent natural climate variability.

Non-linear systems can demonstrate variability over much longer time-scales than the typical period between characteristic events. We will return to this in more detail in future articles. Such systems do not have to be "chaotic" (where chaotic means that tiny changes in initial conditions cause rapidly diverging results).

What period of time is necessary to capture natural climate variability?

I will give the last word to the paper’s author:

More worryingly, if nature’s ENSO is similarly modulated, there is no guarantee that the 150 yr historical SST record is a fully representative target for model development..

..In any case, it is sobering to think that even absent any anthropogenic changes, the future of ENSO could look very different from what we have seen so far.

References

Are historical records sufficient to constrain ENSO simulations? Andrew T. Wittenberg, GRL (2009) – free paper

GFDL’s CM2 Global Coupled Climate Models. Part I: Formulation and Simulation Characteristics, Delworth et al, Journal of Climate, 2006 – free paper

Notes

Note 1: The paper referenced for the GFDL model is GFDL’s CM2 Global Coupled Climate Models. Part I: Formulation and Simulation Characteristics, Delworth et al, 2006:

The formulation and simulation characteristics of two new global coupled climate models developed at NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL) are described.

The models were designed to simulate atmospheric and oceanic climate and variability from the diurnal time scale through multicentury climate change, given our computational constraints. In particular, an important goal was to use the same model for both experimental seasonal to interannual forecasting and the study of multicentury global climate change, and this goal has been achieved.

Two versions of the coupled model are described, called CM2.0 and CM2.1. The versions differ primarily in the dynamical core used in the atmospheric component, along with the cloud tuning and some details of the land and ocean components. For both coupled models, the resolution of the land and atmospheric components is 2° latitude x 2.5° longitude; the atmospheric model has 24 vertical levels.

The ocean resolution is 1° in latitude and longitude, with meridional resolution equatorward of 30° becoming progressively finer, such that the meridional resolution is 1/3° at the equator. There are 50 vertical levels in the ocean, with 22 evenly spaced levels within the top 220 m. The ocean component has poles over North America and Eurasia to avoid polar filtering. Neither coupled model employs flux adjustments.

The control simulations have stable, realistic climates when integrated over multiple centuries. Both models have simulations of ENSO that are substantially improved relative to previous GFDL coupled models. The CM2.0 model has been further evaluated as an ENSO forecast model and has good skill (CM2.1 has not been evaluated as an ENSO forecast model). Generally reduced temperature and salinity biases exist in CM2.1 relative to CM2.0. These reductions are associated with 1) improved simulations of surface wind stress in CM2.1 and associated changes in oceanic gyre circulations; 2) changes in cloud tuning and the land model, both of which act to increase the net surface shortwave radiation in CM2.1, thereby reducing an overall cold bias present in CM2.0; and 3) a reduction of ocean lateral viscosity in the extratropics in CM2.1, which reduces sea ice biases in the North Atlantic.

Both models have been used to conduct a suite of climate change simulations for the 2007 Intergovernmental Panel on Climate Change (IPCC) assessment report and are able to simulate the main features of the observed warming of the twentieth century. The climate sensitivities of the CM2.0 and CM2.1 models are 2.9 and 3.4 K, respectively. These sensitivities are defined by coupling the atmospheric components of CM2.0 and CM2.1 to a slab ocean model and allowing the model to come into equilibrium with a doubling of atmospheric CO2. The output from a suite of integrations conducted with these models is freely available online (see http://nomads.gfdl.noaa.gov/).

There’s a brief description of the newer model version CM3.0 on the GFDL page.

Read Full Post »

In Part Three we looked at attribution in the early work on this topic by Hegerl et al 1996. I started to write Part Four as the follow up on Attribution as explained in the 5th IPCC report (AR5), but got caught up in the many volumes of AR5.

Instead, for this article I decided to focus on what might seem like an obscure point. I hope readers stay with me because it is important.

Here is a graphic from chapter 11 of IPCC AR5:

From IPCC AR5 Chapter 11

Figure 1

And in the introduction, chapter 1:

Climate in a narrow sense is usually defined as the average weather, or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period of time ranging from months to thousands or millions of years. The relevant quantities are most often surface variables such as temperature, precipitation and wind.

Classically the period for averaging these variables is 30 years, as defined by the World Meteorological Organization.

Climate in a wider sense also includes not just the mean conditions, but also the associated statistics (frequency, magnitude, persistence, trends, etc.), often combining parameters to describe phenomena such as droughts. Climate change refers to a change in the state of the climate that can be identified (e.g., by using statistical tests) by changes in the mean and/or the variability of its properties, and that persists for an extended period, typically decades or longer.

[Emphasis added].

Weather is an Initial Value Problem, Climate is a Boundary Value Problem

The idea is fundamental, the implementation is problematic.

As explained in Natural Variability and Chaos – Two – Lorenz 1963, there are two key points about a chaotic system:

  1. With even a minute uncertainty in the initial starting condition, the predictability of future states is very limited
  2. Over a long time period the statistics of the system are well-defined

(Being technical, the statistics are well-defined in a transitive system).

So in essence, we can’t predict the exact state of the future – from the current conditions – beyond a certain timescale which might be quite small. In fact, in current weather prediction this time period is about one week.

After a week we might as well say either “the weather on that day will be the same as now” or “the weather on that day will be the climatological average” – and either of these will be better than trying to predict the weather based on the initial state.

No one disagrees on this first point.

In current climate science and meteorology the term used is the skill of the forecast. Skill means not how good the forecast is, but how much better it is than a naive approach like "it's July in New York City so the maximum air temperature today will be 28ºC".
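(As a small aside for readers who like code, here is a minimal sketch of a mean-square-error skill score measured against a climatological baseline – the numbers are invented, and real verification uses large datasets, but the idea is the same.)

import numpy as np

def skill_score(forecast, observed, climatology):
    # MSE skill score: 1 = perfect forecast,
    # 0 = no better than climatology, negative = worse than climatology
    mse_forecast = np.mean((forecast - observed) ** 2)
    mse_climatology = np.mean((climatology - observed) ** 2)
    return 1.0 - mse_forecast / mse_climatology

# Invented daily maximum temperatures (degC) for a week in July in New York City
observed    = np.array([27.0, 31.5, 29.0, 25.5, 30.0, 28.5, 33.0])
forecast    = np.array([27.5, 30.0, 29.5, 26.5, 29.0, 29.0, 31.0])
climatology = np.full(7, 28.0)   # the naive "it's July, so 28 degC" forecast

print(round(skill_score(forecast, observed, climatology), 2))   # > 0 means real skill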

What happens in practice, as can be seen in the simple Lorenz system shown in Part Two, is that a tiny uncertainty in the starting condition gets amplified. Two almost identical starting conditions will diverge rapidly – the "butterfly effect". Eventually the two trajectories are no more alike than either one of them and a state chosen at random from the system's future.

The wide divergence doesn’t mean that the future state can be anything. Here’s an example from the simple Lorenz system for three slightly different initial conditions:

[Figure: Lorenz-63 system – x versus time over 5,000 seconds for three slightly different initial conditions]

Figure 2

We can see that the three conditions that looked identical for the first 20 seconds (see figure 2 in Part Two) have diverged. The values are bounded but at any given time we can’t predict what the value will be.
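For readers who want to reproduce this kind of behaviour themselves, here is a minimal sketch (my own, not taken from any paper) that integrates the Lorenz-63 equations for three initial conditions differing by one part in a million – it is not an attempt to reproduce the figure exactly, just the divergence.

import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(x0, dt=0.01, steps=5000):
    # Basic 4th-order Runge-Kutta integration of the Lorenz-63 equations
    traj = np.empty((steps, 3))
    s = np.array(x0, dtype=float)
    for i in range(steps):
        k1 = lorenz63(s)
        k2 = lorenz63(s + 0.5 * dt * k1)
        k3 = lorenz63(s + 0.5 * dt * k2)
        k4 = lorenz63(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i] = s
    return traj

# Three initial conditions differing only by one part in a million in x
runs = [integrate([1.0 + 1e-6 * i, 1.0, 20.0]) for i in range(3)]
for step in (1000, 2500, 4500):      # early, middle and late in the run
    print(step, [round(r[step, 0], 2) for r in runs])
# The three runs are indistinguishable at first, then completely different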

On the second point – the statistics of the system – there is a tiny hiccup.

But first let’s review what is agreed upon. Climate is the statistics of weather. Weather is unpredictable more than a week ahead. Climate, as the statistics of weather, might be predictable. That is, just because weather is unpredictable, it doesn’t mean (or prove) that climate is also unpredictable.

This is what we find with simple chaotic systems.

So in the endeavor of climate modeling the best we can hope for is a probabilistic forecast. We have to run “a lot” of simulations and review the statistics of the parameter we are trying to measure.

To give a concrete example, we might determine from model simulations that the sea surface temperature in the western Pacific (between a certain latitude and longitude) in July has a mean of 29ºC with a standard deviation of 0.5ºC, while for a certain part of the north Atlantic it is 6ºC with a standard deviation of 3ºC. In the first case the spread of results tells us – if we are confident in our predictions – that we know the western Pacific SST quite accurately, but the north Atlantic SST has a lot of uncertainty. We can't do anything about the model spread. In the end, the statistics are knowable (in theory), but the actual value on a given day or month or year is not.
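(A toy illustration of what "reviewing the statistics" looks like in code – the numbers below are invented stand-ins for what an ensemble of model runs would produce, not real model output.)

import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for July-mean SST (degC) from 50 ensemble members
west_pacific   = rng.normal(29.0, 0.5, size=50)
north_atlantic = rng.normal(6.0, 3.0, size=50)

for name, sst in (("West Pacific", west_pacific), ("North Atlantic", north_atlantic)):
    print(name, "mean =", round(sst.mean(), 1), "degC,",
          "std dev =", round(sst.std(ddof=1), 1), "degC")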

Now onto the hiccup.

With “simple” chaotic systems that we can perfectly model (note 1) we don’t know in advance the timescale of “predictable statistics”. We have to run lots of simulations over long time periods until the statistics converge on the same result. If we have parameter uncertainty (see Ensemble Forecasting) this means we also have to run simulations over the spread of parameters.
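How long is "long enough"? There is no way to know in advance – you keep extending the run (or adding ensemble members) until the statistics stop drifting. Here is a minimal sketch of that kind of convergence check, using a slowly-decorrelating random series as a stand-in for the output of a long control simulation:

import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a long control-run output (e.g. an annual-mean temperature index):
# an AR(1) series with slow decorrelation, so long averages converge only slowly
n = 200_000
series = np.empty(n)
series[0] = 0.0
for i in range(1, n):
    series[i] = 0.999 * series[i - 1] + rng.normal(0.0, 0.1)

window = 20_000
window_means = [series[i:i + window].mean() for i in range(0, n, window)]
print([round(m, 2) for m in window_means])
# We would only call the statistics "predictable" once successive
# window means (and higher moments) stop drifting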

Here’s my suggested alternative of the initial value vs boundary value problem:

Suggested replacement for AR5, Box 11.1, Figure 2

Figure 3

So one body made an ad hoc definition of climate as the 30-year average of weather.

If this definition is accepted then it does not follow that "climate" is a "boundary value problem" at all. On this definition climate may be an initial value problem – and therefore a massive problem, given our ability to forecast only about one week ahead.

Suppose, equally reasonably, that the statistics of weather (=climate), given constant forcing (note 2), are predictable over a 10,000 year period.

In that case, with near-perfect models, we could be confident about the averages, standard deviations, skews, etc. of the temperature at various locations on the globe over a 10,000-year period.

Conclusion

The fact that chaotic systems exhibit certain behavior doesn’t mean that 30-year statistics of weather can be reliably predicted.

30-year statistics might be just as dependent on the initial state as the weather three weeks from today.

Notes

Note 1: The climate system is obviously imperfectly modeled by GCMs, and this will always be the case. The advantage of a simple model is we can state that the model is a perfect representation of the system – it is just a definition for convenience. It allows us to evaluate how slight changes in initial conditions or parameters affect our ability to predict the future.

The IPCC report also has continual reminders that the model is not reality, for example, chapter 11, p. 982:

For the remaining projections in this chapter the spread among the CMIP5 models is used as a simple, but crude, measure of uncertainty. The extent of agreement between the CMIP5 projections provides rough guidance about the likelihood of a particular outcome. But — as partly illustrated by the discussion above — it must be kept firmly in mind that the real world could fall outside of the range spanned by these particular models.

[Emphasis added].

Chapter 1, p.138:

Model spread is often used as a measure of climate response uncertainty, but such a measure is crude as it takes no account of factors such as model quality (Chapter 9) or model independence (e.g., Masson and Knutti, 2011; Pennell and Reichler, 2011), and not all variables of interest are adequately simulated by global climate models..

..Climate varies naturally on nearly all time and space scales, and quantifying precisely the nature of this variability is challenging, and is characterized by considerable uncertainty.

I haven’t yet been able to determine how these firmly noted and challenging uncertainties have been factored into the quantification of 95-100%, 99-100%, etc, in the various chapters of the IPCC report.

Note 2:  There are some complications with defining exactly what system is under review. For example, do we take the current solar output, current obliquity, precession and eccentricity as fixed? If so, then any statistics will be calculated for a condition that will anyway be changing. Alternatively, we can take these values as changing inputs in so far as we know the changes – which is true for obliquity, precession and eccentricity but not for solar output.

The details don’t really alter the main point of this article.

Read Full Post »

I’ve been somewhat sidetracked on this series, mostly by starting up a company and having no time, but also by the voluminous distractions of IPCC AR5. The subject of attribution could be a series by itself but as I started the series Natural Variability and Chaos it makes sense to weave it into that story.

In Part One and Part Two we had a look at chaotic systems and what that might mean for weather and climate. I was planning to develop those ideas a lot more before discussing attribution, but anyway..

AR5, Chapter 10: Attribution is 85 pages on the idea that the changes over the last 50 or 100 years in mean surface temperature – and also some other climate variables – can be attributed primarily to anthropogenic greenhouse gases.

The technical side of the discussion fascinated me, but has a large statistical component. I’m a rookie with statistics, and maybe because of this, I’m often suspicious about statistical arguments.

Digression on Statistics

The foundation of a lot of statistics is the idea of independent events. For example, spin a roulette wheel and you get a number between 0 and 36 and a color that is red, black – or if you’ve landed on a zero, neither.

The statistics are simple – each spin of the roulette wheel is an independent event – that is, it has no relationship with the last spin of the roulette wheel. So, looking ahead, what is the chance of getting 5 two times in a row? The answer (with a 0 only and no “00” as found in some roulette tables) is 1/37 x 1/37 = 0.073%.

However, after you have spun the roulette wheel and got a 5, what is the chance of a second 5? It’s now just 1/37 = 2.7%. The past has no impact on the future statistics. Most of real life doesn’t correspond particularly well to this idea, apart from playing games of chance like poker and so on.
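(For anyone who wants to check this numerically, a trivial simulation of a single-zero wheel:)

import random

random.seed(42)
N = 1_000_000
spins = [random.randint(0, 36) for _ in range(N + 1)]   # single-zero wheel: 0 to 36

pairs_of_fives = sum(spins[i] == 5 and spins[i + 1] == 5 for i in range(N))
fives = sum(spins[i] == 5 for i in range(N))

print("P(two 5s in a row)    ~", pairs_of_fives / N)       # about 1/37^2 = 0.00073
print("P(5 | previous was 5) ~", pairs_of_fives / fives)   # about 1/37   = 0.027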

I was in the gym the other day and although I try and drown it out with music from my iPhone, the Travesty (aka “the News”) was on some of the screens in the gym – with text of the “high points” on the screen aimed at people trying to drown out the annoying travestyreaders. There was a report that a new study had found that autism was caused by “Cause X” – I have blanked it out to avoid any unpleasant feeling for parents of autistic kids – or people planning on having kids who might worry about “Cause X”.

It did get me thinking – if you have, let's say, 10,000 potential candidates for causing autism, and you test each one at the 95% significance level (that is, a 5% chance of a false positive for each candidate), what is the outcome? Well, if there is a random spread of autism among the population with no actual cause among the candidates (let's say it is caused by a random genetic mutation with no link to any parental behavior, parental genetics or the environment) then you will expect to find about 500 "statistically significant" factors for autism simply by chance. That's 500, when none of them are actually the real cause. Plenty of fodder for pundits though.
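A quick simulation makes the point – the sample sizes below are arbitrary and there is, of course, no real effect built in (this sketch uses scipy's two-sample t-test):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_candidates = 10_000
n_per_group = 200

false_positives = 0
for _ in range(n_candidates):
    # The candidate "cause" has no real effect: both groups come from the same distribution
    exposed = rng.normal(0.0, 1.0, n_per_group)
    unexposed = rng.normal(0.0, 1.0, n_per_group)
    _, p_value = stats.ttest_ind(exposed, unexposed)
    if p_value < 0.05:
        false_positives += 1

print(false_positives)   # around 500 "significant" factors, every one of them spurious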

That’s one problem with statistics – the answer you get unavoidably depends on your frame of reference.

The questions I have about attribution are unrelated to this specific point about statistics, but there are statistical arguments in the attribution field that seem fatally flawed. Luckily I’m a statistical novice so no doubt readers will set me straight.

On another unrelated point about statistical independence, only slightly more relevant to the question at hand, Pirtle, Meyer & Hamilton (2010) said:

In short, we note that GCMs are commonly treated as independent from one another, when in fact there are many reasons to believe otherwise. The assumption of independence leads to increased confidence in the ‘‘robustness’’ of model results when multiple models agree. But GCM independence has not been evaluated by model builders and others in the climate science community. Until now the climate science literature has given only passing attention to this problem, and the field has not developed systematic approaches for assessing model independence.

.. end of digression

Attribution History

In my efforts to understand Chapter 10 of AR5 I followed up on a lot of references and ended up winding my way back to Hegerl et al 1996.

Gabriele Hegerl is one of the lead authors of Chapter 10 of AR5, was one of the two coordinating lead authors of the Attribution chapter of AR4, and one of four lead authors on the relevant chapter of AR3 – and of course has a lot of papers published on this subject.

As is often the case, I find that to understand a subject you have to start with a focus on the earlier papers because the later work doesn’t make a whole lot of sense without this background.

This paper by Hegerl and her colleagues uses the work of one of the co-authors, Klaus Hasselmann – his 1993 paper "Optimal fingerprints for detection of time dependent climate change".

Fingerprints, by the way, seems like a marketing term. Fingerprints evokes the idea that you can readily demonstrate that John G. Doe of 137 Smith St, Smithsville was at least present at the crime scene and there is no possibility of confusing his fingerprints with John G. Dode who lives next door even though their mothers could barely tell them apart.

This kind of attribution is more in the realm of “was it the 6ft bald white guy or the 5’5″ black guy”?

Well, let’s set aside questions of marketing and look at the details.

Detecting GHG Climate Change with Optimal Fingerprint Methods in 1996

The essence of the method is to compare observations (measurements) with:

  • model runs with GHG forcing
  • model runs with “other anthropogenic” and natural forcings
  • model runs with internal variability only

Then based on the fit you can distinguish one from the other. The statistical basis is covered in detail in Hasselmann 1993 and more briefly in this paper: Hegerl et al 1996 – both papers are linked below in the References.
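To make the idea concrete, here is a heavily simplified toy version of an optimal fingerprint calculation – my own sketch, not the Hegerl/Hasselmann method as actually implemented, with every input invented. The model-predicted GHG pattern is weighted by the inverse of the natural variability covariance (so the comparison is made in directions where internal variability is weak), the observations are projected onto the resulting fingerprint, and the projection is compared against the distribution obtained from an unforced control run.

import numpy as np

rng = np.random.default_rng(0)
n = 40    # number of points in a toy spatial grid

# Invented inputs: a model-predicted GHG warming pattern, a long unforced
# "control run" providing samples of internal variability, and "observations"
ghg_pattern = np.linspace(0.5, 1.5, n)
noise_cov_true = 0.3 * np.eye(n) + 0.1 * np.ones((n, n))
control = rng.multivariate_normal(np.zeros(n), noise_cov_true, size=500)
observed = 0.8 * ghg_pattern + rng.multivariate_normal(np.zeros(n), noise_cov_true)

C = np.cov(control, rowvar=False)                 # estimated natural-variability covariance
fingerprint = np.linalg.solve(C, ghg_pattern)     # "optimal" fingerprint ~ C^-1 x signal

d_obs = fingerprint @ observed                    # detection variable for the observations
d_control = control @ fingerprint                 # same variable under unforced variability

p_natural = np.mean(d_control >= d_obs)
print("fraction of control run exceeding the observed value:", p_natural)

Note that the whole exercise stands or falls on C – the estimate of natural variability – which is exactly the caveat discussed below.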

At this point I make another digression.. as regular readers know I am fully convinced that the increases in CO2, CH4 and other GHGs over the past 100 years or more can be very well quantified into “radiative forcing” and am 100% in agreement with the IPCCs summary of the work of atmospheric physics over the last 50 years on this topic. That is, the increases in GHGs have led to something like a “radiative forcing” of 2.8 W/m² [corrected, thanks to niclewis].

And there isn’t any scientific basis for disputing this “pre-feedback” value. It’s simply the result of basic radiative transfer theory, well-established, and well-demonstrated in observations both in the lab and through the atmosphere. People confused about this topic are confused about science basics and comments to the contrary may be allowed or more likely will be capriciously removed due to the fact that there have been more than 50 posts on this topic (post your comments on those instead). See The “Greenhouse” Effect Explained in Simple Terms and On Uses of A 4 x 2: Arrhenius, The Last 15 years of Temperature History and Other Parodies.

Therefore, it’s “very likely” that the increases in GHGs over the last 100 years have contributed significantly to the temperature changes that we have seen.

To say otherwise – and still accept physics basics – means believing that the radiative forcing has been “mostly” cancelled out by feedbacks while internal variability has been amplified by feedbacks to cause a significant temperature change.

Yet this work on attribution seems to be fundamentally flawed.

Here was the conclusion:

We find that the latest observed 30-year trend pattern of near-surface temperature change can be distinguished from all estimates of natural climate variability with an estimated risk of less than 2.5% if the optimal fingerprint is applied.

With the caveats, that to me, eliminated the statistical basis of the previous statement:

The greatest uncertainty of our analysis is the estimate of the natural variability noise level..

..The shortcomings of the present estimates of natural climate variability cannot be readily overcome. However, the next generation of models should provide us with better simulations of natural variability. In the future, more observations and paleoclimatic information should yield more insight into natural variability, especially on longer timescales. This would enhance the credibility of the statistical test.

Earlier in the paper the authors said:

..However, it is generally believed that models reproduce the space-time statistics of natural variability on large space and long time scales (months to years) reasonably realistic. The verification of variability of CGMCs [coupled GCMs] on decadal to century timescales is relatively short, while paleoclimatic data are sparce and often of limited quality.

..We assume that the detection variable is Gaussian with zero mean, that is, that there is no long-term nonstationarity in the natural variability.

[Emphasis added].

The climate models used would be considered rudimentary by today’s standards. Three different coupled atmosphere-ocean GCMs were used. However, each of them required “flux corrections”.

This method was pretty much the standard until the post 2000 era. The climate models “drifted”, unless, in deity-like form, you topped up (or took out) heat and momentum from various grid boxes.

That is, the models themselves struggled (in 1996) to represent climate unless the climate modeler knew, and corrected for, the long term “drift” in the model.

Conclusion

In the next article we will look at more recent work in attribution and fingerprints and see whether the field has developed.

But in this article we see that the conclusion of an attribution study in 1996 was that there was only a “2.5% chance” that recent temperature changes could be attributed to natural variability. At the same time, the question of how accurate the models were in simulating natural variability was noted but never quantified. And the models were all “flux corrected”. This means that some aspects of the long term statistics of climate were considered to be known – in advance.

So I find it difficult to accept any statistical significance in the study at all.

If the finding instead was introduced with the caveat “assuming the accuracy of our estimates of long term natural variability of climate is correct..” then I would probably be quite happy with the finding. And that question is the key.

The question should be:

What is the likelihood that climate models accurately represent the long-term statistics of natural variability?

  • Virtually certain
  • Very likely
  • Likely
  • About as likely as not
  • Unlikely
  • Very unlikely
  • Exceptionally unlikely

So far I have yet to run across a study that poses this question.

References

Bindoff, N.L., et al, 2013: Detection and Attribution of Climate Change: from Global to Regional. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change

Detecting greenhouse gas induced climate change with an optimal fingerprint method, Hegerl, von Storch, Hasselmann, Santer, Cubasch & Jones, Journal of Climate (1996)

What does it mean when climate models agree? A case for assessing independence among general circulation models, Zachary Pirtle, Ryan Meyer & Andrew Hamilton, Environ. Sci. Policy (2010)

Optimal fingerprints for detection of time dependent climate change, Klaus Hasselmann, Journal of Climate (1993)

Read Full Post »

In Part Seven – GCM I  through Part Ten – GCM IV we looked at GCM simulations of ice ages.

These were mostly attempts at “glacial inception”, that is, starting an ice age. But we also saw a simulation of the last 120 kyrs which attempted to model a complete ice age cycle including the last termination. As we saw, there were lots of limitations..

One condition for glacial inception, “perennial snow cover at high latitudes”, could be produced with a high-resolution coupled atmosphere-ocean GCM (AOGCM), but that model did suffer from the problem of having a cold bias at high latitudes.

The (reasonably accurate) simulation of a whole cycle including inception and termination came by virtue of having the internal feedbacks (ice sheet size & height and CO2 concentration) prescribed.

Just to be clear to new readers, these comments shouldn’t indicate that I’ve uncovered some secret that climate scientists are trying to hide, these points are all out in the open and usually highlighted by the authors of the papers.

In Part Nine – GCM III, one commenter highlighted a 2013 paper by Ayako Abe-Ouchi and co-workers, where the journal in question, Nature, had quite a marketing pitch on the paper. I made a brief comment on it in a later article in response to another question, including that I had emailed the lead author asking a question about the modeling work (how was a 120 kyr cycle actually simulated?).

Most recently, in Eighteen – "Probably Nonlinearity" of Unknown Origin, another commenter highlighted it, which rekindled my enthusiasm, and I went back and read the paper again. It turns out that my understanding of the paper had been wrong. It wasn't really a GCM paper at all. It was an ice sheet paper.

There is a whole field of papers on ice sheet models deserving attention.

GCM review

Let’s review GCMs first of all to help us understand where ice sheet models fit in the hierarchy of climate simulations.

GCMs consist of a number of different modules coupled together. The first GCMs were mostly “atmospheric GCMs” = AGCMs, and either they had a “swamp ocean” = a mixed layer of fixed depth, or had prescribed ocean boundary conditions set from an ocean model or from an ocean reconstruction.

Less commonly, unless you worked just with oceans, there were ocean GCMs with prescribed atmospheric boundary conditions (prescribed heat and momentum flux from the atmosphere).

Then coupled atmosphere-ocean GCMs came along = AOGCMs. It was a while before these two parts matched up to the point where there was no “flux drift”, that is, no disappearing heat flux from one part of the model.

Why so difficult to get these two models working together? One important reason comes down to the time-scales involved, which result from the difference in heat capacity and momentum of the two parts of the climate system. The heat capacity and momentum of the ocean are much, much higher than those of the atmosphere.

And when we add ice sheet models – ISMs – we have yet another time scale to consider.

  • the atmosphere changes in days, weeks and months
  • the ocean changes in years, decades and centuries
  • the ice sheets change in centuries, millennia and tens of millennia

This creates a problem for climate scientists who want to apply the fundamental equations of heat, mass & momentum conservation along with parameterizations for "stuff not well understood" and "stuff quite-well-understood but whose parameters are sub-grid". To run a high resolution AOGCM for a 1,000-year simulation might consume a year of supercomputer time, and the ice sheet has barely moved during that period.

Ice Sheet Models

Scientists who study ice sheets have a whole bunch of different questions. They want to understand how the ice sheets developed.

What makes them grow, shrink, move, slide, melt.. What parameters are important? What parameters are well understood? What research questions are most deserving of attention? And:

Does our understanding of ice sheet dynamics allow us to model the last glacial cycle?

To answer that question we need a model for ice sheet dynamics, and to that model we need to apply some boundary conditions from other "less interesting" models, like GCMs. As a result, there are a few approaches to setting the boundary conditions so we can do our interesting work of modeling ice sheets.

Before we look at that, let’s look at the dynamics of ice sheets themselves.

Ice Sheet Dynamics

First, in the theme of the last paper, Eighteen – “Probably Nonlinearity” of Unknown Origin, here is Marshall & Clark 2002:

The origin of the dominant 100-kyr ice-volume cycle in the absence of substantive radiation forcing remains one of the most vexing questions in climate dynamics

We can add that to the 34 papers reviewed in that previous article. This paper by Marshall & Clark is definitely a good quick read for people who want to understand ice sheets a little more.

Ice doesn’t conduct a lot of heat – it is a very good insulator. So the important things with ice sheets happen at the top and the bottom.

At the top, ice melts, and the water refreezes, runs off or evaporates. In combination, the loss is called ablation. Then we have precipitation that adds to the ice sheet. So the net effect determines what happens at the top of the ice sheet.

At the bottom, when the ice sheet is very thin, heat can be conducted through from the atmosphere to the base and make it melt – if the atmosphere is warm enough. As the ice sheet gets thicker, very little heat is conducted through. However, there are two important sources of heat at the base which result in "basal sliding". One source is geothermal energy. This is around 0.1 W/m², which is very small unless we are dealing with an insulating material (like ice) and lots of time (like ice sheets). The other source is the shear stress in the ice sheet, which can create a lot of heat via the mechanics of deformation.

Once the ice sheet is able to start sliding, the dynamics create a completely different result compared to an ice sheet “cold-pinned” to the rock underneath.

Some comments from Marshall and Clark:

Ice sheet deglaciation involves an amount of energy larger than that provided directly from high-latitude radiation forcing associated with orbital variations. Internal glaciologic, isostatic, and climatic feedbacks are thus essential to explain the deglaciation.

..Moreover, our results suggest that thermal enabling of basal flow does not occur in response to surface warming, which may explain why the timing of the Termination II occurred earlier than predicted by orbital forcing [Gallup et al., 2002].

Results suggest that basal temperature evolution plays an important role in setting the stage for glacial termination. To confirm this hypothesis, model studies need improved basal process physics to incorporate the glaciological mechanisms associated with ice sheet instability (surging, streaming flow).

..Our simulations suggest that a substantial fraction (60% to 80%) of the ice sheet was frozen to the bed for the first 75 kyr of the glacial cycle, thus strongly limiting basal flow. Subsequent doubling of the area of warm-based ice in response to ice sheet thickening and expansion and to the reduction in downward advection of cold ice may have enabled broad increases in geologically- and hydrologically-mediated fast ice flow during the last deglaciation.

Increased dynamical activity of the ice sheet would lead to net thinning of the ice sheet interior and the transport of large amounts of ice into regions of intense ablation both south of the ice sheet and at the marine margins (via calving). This has the potential to provide a strong positive feedback on deglaciation.

The timescale of basal temperature evolution is of the same order as the 100-kyr glacial cycle, suggesting that the establishment of warm-based ice over a large enough area of the ice sheet bed may have influenced the timing of deglaciation. Our results thus reinforce the notion that at a mature point in their life cycle, 100-kyr ice sheets become independent of orbital forcing and affect their own demise through internal feedbacks.

[Emphasis added]

In this article we will focus on a 2007 paper by Ayako Abe-Ouchi, T Segawa & Fuyuki Saito. This paper describes essentially the same modeling approach later used in Abe-Ouchi's 2013 Nature paper.

The Ice Model

The ice sheet model has a time step of 2 years, a 1° latitude x 1° longitude grid from 30°N to the north pole, and 20 vertical levels.

Equations for the ice sheet include sliding velocity, ice sheet deformation, the heat transfer through the lithosphere, the bedrock elevation and the accumulation rate on the ice sheet.

Note, there is a reference that some of the model is based on work described in Sensitivity of Greenland ice sheet simulation to the numerical procedure employed for ice sheet dynamics, F Saito & A Abe-Ouchi, Ann. Glaciol., (2005) – but I don’t have access to this journal. (If anyone does, please email the paper to me at scienceofdoom – you know what goes here – gmail.com).

How did they calculate the accumulation on the ice sheet? There is an equation:

Acc = Aref × (1 + dP)^Ts

Ts is the surface temperature, dP is a measure of aridity and Aref is a reference value for accumulation. This is a highly parameterized method of calculating how quickly the ice sheet is thickening or thinning. The authors reference Marshall et al 2002 for this equation, and that paper is very instructive in how poorly understood ice sheet dynamics actually are.

Here is one part of the relevant section in Marshall et al 2002:

..For completeness here, note that we have also experimented with spatial precipitation patterns that are based on present-day distributions.

Under this treatment, local precipitation rates diminish exponentially with local atmospheric cooling, reflecting the increased aridity that can be expected under glacial conditions (Tarasov and Peltier, 1999).

Paleo-precipitation under this parameterization has the form:

P(λ,θ,t) = Pobs(λ,θ) × (1 + dp)^ΔT(λ,θ,t) × exp[βp · max(hs(λ,θ,t) − ht, 0)]       (18)

The parameter dp in this equation represents the percentage of drying per 1°C; Tarasov and Peltier (1999) choose a value of 3% per °C; dp = 0.03.

[Emphasis added, color added to highlight the relevant part of the equation]

So dp is a parameter that attempts to account for increasing aridity in colder glacial conditions, and in their 2002 paper Marshall et al describe it as 1 of 4 “free parameters” that are investigated to see what effect they have on ice sheet development around the LGM.

Abe-Ouchi and co-authors took a slightly different approach that certainly seems like an improvement over Marshall et al 2002:

[Equation 11 from Abe-Ouchi et al 2007: the aridity parameter dP specified as a linear function of ice sheet area]

So their value of aridity is just a linear function of ice sheet area – from zero to a fixed value, rather than a fixed value no matter the ice sheet size.
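Putting the two pieces together, here is a sketch of how I read the accumulation parameterization – the parameter values are illustrative guesses, not the values used in Abe-Ouchi et al 2007 (how Ts itself is obtained is described below):

def aridity(ice_area, ice_area_ref, dp_max=0.03):
    # Abe-Ouchi-style aridity: a linear ramp from 0 (no ice) up to a fixed
    # maximum when the ice sheet reaches a reference (e.g. LGM) extent.
    # dp_max = 0.03 (3% drying per degC) is the Tarasov & Peltier value quoted
    # by Marshall et al 2002; the value used by Abe-Ouchi et al may differ.
    return dp_max * min(ice_area / ice_area_ref, 1.0)

def accumulation(a_ref, ts_anomaly, dp):
    # Acc = Aref x (1 + dP)^Ts: accumulation scaled by a drying factor raised
    # to the power of the surface temperature anomaly Ts (degC, negative = colder)
    return a_ref * (1.0 + dp) ** ts_anomaly

# Illustrative numbers only (not from the paper): a half-LGM-sized ice sheet
# that is 10 degC colder than the reference climate accumulates about 14% less
dp = aridity(ice_area=0.5, ice_area_ref=1.0)
print(accumulation(a_ref=0.3, ts_anomaly=-10.0, dp=dp))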

How is Ts calculated? That comes, in a way, from the atmospheric GCM, but probably not in a way that readers might expect. So let’s have a look at the GCM then come back to this calculation of Ts.

Atmospheric GCM Simulations

There were three groups of atmospheric GCM simulations, with parameters selected to try and tease out which factors have the most impact.

Group One: high resolution GCM – 1.1º latitude and longitude and 20 atmospheric vertical levels, with fixed sea surface temperatures. So there is no ocean model; the ocean temperatures are prescribed. Within this group, four experiments:

  • A control experiment – modern day values
  • LGM (last glacial maximum) conditions for CO2 (note 1) and orbital parameters with
    • no ice
    • LGM ice extent but zero thickness
    • LGM ice extent and LGM thickness

So the idea is to compare results with and without the actual ice sheet so see how much impact orbital and CO2 values have vs the effect of the ice sheet itself – and then for the ice sheet to see whether the albedo or the elevation has the most impact. Why the elevation? Well, if an ice sheet is 1km thick then the surface temperature will be something like 6ºC colder. (Exactly how much colder is an interesting question because we don’t know what the lapse rate actually was). There will also be an effect on atmospheric circulation – you’ve stuck a “mountain range” in the path of wind so this changes the circulation.
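A back-of-envelope sketch of why the assumed lapse rate matters so much for the temperature at the top of an ice sheet (the numbers are illustrative only):

def ice_surface_temperature(t_sea_level, elevation_km, lapse_rate):
    # Temperature at the ice sheet surface, assuming a constant lapse rate (K/km)
    return t_sea_level - lapse_rate * elevation_km

# For a 3 km thick ice sheet, the choice of lapse rate alone shifts the surface
# temperature by 12 degC across a plausible range of lapse rates
for lapse in (4.0, 5.0, 6.5, 8.0):
    print(lapse, "K/km ->", ice_surface_temperature(t_sea_level=-5.0,
                                                    elevation_km=3.0,
                                                    lapse_rate=lapse), "degC")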

Each of the four simulations was run for 11 or 13 years and the last 10 years’ results used:

From Abe-Ouchi et al 2007

Figure 1

It’s clear from this simulation that the full result (left graphic) is mostly caused by the ice sheet (right graphic) rather than CO2, orbital parameters and the SSTs (middle graphic). And the next figure in the paper shows the breakdown between the albedo effect and the height of the ice sheet:

From Abe-Ouchi et al 2007

Figure 2 – same color legend as figure 1

Now, a lapse rate of 5 K/km was used. What happens if a lapse rate of 9 K/km is used instead? There were no GCM simulations run with different lapse rates.

..Other lapse rates could be used which vary depending on the altitude or location, while a lapse rate larger than 7 K/km or smaller than 4 K/km is inconsistent with the overall feature. This is consistent with the finding of Krinner and Genthon (1999), who suggest a lapse rate of 5.5 K/km, but is in contrast with other studies which have conventionally used lapse rates of 8 K/km or 6.5 K/km to drive the ice sheet models..

Group Two – medium resolution GCM 2.8º latitude and longitude and 11 atmospheric vertical levels, with a “slab ocean” – this means the ocean is treated as one temperature through the depth of some fixed layer, like 50m. So it is allowing the ocean to be there as a heat sink/source responding to climate, but no heat transfer through to a deeper ocean.

There were five simulations in this group, one control (modern day everything) and four with CO2 & orbital parameters at the LGM:

  • no ice sheet
  • LGM ice extent, but flat
  • 12 kyrs ago ice extent, but flat
  • 12 kyrs ago ice extent and height

So this group takes a slightly more detailed look at ice sheet impact. Not surprisingly the simulation results give intermediate values for the ice sheet extent at 12 kyrs ago.

Group Three – medium resolution GCM as in group two, and ice sheets either at present day or LGM extent, with nine simulations covering different orbital values and different CO2 values (present day, 280 or 200 ppm).

There was also some discussion of the impact of different climate models. I found this fascinating because the difference between CCSM and the other models appears to be as great as the difference in figure 2 (above) which identifies the albedo effect as more significant than the lapse rate effect:

From Abe-Ouchi et al 2007

Figure 3

And this naturally has me wondering about how much significance to put on the GCM simulation results shown in the paper. The authors also comment:

Based on these GCM results we conclude there remains considerable uncertainty over the actual size of the albedo effect.

Given there is also uncertainty over the lapse rate that actually occurred, it seems there is considerable uncertainty over everything.

Now let’s return to the ice sheet model, because so far we haven’t seen any output from the ice sheet model.

GCM Inputs into the Ice Sheet Model

The equation which calculates the change in accumulation on the ice sheet used a fairly arbitrary parameter dp, with (1+dp) raised to the power of Ts.

The ice sheet model has a 2 year time step. The GCM results don't provide Ts across the surface grid every 2 years; they are snapshots for certain conditions. The ice sheet model uses this calculation for Ts:

Ts = Tref + ΔTice + ΔTco2 + ΔTinsol + ΔTnonlinear

Tref is the reference temperature, which is present day climatology. The other ΔT (change in temperature) values are basically a linear interpolation between two values from the GCM simulations. Here is the ΔTCO2 value:

[Equation 6 from Abe-Ouchi et al 2007: ΔTCO2 obtained by linear interpolation between GCM snapshots run at two different CO2 values]

 

So think of it like this – we have found Ts at one higher value of CO2 and one lower value of CO2 from some snapshot GCM simulations. We plot a graph with CO2 on the x-axis and Ts on the y-axis, with just two points on the graph from these two experiments, and we draw a straight line between the two points.

To calculate Ts at, say, 50 kyrs ago we look up the CO2 value at 50 kyrs from ice core data, and read the value of ΔTCO2 off the straight line on the graph.
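In code, the whole "two points and a straight line" procedure is just a linear interpolation – a sketch of my reading of the method, with invented numbers for the GCM snapshot temperature anomalies:

def delta_t_co2(co2, co2_low, co2_high, dT_low, dT_high):
    # Linear interpolation of the GCM temperature response between two
    # snapshot simulations run at co2_low and co2_high
    frac = (co2 - co2_low) / (co2_high - co2_low)
    return dT_low + frac * (dT_high - dT_low)

# Invented snapshot anomalies: 0 degC at 280 ppm, -4 degC at 200 ppm
co2_50kyr_ago = 220.0     # would be looked up from an ice core record
print(delta_t_co2(co2_50kyr_ago, 200.0, 280.0, dT_low=-4.0, dT_high=0.0))   # -3.0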

Likewise for the other parameters. Here is ΔTinsol:

[Equation 7 from Abe-Ouchi et al 2007: ΔTinsol obtained by linear interpolation between GCM snapshots run with different orbital (insolation) values]

 

So the method is extremely basic. Of course the model needs something..

Now, given that we have inputs for accumulation on the ice sheet, the ice sheet model can run. Here are the results. The third graph (3) is the sea level from proxy results so is our best estimate of reality, with (4) providing model outputs for different parameters of d0 (“desertification” or aridity) and lapse rate, and (5) providing outputs for different parameters of albedo and lapse rate:

From Abe-Ouchi et al 2007

Figure 4

There are three main points of interest.

Firstly, small changes in the parameters cause huge changes in the final results. The idea of aridity over ice sheets as just a linear function of ice sheet size is very questionable itself. The idea of a constant lapse rate is extremely questionable. Together, using values that appear realistic, we can model much less ice sheet growth (sea level drop) or many times greater ice sheet growth than actually occurred.

Secondly, notice that the timing of the maximum ice sheet (lowest sea level) for realistic results shows sea level starting to rise around 12 kyrs ago, rather than the actual 18 kyrs ago. This might be due to the impact of orbital factors, which were at quite a low level (i.e., high latitude summer insolation was at quite a low level) when the last ice age finished, but have quite an impact in the model. Of course, we have covered this "problem" in a few previous articles in this series. In the context of this model it might be that the impact of the southern hemisphere leading the globe out of the last ice age is completely missing.

Thirdly – while this might be clear to some people, for many readers new to this kind of model it won't be obvious – the inputs to the model are themselves taken from the actual history. The model doesn't simulate the actual start and end of the last ice age "by itself". We feed into the GCM a few CO2 values. We feed into the GCM a few ice sheet extents and heights that (as best as can be reconstructed) actually occurred. The GCM gives us some temperature values for these snapshot conditions.

In the case of this ice sheet model, every 2 years (each time step of the ice sheet model) we “look up” the actual value of ice sheet extent and atmospheric CO2 and we linearly interpolate the GCM output temperatures for the current year. And then we crudely parameterize these values into some accumulation rate on the ice sheet.

Conclusion

This is our first foray into ice sheet models. It should be clear that the results are interesting but we are at a very early stage in modeling ice sheets.

The problems are:

  • the computational load required to run a GCM coupled with an ice sheet model over 120 kyrs is much too high, so it can’t be done
  • the resulting tradeoff uses a few GCM snapshot values to feed linearly interpolated temperatures into a parameterized accumulation equation
  • the effect of lapse rate on the results is extremely large and the actual value for lapse rate over ice sheets is very unlikely to be a constant and is also not known
  • our understanding of the fundamental equations for ice sheets is still at an early stage, as readers can see by reviewing the first two papers below, especially the second one

 Articles in this Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factors of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

References

Basal temperature evolution of North American ice sheets and implications for the 100-kyr cycle, SJ Marshall & PU Clark, GRL (2002) – free paper

North American Ice Sheet reconstructions at the Last Glacial Maximum, SJ Marshall, TS James, GKC Clarke, Quaternary Science Reviews (2002) – free paper

Climatic Conditions for modelling the Northern Hemisphere ice sheets throughout the ice age cycle, A Abe-Ouchi, T Segawa, and F Saito, Climate of the Past (2007) – free paper

Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume, Ayako Abe-Ouchi, Fuyuki Saito, Kenji Kawamura, Maureen E. Raymo, Jun’ichi Okuno, Kunio Takahashi & Heinz Blatter, Nature (2013) – paywall paper

Notes

Note 1 – the value of CO2 used in these simulations was 200 ppm, while CO2 at the LGM was actually 180 ppm. Apparently this value of 200 ppm was used in a major inter-comparison project (the PMIP), but I don’t know the reason why. PMIP = Paleoclimate Modelling Intercomparison Project, Joussaume and Taylor, 1995.

Read Full Post »

A while ago, in Part Three – Hays, Imbrie & Shackleton we looked at a seminal paper from 1976.

In that paper, the data now stretched back far enough in time for the authors to demonstrate something of great importance. They showed that changes in ice volume recorded by isotopes in deep ocean cores (see Seventeen – Proxies under Water I) had significant signals at the frequencies of obliquity, precession and one of the frequencies of eccentricity.

Obliquity is the change in the tilt of the earth's axis, on a period of around 40 kyrs. Precession is the change in where in the annual cycle the earth makes its closest approach to the sun (right now the closest approach is in NH winter), on a period of around 20 kyrs (see Four – Understanding Orbits, Seasons and Stuff).

Both of these involve significant redistributions of solar energy. Obliquity changes the amount of solar insolation received by the poles versus the tropics. Precession changes the amount of solar insolation at high latitudes in summer versus winter. (Neither changes total solar insolation). This was nicely in line with Milankovitch’s theory – for a recap see Part Three.
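To get a feel for the size of these redistributions, here is a simplified sketch of the standard daily-mean top-of-atmosphere insolation formula (see Part Four – Understanding Orbits, Seasons and Stuff for the background). Obliquity enters through the solar declination, while precession and eccentricity enter through the sun-earth distance; the orbital values below are illustrative, not a proper Berger-style reconstruction.

import numpy as np

S0 = 1361.0    # solar "constant" in W/m^2

def daily_mean_insolation(lat_deg, solar_lon_deg, obliquity_deg, ecc, perihelion_deg):
    # Daily-mean top-of-atmosphere insolation (W/m^2) at a given latitude and
    # position in the orbit (solar longitude: 90 = NH summer solstice)
    lat = np.radians(lat_deg)
    lam = np.radians(solar_lon_deg)
    obl = np.radians(obliquity_deg)
    dec = np.arcsin(np.sin(obl) * np.sin(lam))            # solar declination
    # Inverse-square distance factor from eccentricity and longitude of perihelion
    dist = (1.0 + ecc * np.cos(lam - np.radians(perihelion_deg))) / (1.0 - ecc ** 2)
    # Hour angle of sunrise/sunset, clipped to handle polar day and polar night
    h0 = np.arccos(np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0))
    return (S0 / np.pi) * dist ** 2 * (h0 * np.sin(lat) * np.sin(dec)
                                       + np.cos(lat) * np.cos(dec) * np.sin(h0))

# 65N at the NH summer solstice, for two illustrative orbital configurations:
# high obliquity with perihelion in NH summer, and low obliquity with perihelion in NH winter
print(daily_mean_insolation(65, 90, obliquity_deg=24.5, ecc=0.05, perihelion_deg=90))
print(daily_mean_insolation(65, 90, obliquity_deg=22.1, ecc=0.05, perihelion_deg=270))
# The difference is of order 100 W/m^2 at 65N in summer, even though obliquity and
# precession only redistribute the energy in latitude and season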

I’m going to call this part Theory A, and paraphrase it like this:

The waxing and waning of the ice sheets has 40 kyr and 20 kyr periods, which are caused by the changing distribution of solar insolation due to obliquity and precession.

The largest signal in ocean cores over the last 800 kyrs has a component of about 100 kyrs (with some variability). That is, the ice ages start and end with a period of about 100 kyrs. Eccentricity varies on time periods of 100 kyrs and 400 kyrs, but with a very small change in total insolation (see Part Four).

Hays et al produced a completely separate theory, which I’m going to call Theory B, and paraphrase it like this:

The start and end of the ice ages has a period of about 100 kyrs, which is caused by the changing eccentricity of the earth's orbit.

Theory A and Theory B are both in the same paper and are both theories that “link ice ages to orbital changes”. In their paper they demonstrated Theory A but did not prove or demonstrate Theory B. Unfortunately, Theory B is the much more important one.

Here is what they said:

The dominant 100,000 year climatic component has an average period close to, and is in phase with, orbital eccentricity. Unlike the correlations between climate and the higher frequency orbital variations (which can be explained on the assumption that the climate system responds linearly to orbital forcing) an explanation of the correlations between climate and eccentricity probably requires an assumption of non-linearity.

[Emphasis added].

The only quibble I have with the above paragraph is the word “probably”. This word should have been removed. There is no doubt. An assumption of non-linearity is required as a minimum.

Now why does it “probably” or “definitely” require an assumption of non-linearity? And what does that mean?

A linearity assumption is one where the output is proportional to the input. For example: apply twice the force to a vehicle and the acceleration doubles. Most things in the real world, and most things in climate, are non-linear. So, for example, double the (absolute) temperature of a body and the radiation it emits goes up by a factor of 16.

However, there isn’t a principle, an energy balance equation or even a climate model that can take this tiny change in incoming solar insolation over a 100 kyr period and cause the end of an ice age.

In fact, their statement wasn't so much "an assumption of non-linearity" as "some non-linear relationship that we are not currently able to model or demonstrate, some non-linear relationship we have yet to discover".

There is nothing wrong with their original statement as such (apart from “probably”), but an alternative way of writing from the available evidence could be:

The dominant 100,000 year climatic component has an average period close to, and is in phase with, orbital eccentricity. Unlike the correlations between climate and the higher frequency orbital variations.. an explanation of the correlations between climate and eccentricity is as yet unknown, remains to be demonstrated and there may in fact be no relationship at all.

Unfortunately, because Theory A and Theory B were in the same paper and because Theory A is well demonstrated and because there is no accepted alternative on the cause of the start and end of ice ages (there are alternative hypotheses around natural resonance) Theory B has become “well accepted”.

And because everyone familiar with climate science knows that Theory A is almost certainly true, when you point out that Theory B doesn’t have any evidence, many people are confused and wonder why you are rejecting well-proven theories.

In the series so far, except in occasional comments, I haven’t properly explained the separation between the two theories and this article is an attempt to clear that up.

Now I will produce a sufficient quantity of papers and quote their “summary of the situation so far” to demonstrate that there isn’t any support for Theory B. The only support is the fact that one component frequency of eccentricity is “similar” to the frequency of the ice age terminations/inceptions, plus the safety in numbers support of everyone else believing it.

One other comment on the paleoclimate papers that attempt to explain the 100 kyr period: it is the norm for published papers to introduce a new hypothesis. That doesn't make the new hypothesis correct.

So if I produce a paper, and quote the author’s summary of “the state of work up to now” and that paper then introduces their new hypothesis which claims to perhaps solve the mystery, I haven’t quoted the author’s summary out of context.

Let's take it as read that lots of climate scientists think they have come up with something new. What we are interested in is their review of the current state of the field and the evidence they cite in support of Theory B.

Before producing the papers I also want to explain why I think the idea behind Theory B is so obviously flawed, and not just because 38 years after Hays, Imbrie & Shackleton the mechanism is still a mystery.

Why Theory B is Unsupportable

If a non-linear mechanism can link a 0.1% change in insolation over a long period to the end of an ice age, that mechanism must also explain why significant temperature fluctuations in high latitude regions during glacials do not cause a termination.

Here are two high resolution examples from a Greenland ice core (NGRIP) during the last glaciation:

From Wolff et al 2010

The “non-linearity” hypothesis has more than one hill to climb. This second challenge is even more difficult than the first.

A tiny change in total insolation causes, via a yet to be determined non-linear effect, the end of each ice age, but this same effect does not amplify frequent large temperature changes of long duration to end an ice age (note 1).

Food for thought.

Theory C Family

Many papers which propose orbital reasons for ice age terminations do not propose eccentricity variations as the cause. Instead, they attribute terminations to specific insolation changes at specific latitudes, or various combinations of orbital factors completely unrelated to eccentricity variations. See Part Six – “Hypotheses Abound”.

Of course, one of these might be right. For now I will call them the Theory C Family, so we remember that Theory C is not one theory, but a whole range of mostly incompatible theories.

But remember where the orbital hypothesis for ice age termination came from – the 100,000 year period of eccentricity variation “matching” (kind of matching) the 100,000 year period of the ice ages.

The Theory C Family does not have that starting point.

Papers

So let’s move onto papers. I started by picking off papers from the right category in my mind map that might have something to say, then I opened up every one of about 300 papers in my ice ages folder (alphabetical by author) and checked to see whether they had something to say on the cause of ice ages in the abstract or introduction. Most papers don’t have a comment because they are about details like d18O proxies, or the CO2 concentration in the Vostok ice core, etc. That’s why there aren’t 300 citations here.

And bold text within a citation is added by me for emphasis.

I looked for their citations (evidence) to back up any claim that orbital variations caused ice age terminations. In some cases I pull up what the citations said.

—–

Last Interglacial Climates, Kukla et al (2002), by a cast of many including the famous Wallace S. Broecker, John Imbrie and Nicholas J. Shackleton:

At the end of the last interglacial period, over 100,000 yr ago, the Earth’s environments, similar to those of today, switched into a profoundly colder glacial mode. Glaciers grew, sea level dropped, and deserts expanded. The same transition occurred many times earlier, linked to periodic shifts of the Earth’s orbit around the Sun. The mechanism of this change, the most important puzzle of climatology, remains unsolved.

Note that “linked to periodic shifts of the Earth’s orbit” is followed by an “unknown mechanism”. Two of the authors were the coauthors of the classic 1976 paper that is most commonly cited as evidence for Theory B.

———

Millennial-scale variability during the last glacial: The ice core record, Wolff, Chappellaz, Blunier, Rasmussen & Svensson (2010)

The most significant climate variability in the Quaternary record is the alternation between glacial and interglacial, occurring at approximately 100 ka periodicity in the most recent 800 ka. This signal is of global scale, and observed in all climate records, including the long Antarctic ice cores (Jouzel et al., 2007a) and marine sediments (Lisiecki and Raymo, 2005). There is a strong consensus that the underlying cause of these changes is orbital (i.e. due to external forcing from changes in the seasonal and latitudinal pattern of insolation), but amplified by a whole range of internal factors (such as changes in greenhouse gas concentration and in ice extent).

Note the lack of citation for the underlying causes being orbital. However, as we will see, there is “strong consensus”. In this specific paper from the words used I believe the authors are supporting the Theory C Family, not Theory B.

———

The last glacial cycle: transient simulations with an AOGCM, Robin Smith & Jonathan Gregory (2012)

It is generally accepted that the timing of glacials is linked to variations in solar insolation that result from the Earth’s orbit around the sun (Hays et al. 1976; Huybers and Wunsch 2005). These solar radiative anomalies must have been amplified by feedback processes within the climate system, including changes in atmospheric greenhouse gas (GHG) concentrations (Archer et al. 2000) and ice-sheet growth (Clark et al. 1999), and whilst hypotheses abound as to the details of these feedbacks, none is without its detractors and we cannot yet claim to know how the Earth system produced the climate we see recorded in numerous proxy records.

I think I will classify this one as “Still a mystery”.

Note that support for “linkage to variations in solar insolation” consists of Hays et al 1976 – Theory B – and Huybers and Wunsch 2005 who propose a contradictory theory (obliquity) – Theory C Family. In this case they absolve themselves by pointing out that all the theories have flaws.

———

The timing of major climate terminations, ME Raymo (1997)

For the past 20 years, the Milankovitch hypothesis, which holds that the Earth’s climate is controlled by variations in incoming solar radiation tied to subtle yet predictable changes in the Earth’s orbit around the Sun [Hays et al., 1976], has been widely accepted by the scientific community. However, the degree to which and the mechanisms by which insolation variations control regional and global climate are poorly understood. In particular, the “100-kyr” climate cycle, the dominant feature of nearly all climate records of the last 900,000 years, has always posed a problem to the Milankovitch hypothesis..

..time interval between terminations is not constant; it varies from 84 kyr between Terminations IV and V to 120 kyr between Terminations III and II.

“Still a mystery”. (Maureen Raymo has written many papers on ice ages, is the coauthor of the LR04 ocean core database and cannot be considered an outlier). Her paper claims she solves the problem:

In conclusion, it is proposed that the interaction between obliquity and the eccentricity-modulation of precession as it controls northern hemisphere summer radiation is responsible for the pattern of ice volume growth and decay observed in the late Quaternary.

Solution was unknown, but new proposed solution is from the Theory C Family.

———

Glacial termination: sensitivity to orbital and CO2 forcing in a coupled climate system model, Yoshimori, Weaver, Marshall & Clarke (2001)

Glaciation (deglaciation) is one of the most extreme and fundamental climatic events in Earth’s history.. As a result, fluctuations in orbital forcing (e.g. Berger 1978; Berger and Loutre 1991) have been widely recognised as the primary triggers responsible for the glacial-interglacial cycles (Berger 1988; Bradley 1999; Broecker and Denton 1990; Crowley and North 1991; Imbrie and Imbrie 1979). At the same time, these studies revealed the complexity of the climate system, and produced several paradoxes which cannot be explained by a simple linear response of the climate system to orbital forcing.

At this point I was interested to find out how well the references cited (Berger 1988; Bradley 1999; Broecker and Denton 1990; Crowley and North 1991; Imbrie and Imbrie 1979) backed up the claim that fluctuations in orbital forcing are the primary triggers for the glacial-interglacial cycles.

Broecker & Denton (1990) is in Scientific American which I don’t think counts as a peer-reviewed journal (even though a long time ago I subscribed to it and thought it was a great magazine). I was able to find the abstract only, which coincides with their peer-reviewed paper The Role of Ocean-Atmosphere Reorganization in Glacial Cycles the same year in Quaternary Science Reviews, so I’ll assume they are media hounds promoting their peer-reviewed paper for a wider audience and look at the peer-reviewed paper. After commenting on the problems:

Such a linkage cannot explain synchronous climate changes of similar severity in both polar hemispheres. Also, it cannot account for the rapidity of the transition from full glacial toward full interglacial conditions. If glacial climates are driven by changes in seasonality, then another linkage must exist.

they state:

We propose that Quaternary glacial cycles were dominated by abrupt reorganizations of the ocean-atmosphere system driven by orbitally induced changes in fresh water transports which impact salt structure in the sea. These reorganizations mark switches between stable modes of operation of the ocean-atmosphere system. Although we think that glacial cycles were driven by orbital change, we see no basis for rejecting the possibility that the mode changes are part of a self-sustained internal oscillation that would operate even in the absence of changes in the Earth’s orbital parameters. If so, as pointed out by Saltzman et al. (1984), orbital cycles can merely modulate and pace a self-oscillating climate system.

So is this paper evidence for Theory B or for the Theory C Family? “..we think that..” “..we see no basis for rejecting the possibility ..self-sustained internal oscillation”. Is this evidence for the astronomical theory?

I can’t access Milankovitch theory and climate, Berger 1988 (thanks, Reviews of Geophysics!). If someone has it, please email it to me at scienceofdoom – you know what goes here – gmail.com. The other two references are books, so I can’t access them. Crowley & North 1991 is Paleoclimatology, Vol. 16 of Oxford Monographs on Geology and Geophysics, OUP. Imbrie & Imbrie 1979 is Ice Ages: Solving the Mystery.

———-

Glacial terminations as southern warmings without northern control, E. W. Wolff, H. Fischer and R. Röthlisberger (2009)

However, the reason for the spacing and timing of interglacials, and the sequence of events at major warmings, remains obscure.

“Still a mystery”. This is a little different from Wolff’s comment in the paper above. Elsewhere (see his comments cited in Eleven – End of the Last Ice age) he has stated that ice age terminations are not understood:

Between about 19,000 and 10,000 years ago, Earth emerged from the last glacial period. The whole globe warmed, ice sheets retreated from Northern Hemisphere continents and atmospheric composition changed significantly. Many theories try to explain what triggered and sustained this transformation (known as the glacial termination), but crucial evidence to validate them is lacking.

———-

The Last Glacial Termination, Denton, Anderson, Toggweiler, Edwards, Schaefer & Putnam (2009)

A major puzzle of paleoclimatology is why, after a long interval of cooling climate, each late Quaternary ice age ended with a relatively short warming leg called a termination. We here offer a comprehensive hypothesis of how Earth emerged from the last global ice age..

“Still a mystery”

———–

Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation, Shakun, Clark, He, Marcott, Mix, Zhengyu Liu, Otto-Bliesner, Schmittner & Bard (2012)

Understanding the causes of the Pleistocene ice ages has been a significant question in climate dynamics since they were discovered in the mid-nineteenth century. The identification of orbital frequencies in the marine 18O/16O record, a proxy for global ice volume, in the 1970s demonstrated that glacial cycles are ultimately paced by astronomical forcing.

The citation is Hays, Imbrie & Shackleton 1976. Theory B with no support.

————

Northern Hemisphere forcing of Southern Hemisphere climate during the last deglaciation, He, Shakun, Clark, Carlson, Liu, Otto-Bliesner & Kutzbach (2013)

According to the Milankovitch theory, changes in summer insolation in the high-latitude Northern Hemisphere caused glacial cycles through their impact on ice-sheet mass balance. Statistical analyses of long climate records supported this theory, but they also posed a substantial challenge by showing that changes in Southern Hemisphere climate were in phase with or led those in the north.

The citation is Hays, Imbrie & Shackleton 1976. (Many of the same authors in this and the paper above).

————-

Eight glacial cycles from an Antarctic ice core, EPICA Community Members (2004)

The climate of the last 500,000 years (500 kyr) was characterized by extremely strong 100-kyr cyclicity, as seen particularly in ice-core and marine-sediment records. During the earlier part of the Quaternary (before 1 million years ago; 1 Myr BP), cycles of 41 kyr dominated. The period in between shows intermediate behaviour, with marine records showing both frequencies and a lower amplitude of the climate signal. However, the reasons for the dominance of the 100-kyr (eccentricity) over the 41-kyr (obliquity) band in the later part of the record, and the amplifiers that allow small changes in radiation to cause large changes in global climate, are not well understood.

Is this accepting Theory B or not?

————–

Now onto the papers in alphabetical order..

Climatic Conditions for modelling the Northern Hemisphere ice sheets throughout the ice age cycle, Abe-Ouchi, Segawa & Saito (2007)

To explain why the ice sheets in the Northern Hemisphere grew to the size and extent that has been observed, and why they retreated quickly at the termination of each 100 kyr cycle is still a challenge (Tarasov and Peltier, 1997a; Berger et al., 1998; Paillard, 1998; Paillard and Parrenin, 2004). Although it is now broadly accepted that the orbital variations of the Earth influence climate changes (Milankovitch, 1930; Hays et al., 1976; Berger, 1978), the large amplitude of the ice volume changes and the geographical extent need to be reproduced by comprehensive models which include nonlinear mechanisms of ice sheet dynamics (Raymo, 1997; Tarasov and Peltier, 1997b; Paillard, 2001; Raymo et al., 2006).

The papers cited for this broad agreement are Hays et al 1976, once again, and Berger 1978, who says:

It is not the aim of this paper to draw definitive conclusions about the astronomical theory of paleoclimates but simply to provide geologists with accurate theoretical values of the earth’s orbital elements and insolation..

Berger does go on to comment on eccentricity:

From Berger 1978 (excerpt – figure not reproduced here)

And this excerpt simply notes, again, that the period of eccentricity is “similar” to the period of the ice age terminations.

Theory B with no support.

——

Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume, Abe-Ouchi, Saito, Kawamura, Raymo, Okuno, Takahashi & Blatter (2013)

Milankovitch theory proposes that summer insolation at high northern latitudes drives the glacial cycles, and statistical tests have demonstrated that the glacial cycles are indeed linked to eccentricity, obliquity and precession cycles. Yet insolation alone cannot explain the strong 100,000-year cycle, suggesting that internal climatic feedbacks may also be at work. Earlier conceptual models, for example, showed that glacial terminations are associated with the build-up of Northern Hemisphere ‘excess ice’, but the physical mechanisms underpinning the 100,000-year cycle remain unclear.

The citations for the statistical tests are Lisiecki 2010 and Huybers 2011.

Huybers 2011 claims that obliquity and precession (not eccentricity) are linked to deglaciations. This is a development of his earlier, very interesting 2007 hypothesis (Glacial variability over the last two million years: an extended depth-derived agemodel, continuous obliquity pacing, and the Pleistocene progression – to which we will return) that obliquity is the prime factor (not necessarily the cause) in deglaciations.

Here is what Huybers says in his 2011 paper, Combined obliquity and precession pacing of late Pleistocene deglaciations:

The cause of these massive shifts in climate remains unclear not for lack of models, of which there are now over thirty, but for want of means to choose among them. Previous statistical tests have demonstrated that obliquity paces the 100-kyr glacial cycles [citations are his 2005 paper with Carl Wunsch and his 2007 paper], helping narrow the list of viable mechanisms, but have been inconclusive with respect to precession (that is, P > 0.05) because of small sample sizes and uncertain timing..

In Links between eccentricity forcing and the 100,000-year glacial cycle, (2010), Lisiecki says:

Variations in the eccentricity (100,000 yr), obliquity (41,000 yr) and precession (23,000 yr) of Earth’s orbit have been linked to glacial–interglacial climate cycles. It is generally thought that the 100,000-yr glacial cycles of the past 800,000 yr are a result of orbital eccentricity [1–4]. However, the eccentricity cycle produces negligible 100-kyr power in seasonal or mean annual insolation, although it does modulate the amplitude of the precession cycle.

Alternatively, it has been suggested that the recent glacial cycles are driven purely by the obliquity cycle [5–7]. Here I use statistical analyses of insolation and the climate of the past five million years to characterize the link between eccentricity and the 100,000-yr glacial cycles. Using cross-wavelet phase analysis, I show that the relative phase of eccentricity and glacial cycles has been stable since 1.2 Myr ago, supporting the hypothesis that 100,000-yr glacial cycles are paced [8–10] by eccentricity [4,11]. However, I find that the time-dependent 100,000-yr power of eccentricity has been anticorrelated with that of climate since 5 Myr ago, with strong eccentricity forcing associated with weaker power in the 100,000-yr glacial cycle.

I propose that the anticorrelation arises from the strong precession forcing associated with strong eccentricity forcing, which disrupts the internal climate feedbacks that drive the 100,000-yr glacial cycle. This supports the hypothesis that internally driven climate feedbacks are the source of the 100,000-yr climate variations.

So she accepts that Theory B is the generally held view, notes that some Theory C Family advocates are out there, and then provides a new hybrid solution of her own.

References for the orbital eccentricity hypothesis [1–4] include Hays et al 1976 and Raymo 1997, cited above. However, Raymo didn’t think it had been demonstrated prior to her 1997 paper, and in that paper she introduces the hypothesis that what matters is primarily ice sheet size, plus obliquity and precession modulated by eccentricity.

References for the obliquity hypothesis [5-7] include the Huybers & Wunsch 2005 and Huybers 2007 covered just before this reference.

So in summary – going back to how we dragged up these references – Abe-Ouchi and co-authors provide two citations in support of the statistical link between orbital variations and deglaciation. One citation claims primarily obliquity with maybe a place for precession – no link to eccentricity. Another citation claims a new theory for eccentricity as a phase-locking mechanism to an internal climate process.

These are two mutually exclusive ideas. But at least both papers attempted to prove their (exclusive) ideas.
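As an aside on the technical point in the Lisiecki quote above – that eccentricity modulates the amplitude of the precession cycle while producing negligible direct 100-kyr power in insolation – this is easy to demonstrate with a synthetic signal. The sketch below (Python, with purely illustrative amplitudes, not a real insolation calculation) amplitude-modulates a 23 kyr “precession” carrier at 100 kyr and looks at the spectrum:

# Synthetic demonstration: a 23 kyr carrier amplitude-modulated at 100 kyr
# puts spectral power into sidebands near 19 and 30 kyr, not at 100 kyr.
import numpy as np

dt = 1.0                                   # 1 kyr sampling
t = np.arange(0, 2048, dt)                 # 2048 kyr of synthetic "time"
carrier = np.cos(2 * np.pi * t / 23)       # stand-in for precession
envelope = 1 + 0.5 * np.cos(2 * np.pi * t / 100)   # stand-in for eccentricity modulation
signal = envelope * carrier

freqs = np.fft.rfftfreq(len(t), d=dt)      # cycles per kyr
power = np.abs(np.fft.rfft(signal))**2
strongest = freqs[np.argsort(power)[-3:]]  # the three strongest spectral lines
print(np.sort(1 / strongest))              # periods of roughly 19, 23 and 30 kyr

There is no spectral line at 100 kyr at all – the modulation shows up as ~19 and ~30 kyr sidebands of the 23 kyr carrier. Something nonlinear has to happen before 100 kyr power can appear in the climate’s response, which is exactly the point these papers keep circling around.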

——

Equatorial insolation: from precession harmonics to eccentricity frequencies, Berger, Loutre, & Mélice (2006):

Since the paper by Hays et al. (1976), spectral analyses of climate proxy records provide substantial evidence that a fraction of the climatic variance is driven by insolation changes in the frequency ranges of obliquity and precession variations. However, it is the variance components centered near 100 kyr which dominate most Upper Pleistocene climatic records, although the amount of insolation perturbation at the eccentricity driven periods close to 100-kyr (mainly the 95 kyr- and 123 kyr-periods) is much too small to cause directly a climate change of ice-age amplitude. Many attempts to find an explanation to this 100-kyr cycle in climatic records have been made over the last decades.

“Still a mystery”.

——

Multistability and hysteresis in the climate-cryosphere system under orbital forcing, Calov & Ganopolski (2005)

In spite of considerable progress in studies of past climate changes, the nature of vigorous climate variations observed during the past several million years remains elusive. A variety of different astronomical theories, among which the Milankovitch theory [Milankovitch, 1941] is the best known, suggest changes in Earth’s orbital parameters as a driver or, at least, a pacemaker of glacial-interglacial climate transitions. However, the mechanisms which translate seasonal and strongly latitude-dependent variations in the insolation into the global-scale climate shifts between glacial and interglacial climate states are the subject of debate.

“Still a mystery”

——

Ice Age Terminations, Cheng, Edwards, Broecker, Denton, Kong, Wang, Zhang, Wang (2009)

The ice-age cycles have been linked to changes in Earth’s orbital geometry (the Milankovitch or Astronomical theory) through spectral analysis of marine oxygen-isotope records (3), which demonstrate power in the ice-age record at the same three spectral periods as orbitally driven changes in insolation. However, explaining the 100 thousand-year (ky) recurrence period of ice ages has proved to be problematic because although the 100-ky cycle dominates the ice-volume power spectrum, it is small in the insolation spectrum. In order to understand what factors control ice age cycles, we must know the extent to which terminations are systematically linked to insolation and how any such linkage can produce a non-linear response by the climate system at the end of ice ages.

“Still a mystery”. This paper claims (their new work) that terminations are all about high latitude NH insolation. They state, for the hypothesis of the paper:

In all four cases, observations are consistent with a classic Northern Hemisphere summer insolation intensity trigger for an initial retreat of northern ice sheets.

This is similar to Northern Hemisphere forcing of climatic cycles in Antarctica over the past 360,000 years, Kawamura et al (2007) – not cited here because they didn’t make a statement about “the problem so far”.

——

Orbital forcing and role of the latitudinal insolation/temperature gradient, Basil Davis & Simon Brewer (2009)

Orbital forcing of the climate system is clearly shown in the Earth’s record of glacial–interglacial cycles, but the mechanism underlying this forcing is poorly understood.

I’m not sure whether to classify this one as “Still a mystery”, Theory B or Theory C Family.

——

Evidence for Obliquity Forcing of Glacial Termination II, Drysdale, Hellstrom, Zanchetta, Fallick, Sánchez Goñi, Couchoud, McDonald, Maas, Lohmann & Isola (2009)

During the Late Pleistocene, the period of glacial-to-interglacial transitions (or terminations) has increased relative to the Early Pleistocene [~100 thousand years (ky) versus 40 ky]. A coherent explanation for this shift still eludes paleoclimatologists (3). Although many different models have been proposed (4), the most widely accepted one invokes changes in the intensity of high-latitude Northern Hemisphere summer insolation (NHSI). These changes are driven largely by the precession of the equinoxes (5), which produces relatively large seasonal and hemispheric insolation intensity anomalies as the month of perihelion shifts through its ~23-ky cycle.

Their “widely accepted” theory is from the Theory C Family. This is a different theory from the “widely accepted” Theory B. Perhaps both are “widely accepted”, hopefully by different groups of scientists.

——

The role of orbital forcing, carbon dioxide and regolith in 100 kyr glacial cycles, Ganopolski & Calov (2011)

The origin of the 100 kyr cyclicity, which dominates ice volume variations and other climate records over the past million years, remains debatable..

..One of the major challenges to the classical Milankovitch theory is the presence of 100 kyr cycles that dominate global ice volume and climate variability over the past million years (Hays et al., 1976; Imbrie et al., 1993; Paillard, 2001).

This periodicity is practically absent in the principal “Milankovitch forcing” – variations of summer insolation at high latitudes of the Northern Hemisphere (NH).

The eccentricity of Earth’s orbit does contain periodicities close to 100 kyr and the robust phase relationship between glacial cycles and 100-kyr eccentricity cycles has been found in the paleoclimate records (Hays et al., 1976; Berger et al., 2005; Lisiecki, 2010). However, the direct effect of the eccentricity on Earth’s global energy balance is very small.

Moreover, eccentricity variations are dominated by a 400 kyr cycle which is also seen in some older geological records (e.g. Zachos et al., 1997), but is practically absent in the frequency spectrum of the ice volume variations for the last million years.

In view of this long-standing problem, it was proposed that the 100 kyr cycles do not originate directly from the orbital forcing but rather represent internal oscillations in the climate-cryosphere (Gildor and Tziperman, 2001) or climate-cryosphere-carbonosphere system (e.g. Saltzman and Maasch, 1988; Paillard and Parrenin, 2004), which can be synchronized (phase locked) to the orbital forcing (Tziperman et al., 2006).

Alternatively, it was proposed that the 100 kyr cycles result from the terminations of ice sheet buildup by each second or third obliquity cycle (Huybers and Wunsch, 2005) or each fourth or fifth precessional cycle (Ridgwell et al., 1999) or they originate directly from a strong, nonlinear, climate-cryosphere system response to a combination of precessional and obliquity components of the orbital forcing (Paillard, 1998).

“Still a mystery”.
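A small aside on the “every second or third obliquity cycle” and “every fourth or fifth precessional cycle” ideas quoted above: the arithmetic of why such bundling lands near 100 kyr is easy to check (a trivial Python sketch, nothing more):

# Bundles of obliquity (41 kyr) and precession (~23 kyr) cycles near 100 kyr
for period, counts in [(41, (2, 3)), (23, (4, 5))]:
    for n in counts:
        print(f"{n} x {period} kyr = {n * period} kyr")
# 2 x 41 = 82, 3 x 41 = 123, 4 x 23 = 92, 5 x 23 = 115 -- skipping beats
# irregularly averages out near ~100 kyr, which is the essence of these
# "bundling" explanations.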

——–

Modeling the Climatic Response to Orbital Variations, Imbrie & Imbrie (1980)

This is not to say that all important questions have been answered. In fact, one purpose of this article is to contribute to the solution of one of the remaining major problems: the origin and history of the 100,000-year climatic cycle.

At least over the past 600,000 years, almost all climatic records are dominated by variance components in a narrow frequency band centered near a 100,000-year cycle (5-8, 12, 21, 38). Yet a climatic response at these frequencies is not predicted by the Milankovitch version of the astronomical theory – or any other version that involves a linear response (5, 6).

This paper is worth citing because the first author is a co-author of Hays et al 1976. For interest, let’s look at what they attempt to demonstrate in their paper. They take the approach of producing different (simple) models with orbital forcing, to try to reproduce the geological record:

The goal of our modeling effort has been to simulate the climatic response to orbital variations over the past 500 kyrs. The resulting model fails to simulate four important aspects of this record. It fails to produce sufficient 100k power; it produces too much 23K and 19K power; it produces too much 413k power and it loses its match with the record around the time of the last 413k eccentricity minimum..

All of these failures are related to a fundamental shortcoming in the generation of 100k power.. Indeed it is possible that no function will yield a good simulation of the entire 500 kyr record under consideration here, because nonorbitally forced high-frequency fluctuations may have caused the system to flip or flop in an unpredictable fashion. This would be an example of Lorenz’s concept of an almost intransitive system..

..Progress in this direction will indicate what long-term variations need to be explained within the framework of a stochastic model and provide a basis for estimating the degree of unpredictability in climate.
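For readers curious what these “simple models with orbital forcing” look like in practice, here is a minimal sketch of the general kind of single-variable nonlinear response model discussed in this literature: an ice-volume variable relaxing toward an orbitally driven target, with a faster response during melting than during growth. The forcing and time constants below are illustrative assumptions, not the calibrated Imbrie & Imbrie model:

# Minimal single-variable nonlinear response model (illustrative, not the
# calibrated Imbrie & Imbrie 1980 model): ice volume y relaxes toward an
# orbitally driven target, with a shorter time constant during melting.
import numpy as np

dt = 0.1                                   # time step [kyr]
t = np.arange(0, 500, dt)                  # 500 kyr of model time
# Stand-in "orbital forcing": 23 kyr precession modulated at 100 kyr, plus 41 kyr obliquity
f = ((1 + 0.5 * np.cos(2 * np.pi * t / 100)) * np.cos(2 * np.pi * t / 23)
     + 0.5 * np.cos(2 * np.pi * t / 41))

tau_grow, tau_melt = 30.0, 8.0             # asymmetric response times [kyr] (assumed)
y = np.zeros_like(t)                       # ice volume, arbitrary units
for i in range(1, len(t)):
    target = -f[i]                         # high insolation -> low ice target
    tau = tau_melt if target < y[i - 1] else tau_grow
    y[i] = y[i - 1] + dt * (target - y[i - 1]) / tau
# y now contains the modelled ice-volume history

The asymmetry in response times is the nonlinearity: it is what allows any power to appear at the 100 kyr modulation period at all. Whether a model of this kind produces enough 100 kyr power, and not too much at 19, 23 and 413 kyr, is exactly what Imbrie & Imbrie test – and, as quoted above, their answer was no.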

——

On the structure and origin of major glaciation cycles, Imbrie, Boyle, Clemens, Duffy, Howard, Kukla, Kutzbach, Martinson, McIntyre, Mix, Molfino, Morley, Peterson, Pisias, Prell, Raymo, Shackleton & Toggweiler (1992)

It is now widely believed that these astronomical influences, through their control of the seasonal and latitudinal distribution of incident solar radiation, either drive the major climate cycles externally or set the phase of oscillations that are driven internally..

..In this paper we concentrate on the 23-kyr and 41-kyr cycles of glaciation. These prove to be so strongly correlated with large changes in seasonal radiation that we regard them as continuous, essentially linear responses to the Milankovitch forcing. In a subsequent paper we will remove these linearly forced components from each time series and examine the residual response. The residual response is dominated by a 100-kyr cycle, which has twice the amplitude of the 23- and 41-kyr cycles combined. In the band of periods near 100 kyr, variations in radiation correlated with climate are so small, compared with variations correlated with the two shorter climatic cycles, that the strength of the 100-kyr climate cycle must result from the channeling of energy into this band by mechanisms operating within the climate system itself.

In Part 2 – Imbrie et al 1993, same authors – they highlight in more detail the problem of explaining the 100 kyr period:

1. One difficulty in finding a simple Milankovitch explanation is that the amplitudes of all 100-kyr radiation signals are very small [Hays et al., 1976]. As an example, the amplitude of the 100-kyr radiation cycle at June 65N (a signal often used as a forcing in Milankovitch theories) is only 2 W/m² (Figure 1). This is 1 order of magnitude smaller than the same insolation signal in the 23- and 41-kyr bands, yet the system’s response in these two bands combined has about half the amplitude observed at 100 kyr.

2. Another fundamental difficulty is that variations in eccentricity are not confined to periods near 100 kyr. In fact, during the late Pleistocene, eccentricity variations at periods near 100 kyr are of the same order of magnitude as those at 413 kyr.. yet the d18O record for this time interval has no corresponding spectral peak near 400 kyr..

3. The high coherency observed between 100 kyr eccentricity and d18O signals is an average that hides significant mismatches, notably about 400 kyrs ago.

Their proposed solution:

In our model, the coupled system acts as a nonlinear amplifier that is particularly sensitive to eccentricity-driven modulations in the 23,000-year sea level cycle. During an interval when sea level is forced upward from a major low stand by a Milankovitch response acting either alone or in combination with an internally driven, higher-frequency process, ice sheets grounded on continental shelves become unstable, mass wasting accelerates, and the resulting deglaciation sets the phase of one wave in the train of 100 kyr oscillations.

This doesn’t really appear to be Theory B.

——

Orbital forcing of Arctic climate: mechanisms of climate response and implications for continental glaciation, Jackson & Broccoli (2003)

The growth and decay of terrestrial ice sheets during the Quaternary ultimately result from the effects of changes in Earth’s orbital geometry on climate system processes. This link is convincingly established by Hays et al. (1976) who find a correlation between variations of terrestrial ice volume and variations in Earth’s orbital eccentricity, obliquity, and longitude of the perihelion.

Hays et al 1976. Theory B with no support.

——

A causality problem for Milankovitch, Karner & Muller (2000)

We can conclude that the standard Milankovitch insolation theory does not account for the terminations of the ice ages. That is a serious and disturbing conclusion by itself. We can conclude that models that attribute the terminations to large insolation peaks (or, equivalently, to peaks in the precession parameter), such as the recent one by Raymo (23), are incompatible with the observations.

I’ll take this as “Still a mystery”.

——

Linear and non-linear response of late Neogene glacial cycles to obliquity forcing and implications for the Milankovitch theory, Lourens, Becker, Bintanja, Hilgen, Tuenter, van de Wal & Ziegler (2010)

Through the spectral analyses of marine oxygen isotope (d18O) records it has been shown that ice-sheets respond both linearly and non-linearly to astronomical forcing.

References in support of this statement include Imbrie et al 1992 & Imbrie et al 1993 that we reviewed above, and Pacemaking the Ice Ages by Frequency Modulation of Earth’s Orbital Eccentricity, JA Rial (1999):

The theory finds support in the fact that the spectra of the d18O records contain some of the same frequencies as the astronomical variations (2–4), but a satisfactory explanation of how the changes in orbital eccentricity are transformed into the 100-ky quasi-periodic fluctuations in global ice volume indicated by the data has not yet been found (5).

For interest, the claim for the new work in this paper:

Evidence from power spectra of deep-sea oxygen isotope time series suggests that the climate system of Earth responds nonlinearly to astronomical forcing by frequency modulating eccentricity-related variations in insolation. With the help of a simple model, it is shown that frequency modulation of the approximate 100,000-year eccentricity cycles by the 413,000-year component accounts for the variable duration of the ice ages, the multiple-peak character of the time series spectra, and the notorious absence of significant spectral amplitude at the 413,000-year period. The observed spectra are consistent with the classic Milankovitch theories of insolation..

So if we consider the three references they provide in support of the “astronomical hypothesis”, the latest one says that a solution to the 100 kyr problem has not yet been found – although, of course, this 1999 paper gives it its own best shot. Rial (1999) clearly doesn’t think that Imbrie et al 1992/1993 solved the problem.

And, of course, Rial (1999) proposes a different solution to Imbrie et al 1992/1993.

——

Dynamics between order and chaos in conceptual models of glacial cycles, Takahito Mitsui & Kazuyuki Aihara, Climate Dynamics (2013)

Hays et al. (1976) presented strong evidence for astronomical theories of ice ages. They found the primary frequencies of astronomical forcing in the geological spectra of marine sediment cores. However, the dominant frequency in geological spectra is approximately 1/100 kyr⁻¹, although this frequency component is negligible in the astronomical forcing. This is referred to as the ‘100 kyr problem.’

However, the linear response cannot appropriately account for the 100 kyr periodicity (Hays et al. 1976).

Ghil (1994) explained the appearance of the 100 kyr periodicity as a nonlinear resonance to the combination tone 1/109 kyr⁻¹ between precessional frequencies 1/19 and 1/23 kyr⁻¹. Contrary to the linear resonance, the nonlinear resonance can occur even if the forcing frequencies are far from the internal frequency of the response system.

Benzi et al. (1982) proposed stochastic resonance as a mechanism of the 100 kyr periodicity, where the response to small external forcing is amplified by the effect of noise.

Tziperman et al. (2006) proposed that the timing of deglaciations is set by the astronomical forcing via the phase-locking mechanism.. De Saedeleer et al. (2013) suggested generalized synchronization (GS) to describe the relation between the glacial cycles and the astronomical forcing. GS means that there is a functional relation between the climate state and the state of the astronomical forcing. They also showed that the functional relation may not be unique for a certain model.

However, the nature of the relation remains to be elucidated.

“Still a mystery”.
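As a quick check on the “combination tone” arithmetic quoted above, the difference between the two precessional frequencies does indeed sit close to 1/109 kyr⁻¹ (a one-line Python check):

# Difference ("combination") tone of the 19 and 23 kyr precession frequencies
f_combination = 1 / 19 - 1 / 23
print(1 / f_combination)   # ~109.25 kyr, i.e. in the neighbourhood of the 100 kyr band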

——

Glacial cycles and orbital inclination, Richard Muller & Gordon MacDonald, Nature (1995)

According to the Milankovitch theory, the 100 kyr glacial cycle is caused by changes in insolation (solar heating) brought about by variations in the eccentricity of the Earth’s orbit. There are serious difficulties with this theory: the insolation variations appear to be too small to drive the cycles and a strong 400 kyr modulation predicted by the theory is not present..

We suggest that a radical solution is necessary to solve these problems, and we propose that the 100 kyr glacial cycle is caused, not by eccentricity, but by a previously ignored parameter: the orbital inclination, the tilt of the Earth’s orbital plane..

“Still a mystery”, with the new solution of a member of the Theory C Family.

——

Terminations VI and VIII (∼ 530 and ∼ 720 kyr BP) tell us the importance of obliquity and precession in the triggering of deglaciations, F. Parrenin & D. Paillard (2012)

The main variations of ice volume of the last million years can be explained from orbital parameters by assuming climate oscillates between two states: glaciations and deglaciations (Parrenin and Paillard, 2003; Imbrie et al., 2011) (or terminations). An additional combination of ice volume and orbital parameters seems to form the trigger of a deglaciation, while only orbital parameters seem to play a role in the triggering of glaciations. Here we present an optimized conceptual model which realistically reproduces ice volume variations during the past million years and in particular the timing of the 11 canonical terminations. We show that our model loses sensitivity to initial conditions only after ∼ 200 kyr at maximum: the ice volume observations form a strong attractor. Both obliquity and precession seem necessary to reproduce all 11 terminations and both seem to play approximately the same role.

Note that eccentricity variations are not cited as the cause.

The support for orbital parameters explaining ice age glaciation/deglaciation consists of two papers. First, Parrenin & Paillard: Amplitude and phase of glacial cycles from a conceptual model (2003):

Although we find astronomical frequencies in almost all paleoclimatic records [1,2], it is clear that the climatic system does not respond linearly to insolation variations [3]. The first well-known paradox of the astronomical theory of climate is the ‘100 kyr problem’: the largest variations over the past million years occurred approximately every 100 kyr, but the amplitude of the insolation signal at this frequency is not significant. Although this problem remains puzzling in many respects, multiple equilibria and thresholds in the climate system seem to be key notions to explain this paradoxical frequency.

Their solution:

To explain these paradoxical amplitude and phase modulations, we suggest here that deglaciations started when a combination of insolation and ice volume was large enough. To illustrate this new idea, we present a simple conceptual model that simulates the sea level curve of the past million years with very realistic amplitude modulations, and with good phase modulations.
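To make the quoted idea concrete – “deglaciations started when a combination of insolation and ice volume was large enough” – here is a minimal sketch of a two-state threshold model of that general kind. The forcing, the linear combination and the thresholds are all illustrative assumptions, not the actual Parrenin & Paillard (2003) model or its parameters:

# Minimal two-state ("glaciation"/"deglaciation") threshold model, illustrative only.
import numpy as np

dt = 0.5                                   # time step [kyr]
t = np.arange(0, 1000, dt)                 # 1000 kyr
# Stand-in for a NH summer insolation anomaly (arbitrary amplitudes)
insol = ((1 + 0.6 * np.cos(2 * np.pi * t / 100)) * np.cos(2 * np.pi * t / 23)
         + 0.5 * np.cos(2 * np.pi * t / 41))

v = 0.0                                    # ice volume, arbitrary units
state = "g"                                # "g" = glaciation, "d" = deglaciation
ice = np.zeros_like(t)
for i in range(len(t)):
    if state == "g":
        v += dt * (0.01 - 0.002 * insol[i])        # slow growth, weakly modulated
        if insol[i] + 0.8 * v > 2.5:               # combined insolation + ice volume trigger
            state = "d"
    else:
        v -= dt * 0.08                             # fast melting
        if v <= 0.0:
            v, state = 0.0, "g"                    # back to the glaciation state
    ice[i] = v
# ice now holds a sawtooth-like history with roughly 100 kyr cycles (for these illustrative numbers)

The asymmetry (slow growth, fast collapse, and a trigger that depends on both insolation and accumulated ice) is what lets roughly 100 kyr sawtooth cycles emerge from forcing that has no 100 kyr power of its own – which is the general claim of this family of conceptual models.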

The other paper cited in support of an astronomical solution is A phase-space model for Pleistocene ice volume, Imbrie, Imbrie-Moore & Lisiecki, Earth and Planetary Science Letters (2011)

Numerous studies have demonstrated that Pleistocene glacial cycles are linked to cyclic changes in Earth’s orbital parameters (Hays et al., 1976; Imbrie et al., 1992; Lisiecki and Raymo, 2007); however, many questions remain about how orbital cycles in insolation produce the observed climate response. The most contentious problem is why late Pleistocene climate records are dominated by 100-kyr cyclicity.

Insolation changes are dominated by 41-kyr obliquity and 23-kyr precession cycles whereas the 100-kyr eccentricity cycle produces negligible 100-kyr power in seasonal or mean annual insolation. Thus, various studies have proposed that 100-kyr glacial cycles are a response to the eccentricity-driven modulation of precession (Raymo, 1997; Lisiecki, 2010b), bundling of obliquity cycles (Huybers and Wunsch, 2005; Liu et al., 2008), and/or internal oscillations (Saltzman et al., 1984; Gildor and Tziperman, 2000; Toggweiler, 2008).

Their new solution:

We present a new, phase-space model of Pleistocene ice volume that generates 100-kyr cycles in the Late Pleistocene as a response to obliquity and precession forcing. Like Parrenin and Paillard (2003), we use a threshold for glacial terminations. However, ours is a phase-space threshold: a function of ice volume and its rate of change. Our model is the first to produce an orbitally driven increase in 100-kyr power during the mid-Pleistocene transition without any change in model parameters.

Theory C Family – two (relatively) new papers (2003 & 2011) with similar theories are presented as support for the astronomical theory causing the ice ages. Note that the theory in Imbrie et al 2011 is not the 100 kyr eccentricity variation proposed by Hays, Imbrie and Shackleton 1976.

——

Coherence resonance and ice ages, Jon D. Pelletier, JGR (2003)

The processes and feedbacks responsible for the 100-kyr cycle of Late Pleistocene global climate change are still being debated. This paper presents a numerical model that integrates (1) long-wavelength outgoing radiation, (2) the ice-albedo feedback, and (3) lithospheric deflection within the simple conceptual framework of coherence resonance. Coherence resonance is a dynamical process that results in the amplification of internally generated variability at particular periods in a system with bistability and delay feedback..

..The 100-kyr cycle is a free oscillation in the model, present even in the absence of external forcing.

“Still a mystery” – with the new solution that is not astronomical forcing.

——

The 41 kyr world: Milankovitch’s other unsolved mystery, Maureen E. Raymo & Kerim Nisancioglu (2003)

All serious students of Earth’s climate history have heard of the ‘‘100 kyr problem’’ of Milankovitch orbital theory, namely the lack of an obvious explanation of the dominant 100 kyr periodicity in climate records of the last 800,000 years.

“Still a mystery” – except that Raymo thinks she has found the solution (see earlier)

——

Is the spectral signature of the 100 kyr glacial cycle consistent with a Milankovitch origin?, Ridgwell, Watson & Raymo (1999)

Global ice volume proxy records obtained from deep-sea sediment cores, when analyzed in this way produce a narrow peak corresponding to a period of ~100 kyr that dominates the low frequency part of the spectrum. This contrasts with the spectrum of orbital eccentricity variation, often assumed to be the main candidate to pace the glaciations [Hays et al 1980], which shows two distinct peaks near 100 kyr and substantial power near the 413 kyr period.

Then their solution:

Milankovitch theory seeks to explain the Quaternary glaciations via changes in seasonal insolation caused by periodic changes in the Earth’s obliquity, orbital precession and eccentricity. However, recent high-resolution spectral analysis of d18O proxy climate records have cast doubt on the theory.. Here we show that the spectral signature of d18O records are entirely consistent with Milankovitch mechanisms in which deglaciations are triggered every fourth or fifth precessional cycle. Such mechanisms may involve the buildup of excess ice due to low summertime insolation at the previous precessional high.

So they don’t accept Theory B. They don’t claim the theory has been previously solved and they introduce a Theory C Family.

——

In defense of Milankovitch, Gerard Roe (2006) – we reviewed this paper in Fifteen – Roe vs Huybers:

The Milankovitch hypothesis is widely held to be one of the cornerstones of climate science. Surprisingly, the hypothesis remains not clearly defined despite an extensive body of research on the link between global ice volume and insolation changes arising from variations in the Earth’s orbit.

And despite his interesting efforts at solving the problem he states towards the end of his paper:

The Milankovitch hypothesis as formulated here does not explain the large rapid deglaciations that occurred at the end of some of the ice age cycles.

Was it still a mystery, or just not well defined? And from his new work, I’m not sure whether he thinks he has solved the reason for some ice age terminations, or whether terminations remain a mystery.

——

The 100,000-Year Ice-Age Cycle Identified and Found to Lag Temperature, Carbon Dioxide, and Orbital Eccentricity, Nicholas J. Shackleton (the Shackleton from Hays et al 1976), (2000)

It is generally accepted that this 100-ky cycle represents a major component of the record of changes in total Northern Hemisphere ice volume (3). It is difficult to explain this predominant cycle in terms of orbital eccentricity because “the 100,000-year radiation cycle (arising from eccentricity variations) is much too small in amplitude and too late in phase to produce the corresponding climatic cycle by direct forcing”

So the Hays, Imbrie & Shackleton 1976 Theory B is not correct.

He does state:

Hence, the 100,000-year cycle does not arise from ice sheet dynamics; instead, it is probably the response of the global carbon cycle that generates the eccentricity signal by causing changes in atmospheric carbon dioxide concentration.

Note that this is in opposition to the papers by Imbrie et al (2011) and Parrenin & Paillard (2003) that were cited by Parrenin & Paillard (2012) in support of the astronomical theory of the ice ages.

——

Consequences of pacing the Pleistocene 100 kyr ice ages by nonlinear phase locking to Milankovitch forcing, Tziperman, Raymo, Huybers & Wunsch (2006)

Hays et al. [1976] established that Milankovitch forcing (i.e., variations in orbital parameters and their effect on the insolation at the top of the atmosphere) plays a role in glacial cycle dynamics. However, precisely what that role is, and what is meant by ‘‘Milankovitch theories’’ remains unclear despite decades of work on the subject [e.g., Wunsch, 2004; Rial and Anaclerio, 2000]. Current views vary from the inference that Milankovitch variations in insolation drives the glacial cycle (i.e., the cycles would not exist without Milankovitch variations), to the Milankovitch forcing causing only weak climate perturbations superimposed on the glacial cycles. A further possibility is that the primary influence of the Milankovitch forcing is to set the frequency and phase of the cycles (e.g., controlling the timing of glacial terminations or of glacial inceptions). In the latter case, glacial cycles would exist even in the absence of the insolation changes, but with different timing.

“Still a mystery” – but now solved with a Theory C Family (in their paper).

——

Quantitative estimate of the Milankovitch-forced contribution to observed Quaternary climate change, Carl Wunsch (2004)

The so-called Milankovitch hypothesis, that much of inferred past climate change is a response to near- periodic variations in the earth’s position and orientation relative to the sun, has attracted a great deal of attention. Numerous textbooks (e.g., Bradley, 1999; Wilson et al., 2000; Ruddiman, 2001) of varying levels and sophistication all tell the reader that the insolation changes are a major element controlling climate on time scales beyond about 10,000 years.

A recent paper begins ‘‘It is widely accepted that climate variability on timescales of 10 kyrs to 10 kyrs is driven primarily by orbital, or so-called Milankovitch, forcing.’’ (McDermott et al., 2001). To a large extent, embrace of the Milankovitch hypothesis can be traced to the pioneering work of Hays et al. (1976), who showed, convincingly, that the expected astronomical periods were visible in deep-sea core records..

..The long-standing question of how the slight Milankovitch forcing could possibly force such an enormous glacial–interglacial change is then answered by concluding that it does not do so.

“Still a mystery” – Wunsch does not accept Theory B and in this year didn’t accept Theory C Family (later co-authors a Theory C Family paper with Huybers). I cited this before in Part Six – “Hypotheses Abound”.

——

Individual contribution of insolation and CO2 to the interglacial climates of the past 800,000 years, Qiu Zhen Yin & André Berger (2012)

Climate variations of the last 3 million years are characterized by glacial-interglacial cycles which are generally believed to be driven by astronomically induced insolation changes.

No citation for the claim. Of course I agree that it is “generally believed”. Is this Theory B? Or Theory C Family? Or not sure?

——

Summary of the Papers

Out of about 300 papers checked, I found 34 papers (I might have missed a few) with a statement on the major cause of the ice ages separate from what they attempted to prove in their paper. These 34 papers were reviewed, with a further handful of cited papers examined to see what support they offered for the claim of the paper in question.

In respect of “What has been demonstrated up until our paper” – I count:

  • 19 “still a mystery”
  • 9 proposing Theory B
  • 6 supporting the Theory C Family

I have question marks over my own classification of about 10 of these because they lack clarity on what they believe is the situation to date.

Of course, from the point of view of the papers reviewed each believes they have some solution for the mystery. That’s not primarily what I was interested in.

I wanted to see what each paper accepts as the story so far, and what evidence it brings for that belief.

I found only one paper claiming theory B that attempted to produce any significant evidence in support.

Conclusion

Hays, Imbrie & Shackleton (1976) did not prove Theory B. They suggested it. Invoking “probably non-linearity” does not turn an apparent frequency correlation into proof. And it is only half an apparent frequency correlation, given that eccentricity has a 413 kyr component as well as a 100 kyr component.

Some physical mechanism is necessary. Of course, I’m certain Hays, Imbrie & Shackleton understood this (I’ve read many of their later papers).

Of the papers we reviewed, over half indicate that the solution is still a mystery. That is fine. I agree it is a mystery.

Some papers indicate that the theory is widely believed, but not necessarily that they believe it themselves. That’s probably fine, although it is confusing for non-specialist readers of their paper.

Some papers cite Hays et al 1976 as support for theory B. This is amazing.

Some papers claim “astronomical forcing” and in support cite Hays et al 1976 plus a paper with a different theory from the Theory C Family. This is also amazing.

Some papers cite support for the Theory C Family – an astronomical explanation of the ice ages, but a different theory from Hays et al 1976. Sometimes their cited papers align. However, among papers that accept something in the Theory C Family there is no consensus on which version, and therefore no consensus on which papers support it.

How can papers cite Hays et al for support of the astronomical theory of ice age inception/termination?

It seems to be a journal convention, or requirement, to put forward citations for just about every claim in a paper, even if the entire world has known it since childhood:

The sun rises each day [see Kepler 1596; Newton 1687; Plato 370 BC]

Really? Newton didn’t actually prove it in his paper? Oh, you know what, I just had a quick look at the last few papers in my field and copied their citations so I could get on with putting forward my theory. Come on, we all know the sun rises every day, look out the window (unless you live in England). Anyway, so glad you called, let me explain my new theory, it solves all those other problems, I’ve really got something here..

Well, that might be part of the answer. It isn’t excusable, but introductions don’t have the focus they should have.

Why the Belief in Theory B?

This part I can’t answer. Lots of people have put forward theories, none is generally accepted. The reason for the ice age terminations is unknown. Or known by a few people and not yet accepted by the climate science community.

Is it ok to accept something that everyone else seems to believe, even though they all actually hold different theories? Is it ok to accept as proven something that is not really proven, because it comes from a famous paper with 2,500 citations?

Finally, the fact that most papers have some vague words at the start about the “orbital” or “astronomical” theory for the ice ages doesn’t mean that this theory has any support. Being scientific, being skeptical, means asking for evidence and definitely not accepting an idea just because “everyone else” appears to accept it.

I am sure people will take issue with me. In another blog I was told that scientists were just “dotting the i’s and crossing the t’s” and none of this was seriously in doubt. Apparently, I was following creationist tactics of selective and out-of-context quoting..

Well, I will be delighted and no doubt entertained to read these comments, but don’t forget to provide evidence for the astronomical theory of the ice ages.

Articles in this Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factors of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models

Notes

Note 1: The temperature fluctuations measured in Antarctica are a lot smaller than Greenland but still significant and still present for similar periods. There are also some technical challenges with calculating the temperature change in Antarctica (the relationship between d18O and local temperature) that have been better resolved in Greenland.
