
Archive for the ‘Statistics’ Category

In #5 we examined a statement in the 6th Assessment Report (AR6) and some comments from its main reference, Imbers and co-authors (2014).

Imbers experimented with a couple of simple models of natural variability and drew some conclusions about attribution studies.

We’ll have a look at their models. I’ll try to explain them in simple terms as well as some technical details.

Autoregressive or AR(1) Model

One model for natural variability they looked at goes by the name of first-order autoregressive or AR(1). In principle it’s pretty simple.

Let’s suppose the temperature tomorrow in London were random. Obviously, it wouldn’t be 1000°C. It wouldn’t be 100°C. There’s a range that you expect.

But if it were random, there would be no correlation between yesterday’s temperature and today’s. Like two spins of a roulette wheel or two dice rolls. The past doesn’t influence the present or the future.

We know from personal experience, and we can also see it in climate records, that the temperature today is correlated with the temperature from yesterday. The same applies for this year and last year.

If the temperature yesterday was 15°C, you expect that today it will be closer to 15°C than to the entire range of temperatures in London for this month for the past 50 years.

Essentially, we know that there is some kind of persistence of temperatures (and other climate variables). Yesterday influences today.

AR(1) is a simple model of random variation but includes persistence. It’s possibly the simplest model of random noise with persistence.
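To make that concrete, here is a minimal sketch of an AR(1) series in Python. The persistence coefficient and noise scale are illustrative values chosen for the example, not parameters from Imbers et al. or fitted to any temperature record.

```python
# Minimal AR(1) sketch: today's anomaly = phi * yesterday's anomaly + new noise.
# phi and sigma are illustrative values, not fitted to any climate data.
import numpy as np

rng = np.random.default_rng(0)
phi = 0.6      # persistence: fraction of yesterday's anomaly that carries over
sigma = 1.0    # standard deviation of the fresh random shock each step
n = 1000       # number of time steps (e.g. days)

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)

# With persistence, neighbouring values are correlated:
print("lag-1 autocorrelation:", np.corrcoef(x[:-1], x[1:])[0, 1])  # close to phi
```

Setting phi = 0 recovers pure uncorrelated noise (the roulette wheel); values closer to 1 give longer and longer “memory”.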

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

Read Full Post »

In Part Three – Attribution & Fingerprints we looked at an early paper in this field, from 1996. I was led there by following back through many papers referenced from AR5 Chapter 10. The lead author of that paper, Gabriele Hegerl, has made a significant contribution to the 3rd, 4th and 5th IPCC reports on attribution.

We saw in Part Three that this particular paper ascribed a probability:

We find that the latest observed 30-year trend pattern of near-surface temperature change can be distinguished from all estimates of natural climate variability with an estimated risk of less than 2.5% if the optimal fingerprint is applied.

That paper did note that the greatest uncertainty was in understanding the magnitude of natural variability. This is an essential element of attribution.

It wasn’t explicitly stated whether the 97.5% confidence rested on the premise that natural variability was accurately understood in 1996. I believe that this was the premise. I don’t know what confidence would have been ascribed to the attribution study if uncertainty over natural variability had been included.

IPCC AR5

In this article we will look at the IPCC 5th report, AR5, and see how this field has progressed, specifically in regard to the understanding of natural variability. Chapter 10 covers Detection and Attribution of Climate Change.

From p.881 (the page numbers are from the start of the whole report; chapter 10 has just over 60 pages plus references):

Since the AR4, detection and attribution studies have been carried out using new model simulations with more realistic forcings, and new observational data sets with improved representation of uncertainty (Christidis et al., 2010; Jones et al., 2011, 2013; Gillett et al., 2012, 2013; Stott and Jones, 2012; Knutson et al., 2013; Ribes and Terray, 2013).

Let’s have a look at these papers (see note 1 on CMIP3 & CMIP5).

I had trouble understanding AR5 Chapter 10 because there was no explicit discussion of natural variability. The papers referenced (usually) have their own section on natural variability, but chapter 10 doesn’t actually cover it.

I emailed Geert Jan van Oldenborgh to ask for help. He is the author of one paper we will briefly look at here – his paper was very interesting and he had a video segment explaining his paper. He suggested the problem was more about communication because natural variability was covered in chapter 9 on models. He had written a section in chapter 11 that he pointed me towards, so this article became something that tried to grasp the essence of three chapters (9 – 11), over 200 pages of reports and several pallet loads of papers.

So I’m not sure I can do the synthesis justice, but what I will endeavor to do in this article is demonstrate the minimal focus (in IPCC AR5) on how well models represent natural variability.

That subject deserves a lot more attention, so this article will be less about what natural variability is, and more about how little focus it gets in AR5. I only arrived here because I was determined to understand “fingerprints” and especially the rationale behind the certainties ascribed.

Subsequent articles will continue the discussion on natural variability.

Knutson et al 2013

The models [CMIP5] are found to provide plausible representations of internal climate variability, although there is room for improvement..

..The modeled internal climate variability from long control runs is used to determine whether observed and simulated trends are consistent or inconsistent. In other words, we assess whether observed and simulated forced trends are more extreme than those that might be expected from random sampling of internal climate variability.

Later

The model control runs exhibit long-term drifts. The magnitudes of these drifts tend to be larger in the CMIP3 control runs than in the CMIP5 control runs, although there are exceptions. We assume that these drifts are due to the models not being in equilibrium with the control run forcing, and we remove the drifts by a linear trend analysis (depicted by the orange straight lines in Fig. 1). In some CMIP3 cases, the drift initially proceeds at one rate, but then the trend becomes smaller for the remainder of the run. We approximate the drift in these cases by two separate linear trend segments, which are identified in the figure by the short vertical orange line segments. These long-term drift trends are removed to produce the drift corrected series.

[Emphasis added].
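As a rough sketch of the kind of drift correction described in the quote (a linear fit subtracted from the control run), something like the following would do; it is an illustration of the idea, not the authors’ actual procedure or code.

```python
# Sketch: remove a linear drift from a control-run time series by
# fitting a straight line and subtracting it. Synthetic data only.
import numpy as np

def remove_linear_drift(years, series):
    """Fit a straight line to the series and subtract it."""
    slope, intercept = np.polyfit(years, series, deg=1)
    return series - (slope * years + intercept)

rng = np.random.default_rng(1)
years = np.arange(500.0)                                           # 500-year control run
control = rng.normal(0.0, 0.1, size=years.size) + 0.0005 * years   # noise + small drift
drift_corrected = remove_linear_drift(years, control)
```

The two-segment case mentioned in the quote would simply apply the same fit separately to each portion of the run.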

Another paper suggests this assumption might not be correct. Here is Jones, Stott and Christidis (2013) – “piControl” refers to the pre-industrial control simulations, i.e. the model runs representing natural variability:

Often a model simulation with no changes in external forcing (piControl) will have a drift in the climate diagnostics due to various flux imbalances in the model [Gupta et al., 2012]. Some studies attempt to account for possible model climate drifts, for instance Figure 9.5 in Hegerl et al. [2007] did not include transient simulations of the 20th century if the long-term trend of the piControl was greater in magnitude than 0.2 K/century (Appendix 9.C in Hegerl et al. [2007]).

Another technique is to remove the trend, from the transient simulations, deduced from a parallel section of piControl [e.g., Knutson et al., 2006]. However whether one should always remove the piControl trend, and how to do it in practice, is not a trivial issue [Taylor et al., 2012; Gupta et al., 2012]..

..We choose not to remove the trend from the piControl from parallel simulations of the same model in this study due to the impact it would have on long-term variability, i.e., the possibility that part of the trend in the piControl may be long-term internal variability that may or may not happen in a parallel experiment when additional forcing has been applied.

Here are further comments from Knutson et al 2013:

Five of the 24 CMIP3 models, identified by “(-)” in Fig. 1, were not used, or practically not used, beyond Fig. 1 in our analysis. For instance, the IAP_fgoals1.0.g model has a strong discontinuity near year 200 of the control run. We judge this as likely an artifact due to some problem with the model simulation, and we therefore chose to exclude this model from further analysis

Figure 1 – From Knutson et al 2013

Perhaps this is correct. Or perhaps the jump in simulated temperature is the climate model capturing natural climate variability.

The authors do comment:

As noted by Wittenberg (2009) and Vecchi and Wittenberg (2010), long-running control runs suggest that internally generated SST variability, at least in the ENSO region, can vary substantially between different 100-yr periods (approximately the length of record used here for observations), which again emphasizes the caution that must be placed on comparisons of modeled vs. observed internal variability based on records of relatively limited duration.

The first paper referenced, Wittenberg 2009, was the paper we looked at in Part Six – El Nino.

So is the “caution” that comes from that study included in the probability attached to our models’ ability to simulate natural variability?

In reality, questions about internal variability are not really discussed. Trends are removed; models with discontinuities are treated as artifacts and excluded. What is left? This paper essentially takes the modeling output from the CMIP3 and CMIP5 archives (with and without GHG forcing) as a given and applies some tests.

Ribes & Terray 2013

This was a “Part II” paper and they said:

We use the same estimates of internal variability as in Ribes et al. 2013 [the “Part I”].

These are based on intra-ensemble variability from the above CMIP5 experiments as well as pre-industrial simulations from both the CMIP3 and CMIP5 archives, leading to a much larger sample than previously used (see Ribes et al. 2013 for details about ensembles). We then implicitly assume that the multi-model internal variability estimate is reliable.

[Emphasis added]. The Part I paper said:

An estimate of internal climate variability is required in detection and attribution analysis, for both optimal estimation of the scaling factors and uncertainty analysis.

Estimates of internal variability are usually based on climate simulations, which may be control simulations (i.e. in the present case, simulations with no variations in external forcings), or ensembles of simulations with the same prescribed external forcings.

In the latter case, m – 1 independent realisations of pure internal variability may be obtained by subtracting the ensemble mean from each member (assuming again additivity of the responses) and rescaling the result by a factor √(m/(m-1)) , where m denotes the number of members in the ensemble.

Note that estimation of internal variability usually means estimation of the covariance matrix of a spatio-temporal climate-vector, the dimension of this matrix potentially being high. We choose to use a multi-model estimate of internal climate variability, derived from a large ensemble of climate models and simulations. This multi-model estimate is subject to lower sampling variability and better represents the effects of model uncertainty on the estimate of internal variability than individual model estimates. We then simultaneously consider control simulations from the CMIP3 and CMIP5 archives, and ensembles of historical simulations (including simulations with individual sets of forcings) from the CMIP5 archive.

All control simulations longer than 220 years (i.e. twice the length of our study period) and all ensembles (at least 2 members) are used. The overall drift of control simulations is removed by subtracting a linear trend over the full period.. We then implicitly assume that this multi-model internal variability estimate is reliable.

[Emphasis added]. So there are two approaches to evaluating internal variability – one uses GCM runs with no GHG forcing; the other uses the variation between different runs of the same model (with GHG forcing) to estimate natural variability. Drift is removed as “an error”.
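A minimal sketch of the second approach (intra-ensemble variability) might look like this, using synthetic data; it follows the subtraction-and-rescaling recipe quoted above but is not Ribes et al.’s code.

```python
# Sketch: estimate realisations of internal variability from an ensemble
# of forced runs by removing the ensemble mean and rescaling by
# sqrt(m/(m-1)). Synthetic data standing in for model output.
import numpy as np

rng = np.random.default_rng(2)
m, n_time = 5, 110                           # ensemble members, time steps
forced_signal = 0.01 * np.arange(n_time)     # common forced response
ensemble = forced_signal + rng.normal(0.0, 0.2, size=(m, n_time))

ensemble_mean = ensemble.mean(axis=0)
internal = (ensemble - ensemble_mean) * np.sqrt(m / (m - 1))
# Each row of `internal` is one approximate realisation of pure internal
# variability, with the (common) forced response removed.
```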

Chapter 10 on Spatial Trends

The IPCC report also reviews the spatial simulations compared with spatial observations, p. 880:

Figure 10.2a shows the pattern of annual mean surface temperature trends observed over the period 1901–2010, based on Hadley Centre/Climatic Research Unit gridded surface temperature data set 4 (HadCRUT4). Warming has been observed at almost all locations with sufficient observations available since 1901.

Rates of warming are generally higher over land areas compared to oceans, as is also apparent over the 1951–2010 period (Figure 10.2c), which simulations indicate is due mainly to differences in local feedbacks and a net anomalous heat transport from oceans to land under GHG forcing, rather than differences in thermal inertia (e.g., Boer, 2011). Figure 10.2e demonstrates that a similar pattern of warming is simulated in the CMIP5 simulations with natural and anthropogenic forcing over the 1901–2010 period. Over most regions, observed trends fall between the 5th and 95th percentiles of simulated trends, and van Oldenborgh et al. (2013) find that over the 1950–2011 period the pattern of observed grid cell trends agrees with CMIP5 simulated trends to within a combination of model spread and internal variability..

van Oldenborgh et al (2013)

Let’s take a look at van Oldenborgh et al (2013).

There’s a nice video of (I assume) the lead author talking about the paper and comparing the probabilistic approach used in weather forecasts with that of climate models (see Ensemble Forecasting). I recommend the video for a good introduction to the topic of ensemble forecasting.

With weather forecasting the probability comes from running ensembles of weather models and seeing, for example, how many simulations predict rain vs how many do not. The proportion is the probability of rain. With weather forecasting we can continually review how well the probabilities given by ensembles match the reality. Over time we will build up a set of statistics of “probability of rain” and compare with the frequency of actual rainfall. It’s pretty easy to see if the models are over-confident or under-confident.

Here is what the authors say about the problem and how they approached it:

The ensemble is considered to be an estimate of the probability density function (PDF) of a climate forecast. This is the method used in weather and seasonal forecasting (Palmer et al 2008). Just like in these fields it is vital to verify that the resulting forecasts are reliable in the definition that the forecast probability should be equal to the observed probability (Joliffe and Stephenson 2011).

If outcomes in the tail of the PDF occur more (less) frequently than forecast the system is overconfident (underconfident): the ensemble spread is not large enough (too large).

In contrast to weather and seasonal forecasts, there is no set of hindcasts to ascertain the reliability of past climate trends per region. We therefore perform the verification study spatially, comparing the forecast and observed trends over the Earth. Climate change is now so strong that the effects can be observed locally in many regions of the world, making a verification study on the trends feasible. Spatial reliability does not imply temporal reliability, but unreliability does imply that at least in some areas the forecasts are unreliable in time as well. In the remainder of this letter we use the word ‘reliability’ to indicate spatial reliability.

[Emphasis added]. The paper first shows the result for one location, the Netherlands, with the spread of model results vs the actual result from 1950-2011:

Figure 2 – From van Oldenborgh et al 2013

We can see that the models are overall mostly below the observation. But this is one data point. So if we compare all of the data points – and this is on a grid of 2.5º – how do the model spreads compare with the observations? Are observations above 95% of the model results only 5% of the time? Or more than 5% of the time? And are observations below 5% of the model results only 5% of the time?

We can see that the frequency of observations in the bottom 5% of model results is about 13% and the frequency of observations in the top 5% of model results is about 20%. Therefore the models are “overconfident” in spatial representation of the last 60 year trends:

Figure 3 – From van Oldenborgh et al 2013
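As a rough sketch of the kind of spatial check being described – at each grid cell, how often does the observed trend fall below the 5th or above the 95th percentile of the model trends? – consider the toy calculation below. The numbers are synthetic; this is not the paper’s method or data.

```python
# Sketch: fraction of grid cells where the observed trend falls outside
# the 5-95% range of the model ensemble. Synthetic trends only.
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_models = 5000, 40
model_trends = rng.normal(0.10, 0.05, size=(n_models, n_cells))  # per-cell trends
obs_trends = rng.normal(0.12, 0.06, size=n_cells)

p5 = np.percentile(model_trends, 5, axis=0)
p95 = np.percentile(model_trends, 95, axis=0)

frac_below = np.mean(obs_trends < p5)
frac_above = np.mean(obs_trends > p95)
print(frac_below, frac_above)
# For a spatially reliable ensemble each fraction should be about 0.05;
# much larger values indicate an overconfident (too narrow) spread.
```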

We investigated the reliability of trends in the CMIP5 multi-model ensemble prepared for the IPCC AR5. In agreement with earlier studies using the older CMIP3 ensemble, the temperature trends are found to be locally reliable. However, this is due to the differing global mean climate response rather than a correct representation of the spatial variability of the climate change signal up to now: when normalized by the global mean temperature the ensemble is overconfident. This agrees with results of Sakaguchi et al (2012) that the spatial variability in the pattern of warming is too small. The precipitation trends are also overconfident. There are large areas where trends in both observational dataset are (almost) outside the CMIP5 ensemble, leading us to conclude that this is unlikely due to faulty observations.

It’s probably important to note that the author comments in the video “on the larger scale the models are not doing so badly”.

It’s an interesting paper. I’m not clear whether the brief note in AR5 reflects the paper’s conclusions.

Jones et al 2013

It was reassuring to finally find a statement that confirmed what seemed obvious from the “omissions”:

A basic assumption of the optimal detection analysis is that the estimate of internal variability used is comparable with the real world’s internal variability.

Surely I can’t be the only one reading Chapter 10 and trying to understand the assumptions built into the “with 95% confidence” result. If Chapter 10 is only aimed at climate scientists who work in the field of attribution and detection it is probably fine not to actually mention this minor detail in the tight constraints of only 60 pages.

But if Chapter 10 is aimed at a wider audience it seems a little remiss not to bring it up in the chapter itself.

I probably missed the stated caveat in chapter 10’s executive summary or introduction.

The authors continue:

As the observations are influenced by external forcing, and we do not have a non-externally forced alternative reality to use to test this assumption, an alternative common method is to compare the power spectral density (PSD) of the observations with the model simulations that include external forcings.

We have already seen that overall the CMIP5 and CMIP3 model variability compares favorably across different periodicities with HadCRUT4-observed variability (Figure 5). Figure S11 (in the supporting information) includes the PSDs for each of the eight models (BCC-CSM1-1, CNRM-CM5, CSIRO-Mk3-6-0, CanESM2, GISS-E2-H, GISS-E2-R, HadGEM2-ES and NorESM1-M) that can be examined in the detection analysis.

Variability for the historical experiment in most of the models compares favorably with HadCRUT4 over the range of periodicities, except for HadGEM2-ES whose very long period variability is lower due to the lower overall trend than observed and for CanESM2 and bcc-cm1-1 whose decadal and higher period variability are larger than observed.

While not a strict test, Figure S11 suggests that the models have an adequate representation of internal variability—at least on the global mean level. In addition, we use the residual test from the regression to test whether there are any gross failings in the models representation of internal variability.

Figure S11 is in the supplementary section of the paper:

Figure 4 – From Jones et al 2013, figure S11

From what I can see, this demonstrates that the spectrum of the models’ internal variability (“historicalNat”) is different from the spectrum of the models’ forced response with GHG changes (“historical”).

It feels like my quantum mechanics classes all over again. I’m probably missing something obvious, and hopefully knowledgeable readers can explain.
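For what it’s worth, the basic mechanics of a PSD comparison like the one quoted are easy to sketch. The series below are synthetic AR(1) stand-ins for an observed and a simulated global-mean temperature record; this is only an illustration of the technique, not the method or data of Jones et al.

```python
# Sketch: compare power spectral densities of an "observed" and a
# "simulated" global-mean temperature series. Both series are synthetic.
import numpy as np
from scipy.signal import welch

def ar1(n, phi, sigma, seed):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

obs = ar1(160, 0.5, 0.1, seed=4)   # stand-in for ~160 years of observations
sim = ar1(160, 0.7, 0.1, seed=5)   # stand-in for one model's historical run

f_obs, psd_obs = welch(obs, fs=1.0, nperseg=64)  # frequencies in cycles/year
f_sim, psd_sim = welch(sim, fs=1.0, nperseg=64)
# Plotting psd_obs and psd_sim against frequency gives the kind of
# "not a strict test" comparison the paper describes.
```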

Chapter 9 of AR5 – Climate Models’ Representation of Internal Variability

Chapter 9, reviewing models, stretches to over 80 pages. The section on internal variability is section 9.5.1:

However, the ability to simulate climate variability, both unforced internal variability and forced variability (e.g., diurnal and seasonal cycles) is also important. This has implications for the signal-to-noise estimates inherent in climate change detection and attribution studies where low-frequency climate variability must be estimated, at least in part, from long control integrations of climate models (Section 10.2).

Section 9.5.3:

In addition to the annual, intra-seasonal and diurnal cycles described above, a number of other modes of variability arise on multi-annual to multi-decadal time scales (see also Box 2.5). Most of these modes have a particular regional manifestation whose amplitude can be larger than that of human-induced climate change. The observational record is usually too short to fully evaluate the representation of variability in models and this motivates the use of reanalysis or proxies, even though these have their own limitations.

Figure 9.33a shows simulated internal variability of mean surface temperature from CMIP5 pre-industrial control simulations. Model spread is largest in the tropics and mid to high latitudes (Jones et al., 2012), where variability is also large; however, compared to CMIP3, the spread is smaller in the tropics owing to improved representation of ENSO variability (Jones et al., 2012). The power spectral density of global mean temperature variance in the historical simulations is shown in Figure 9.33b and is generally consistent with the observational estimates. At longer time scales, the spectra estimated from last millennium simulations, performed with a subset of the CMIP5 models, can be assessed by comparison with different NH temperature proxy records (Figure 9.33c; see Chapter 5 for details). The CMIP5 millennium simulations include natural and anthropogenic forcings (solar, volcanic, GHGs, land use) (Schmidt et al., 2012).

Significant differences between unforced and forced simulations are seen for time scales larger than 50 years, indicating the importance of forced variability at these time scales (Fernandez-Donado et al., 2013). It should be noted that a few models exhibit slow background climate drift which increases the spread in variance estimates at multi-century time scales.

Nevertheless, the lines of evidence above suggest with high confidence that models reproduce global and NH temperature variability on a wide range of time scales.

[Emphasis added]. Here is fig 9.33:

Figure 5 – IPCC AR5, Figure 9.33 (Chapter 9) – Click to Expand

The bottom graph shows the spectra of the last 1,000 years – black line is observations (reconstructed from proxies), dashed lines are without GHG forcings, and solid lines are with GHG forcings.

In later articles we will review this in more detail.

Conclusion

The IPCC report on attribution is very interesting. Most attribution studies compare observations of the last 100 – 150 years with model simulations using anthropogenic GHG changes and model simulations without (note 3).

The results show a much better match for the case of the anthropogenic forcing.

The primary method is with global mean surface temperature, with more recent studies also comparing the spatial breakdown. We saw one such comparison with van Oldenborgh et al (2013). Jones et al (2013) also reviews spatial matching, finding a better fit (of models & observations) for the last half of the 20th century than the first half. (As with van Oldenborgh’s paper, the % match outside 90% of model results was greater than 10%).

My question as I first read Chapter 10 was: how was the high confidence attained, and what is a fingerprint?

I was led back, by following the chain of references, to one of the early papers on the topic (1996) that also had similar high confidence. (We saw this in Part Three). It was intriguing that such confidence could be attained with just a few “no forcing” model runs as comparison, all of which needed “flux adjustment”. Current models need much less, or often zero, flux adjustment.

In later papers reviewed in AR5, “no forcing” model simulations that show temperature trends or jumps are often removed or adjusted.

I’m not trying to suggest that “no forcing” GCM simulations of the last 150 years have anything like the temperature changes we have observed. They don’t.

But I was trying to understand what assumptions and premises were involved in attribution. Chapter 10 of AR5 has been valuable in suggesting references to read, but poor at laying out the assumptions and premises of attribution studies.

For clarity, as I stated in Part Three:

..as regular readers know I am fully convinced that the increases in CO2, CH4 and other GHGs over the past 100 years or more can be very well quantified into “radiative forcing” and am 100% in agreement with the IPCC’s summary of the work of atmospheric physics over the last 50 years on this topic. That is, the increases in GHGs have led to something like a “radiative forcing” of 2.8 W/m²..

..Therefore, it’s “very likely” that the increases in GHGs over the last 100 years have contributed significantly to the temperature changes that we have seen.

So what’s my point?

Chapter 10 of the IPCC report fails to highlight the important assumptions in the attribution studies. Chapter 9 of the IPCC report has a section on centennial/millennial natural variability with a “high confidence” conclusion that comes with little evidence and appears to be based on a cursory comparison of the spectral results of the last 1,000 years proxy results with the CMIP5 modeling studies.

In chapter 10, the executive summary states:

..given that observed warming since 1951 is very large compared to climate model estimates of internal variability (Section 10.3.1.1.2), which are assessed to be adequate at global scale (Section 9.5.3.1), we conclude that it is virtually certain [99-100%] that internal variability alone cannot account for the observed global warming since 1951.

[Emphasis added]. I agree, and I don’t think anyone who understands radiative forcing and climate basics would disagree. To claim otherwise would be as ridiculous as, for example, claiming that tiny changes in solar insolation from eccentricity modifications over 100 kyrs cause the end of ice ages, whereas large temperature changes during these ice ages have no effect (see note 2).

The executive summary also says:

It is extremely likely [95–100%] that human activities caused more than half of the observed increase in GMST from 1951 to 2010.

The idea is plausible, but the confidence level is dependent on a premise that is claimed via one graph (fig 9.33) of the spectrum of the last 1,000 years. High confidence (“that models reproduce global and NH temperature variability on a wide range of time scales”) is just an opinion.

It’s crystal clear, by inspection of CMIP3 and CMIP5 model results, that models with anthropogenic forcing match the last 150 years of temperature changes much better than models held at constant pre-industrial forcing.

I believe natural variability is a difficult subject which needs a lot more than a cursory graph of the spectrum of the last 1,000 years to even achieve low confidence in our understanding.

Chapters 9 & 10 of AR5 haven’t investigated “natural variability” at all. For interest, some skeptic opinions are given in note 4.

I propose an alternative summary for Chapter 10 of AR5:

It is extremely likely [95–100%] that human activities caused more than half of the observed increase in GMST from 1951 to 2010, but this assessment is subject to considerable uncertainties.

Articles in the Series

Natural Variability and Chaos – One – Introduction

Natural Variability and Chaos – Two – Lorenz 1963

Natural Variability and Chaos – Three – Attribution & Fingerprints

Natural Variability and Chaos – Four – The Thirty Year Myth

Natural Variability and Chaos – Five – Why Should Observations match Models?

Natural Variability and Chaos – Six – El Nino

Natural Variability and Chaos – Seven – Attribution & Fingerprints Or Shadows?

Natural Variability and Chaos – Eight – Abrupt Change

References

Multi-model assessment of regional surface temperature trends, TR Knutson, F Zeng & AT Wittenberg, Journal of Climate (2013) – free paper

Attribution of observed historical near surface temperature variations to anthropogenic and natural causes using CMIP5 simulations, Gareth S Jones, Peter A Stott & Nikolaos Christidis, Journal of Geophysical Research Atmospheres (2013) – paywall paper

Application of regularised optimal fingerprinting to attribution. Part II: application to global near-surface temperature, Aurélien Ribes & Laurent Terray, Climate Dynamics (2013) – free paper

Application of regularised optimal fingerprinting to attribution. Part I: method, properties and idealised analysis, Aurélien Ribes, Serge Planton & Laurent Terray, Climate Dynamics (2013) – free paper

Reliability of regional climate model trends, GJ van Oldenborgh, FJ Doblas Reyes, SS Drijfhout & E Hawkins, Environmental Research Letters (2013) – free paper

Notes

Note 1: CMIP = Coupled Model Intercomparison Project. CMIP3 was for AR4 and CMIP5 was for AR5.

Read about CMIP5:

At a September 2008 meeting involving 20 climate modeling groups from around the world, the WCRP’s Working Group on Coupled Modelling (WGCM), with input from the IGBP AIMES project, agreed to promote a new set of coordinated climate model experiments. These experiments comprise the fifth phase of the Coupled Model Intercomparison Project (CMIP5). CMIP5 will notably provide a multi-model context for

1) assessing the mechanisms responsible for model differences in poorly understood feedbacks associated with the carbon cycle and with clouds

2) examining climate “predictability” and exploring the ability of models to predict climate on decadal time scales, and, more generally

3) determining why similarly forced models produce a range of responses…

From the website link above you can read more. CMIP5 is a substantial undertaking, with massive output of data from the latest climate models. Anyone can access this data, similar to CMIP3. Here is the Getting Started page.

And CMIP3:

In response to a proposed activity of the World Climate Research Programme (WCRP) Working Group on Coupled Modelling (WGCM), PCMDI volunteered to collect model output contributed by leading modeling centers around the world. Climate model output from simulations of the past, present and future climate was collected by PCMDI mostly during the years 2005 and 2006, and this archived data constitutes phase 3 of the Coupled Model Intercomparison Project (CMIP3). In part, the WGCM organized this activity to enable those outside the major modeling centers to perform research of relevance to climate scientists preparing the Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC). The IPCC was established by the World Meteorological Organization and the United Nations Environmental Program to assess scientific information on climate change. The IPCC publishes reports that summarize the state of the science.

This unprecedented collection of recent model output is officially known as the “WCRP CMIP3 multi-model dataset.” It is meant to serve IPCC’s Working Group 1, which focuses on the physical climate system — atmosphere, land surface, ocean and sea ice — and the choice of variables archived at the PCMDI reflects this focus. A more comprehensive set of output for a given model may be available from the modeling center that produced it.

With the consent of participating climate modelling groups, the WGCM has declared the CMIP3 multi-model dataset open and free for non-commercial purposes. After registering and agreeing to the “terms of use,” anyone can now obtain model output via the ESG data portal, ftp, or the OPeNDAP server.

As of July 2009, over 36 terabytes of data were in the archive and over 536 terabytes of data had been downloaded among the more than 2500 registered users

Note 2: This idea is explained in Ghosts of Climates Past – Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes; see especially the section under the heading: Why Theory B is Unsupportable.

Note 3: Some studies use just fixed pre-industrial values, and others compare “natural forcings” with “no forcings”.

“Natural forcings” = radiative changes due to solar insolation variations (which are not known with much confidence) and aerosols from volcanos. “No forcings” is simply fixed pre-industrial values.

Note 4: Chapter 11 (of AR5), p.982:

For the remaining projections in this chapter the spread among the CMIP5 models is used as a simple, but crude, measure of uncertainty. The extent of agreement between the CMIP5 projections provides rough guidance about the likelihood of a particular outcome. But—as partly illustrated by the discussion above—it must be kept firmly in mind that the real world could fall outside of the range spanned by these particular models. See Section 11.3.6 for further discussion.

And p. 1004:

It is possible that the real world might follow a path outside (above or below) the range projected by the CMIP5 models. Such an eventuality could arise if there are processes operating in the real world that are missing from, or inadequately represented in, the models. Two main possibilities must be considered: (1) Future radiative and other forcings may diverge from the RCP4.5 scenario and, more generally, could fall outside the range of all the RCP scenarios; (2) The response of the real climate system to radiative and other forcing may differ from that projected by the CMIP5 models. A third possibility is that internal fluctuations in the real climate system are inadequately simulated in the models. The fidelity of the CMIP5 models in simulating internal climate variability is discussed in Chapter 9..

..The response of the climate system to radiative and other forcing is influenced by a very wide range of processes, not all of which are adequately simulated in the CMIP5 models (Chapter 9). Of particular concern for projections are mechanisms that could lead to major ‘surprises’ such as an abrupt or rapid change that affects global-to-continental scale climate.

Several such mechanisms are discussed in this assessment report; these include: rapid changes in the Arctic (Section 11.3.4 and Chapter 12), rapid changes in the ocean’s overturning circulation (Chapter 12), rapid change of ice sheets (Chapter 13) and rapid changes in regional monsoon systems and hydrological climate (Chapter 14). Additional mechanisms may also exist as synthesized in Chapter 12. These mechanisms have the potential to influence climate in the near term as well as in the long term, albeit the likelihood of substantial impacts increases with global warming and is generally lower for the near term.

And p. 1009 (note that we looked at Rowlands et al 2012 in Part Five – Why Should Observations match Models?):

The CMIP3 and CMIP5 projections are ensembles of opportunity, and it is explicitly recognized that there are sources of uncertainty not simulated by the models. Evidence of this can be seen by comparing the Rowlands et al. (2012) projections for the A1B scenario, which were obtained using a very large ensemble in which the physics parameterizations were perturbed in a single climate model, with the corresponding raw multi-model CMIP3 projections. The former exhibit a substantially larger likely range than the latter. A pragmatic approach to addressing this issue, which was used in the AR4 and is also used in Chapter 12, is to consider the 5 to 95% CMIP3/5 range as a ‘likely’ rather than ‘very likely’ range.

Replacing ‘very likely’ = 90–100% with ‘likely 66–100%’ is a good start. How does this recast chapter 10?

And Chapter 1 of AR5, p. 138:

Model spread is often used as a measure of climate response uncertainty, but such a measure is crude as it takes no account of factors such as model quality (Chapter 9) or model independence (e.g., Masson and Knutti, 2011; Pennell and Reichler, 2011), and not all variables of interest are adequately simulated by global climate models..

..Climate varies naturally on nearly all time and space scales, and quantifying precisely the nature of this variability is challenging, and is characterized by considerable uncertainty.

[Emphasis added in all bold sections above]

Read Full Post »

In (still) writing what was to be Part Six (Attribution in AR5 from the IPCC), I was working through Knutson et al 2013, one of the papers referenced by AR5. That paper in turn referenced Are historical records sufficient to constrain ENSO simulations? [link corrected] by Andrew Wittenberg (2009). This is a very interesting paper and I was glad to find it because it illustrates some of the points we have been looking at.

It’s an easy paper to read (and free) and so I recommend reading the whole paper.

The paper uses NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL) CM2.1 global coupled atmosphere/ocean/land/ice GCM (see note 1 for reference and description):

CM2.1 played a prominent role in the third Coupled Model Intercomparison Project (CMIP3) and the Fourth Assessment of the Intergovernmental Panel on Climate Change (IPCC), and its tropical and ENSO simulations have consistently ranked among the world’s top GCMs [van Oldenborgh et al., 2005; Wittenberg et al., 2006; Guilyardi, 2006; Reichler and Kim, 2008].

The coupled pre-industrial control run is initialized as by Delworth et al. [2006], and then integrated for 2220 yr with fixed 1860 estimates of solar irradiance, land cover, and atmospheric composition; we focus here on just the last 2000 yr. This simulation required one full year to run on 60 processors at GFDL.

First of all we see the challenge for climate models – a reasonable-resolution coupled GCM running just one 2000-year simulation consumed a full year on 60 processors.

Wittenberg shows the results in the graph below. At the top is our observational record going back 140 years, then below are the simulation results of the SST variation in the El Nino region broken into 20 century-long segments.

Figure 1 – From Wittenberg 2009 – Click to Expand

What we see is that different centuries have very different results:

There are multidecadal epochs with hardly any variability (M5); epochs with intense, warm-skewed ENSO events spaced five or more years apart (M7); epochs with moderate, nearly sinusoidal ENSO events spaced three years apart (M2); and epochs that are highly irregular in amplitude and period (M6). Occasional epochs even mimic detailed temporal sequences of observed ENSO events; e.g., in both R2 and M6, there are decades of weak, biennial oscillations, followed by a large warm event, then several smaller events, another large warm event, and then a long quiet period. Although the model’s NINO3 SST variations are generally stronger than observed, there are long epochs (like M1) where the ENSO amplitude agrees well with observations (R1).

Wittenberg comments on the problem for climate modelers:

An unlucky modeler – who by chance had witnessed only M1-like variability throughout the first century of simulation – might have erroneously inferred that the model’s ENSO amplitude matched observations, when a longer simulation would have revealed a much stronger ENSO.

If the real-world ENSO is similarly modulated, then there is a more disturbing possibility. Had the research community been unlucky enough to observe an unrepresentative ENSO over the past 150 yr of measurements, then it might collectively have misjudged ENSO’s longer-term natural behavior. In that case, historically-observed statistics could be a poor guide for modelers, and observed trends in ENSO statistics might simply reflect natural variations..

..A 200 yr epoch of consistently strong variability (M3) can be followed, just one century later, by a 200 yr epoch of weak variability (M4). Documenting such extremes might thus require a 500+ yr record. Yet few modeling centers currently attempt simulations of that length when evaluating CGCMs under development – due to competing demands for high resolution, process completeness, and quick turnaround to permit exploration of model sensitivities.

Model developers thus might not even realize that a simulation manifested long-term ENSO modulation, until long after freezing the model development. Clearly this could hinder progress. An unlucky modeler – unaware of centennial ENSO modulation and misled by comparisons between short, unrepresentative model runs – might erroneously accept a degraded model or reject an improved model.

[Emphasis added].

Wittenberg shows the same data in the frequency domain and has presented the data in a way that illustrates the different perspective you might have depending upon your period of observation or period of model run. It’s worth taking the time to understand what is in these graphs:

Figure 2 – From Wittenberg 2009 – Click to Expand

The first graph, 2a:

..time-mean spectra of the observations for epochs of length 20 yr – roughly the duration of observations from satellites and the Tropical Atmosphere Ocean (TAO) buoy array. The spectral power is fairly evenly divided between the seasonal cycle and the interannual ENSO band, the latter spanning a broad range of time scales between 1.3 to 8 yr.

So the different colored lines indicate the spectral power for each period. The black dashed line is the observed spectral power over the 140 year (observational) period. This dashed line is repeated in figure 2c.

The second graph, 2b, shows the modeled results if we break up the 2000 years into 100 x 20-year periods.

The third graph, 2c, shows the modeled results broken up into 100-year periods. The probability number in the bottom right, 90%, is the likelihood of observations falling outside the range of the model results – in the paper’s words, treating “the simulated subspectra [as] independent and identically distributed”, the value “at bottom right is the probability that an interval so constructed would bracket the next subspectrum to emerge from the model”.

So what this says, paraphrasing and over-simplifying: “we are 90% sure that the observations can’t be explained by the models”.

Of course, this independent and identically distributed assumption is not valid, but as we will hopefully see in many articles further into this series, most of these statistical assumptions – stationarity, Gaussian statistics, AR(1) – are problematic for real-world non-linear systems.

To be clear, the paper’s author is demonstrating a problem in such a statistical approach.
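The flavour of the exercise is easy to reproduce with a toy example: take one long unforced series, cut it into century-long segments and compute a spectrum for each. The series below is a crude synthetic stand-in for a control run, not CM2.1 output, and the code is only an illustration of the idea in figure 2.

```python
# Toy version of the segment-spectra exercise: split a long "control run"
# into 100-year pieces and see how much the spectrum varies between them.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(6)
n_years = 2000
months = n_years * 12
t = np.arange(months)
# Crude ENSO-ish stand-in: a noisy ~4-year oscillation plus background noise
series = (np.sin(2 * np.pi * t / 48.0) * (1 + 0.5 * rng.normal(size=months))
          + rng.normal(0.0, 0.5, size=months))

segment_len = 100 * 12                       # 100-year segments of monthly data
spectra = []
for start in range(0, months, segment_len):
    f, psd = welch(series[start:start + segment_len], fs=12.0, nperseg=256)
    spectra.append(psd)

spectra = np.array(spectra)                  # 20 segments x frequencies
print(spectra.std(axis=0) / spectra.mean(axis=0))   # spread between centuries
```

Even with a fixed recipe for generating the series (the analogue of fixed model physics), individual centuries can look quite different from one another.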

Conclusion

Models are not reality. This is a simulation with the GFDL model. It doesn’t mean ENSO is like this. But it might be.

The paper illustrates a problem I highlighted in Part Five – observations are only one “realization” of possible outcomes. The last century or century and a half of surface observations could be an outlier. The last 30 years of satellite data could equally be an outlier. Even if our observational periods are not an outlier and are right there on the mean or median, matching climate models to observations may still greatly under-represent natural climate variability.

Non-linear systems can demonstrate variability over much longer time-scales than the typical period between characteristic events. We will return to this in future articles in more detail. Such systems do not have to be “chaotic” (where chaotic means that tiny changes in initial conditions cause rapidly diverging results).

What period of time is necessary to capture natural climate variability?

I will give the last word to the paper’s author:

More worryingly, if nature’s ENSO is similarly modulated, there is no guarantee that the 150 yr historical SST record is a fully representative target for model development..

..In any case, it is sobering to think that even absent any anthropogenic changes, the future of ENSO could look very different from what we have seen so far.

Articles in the Series

Natural Variability and Chaos – One – Introduction

Natural Variability and Chaos – Two – Lorenz 1963

Natural Variability and Chaos – Three – Attribution & Fingerprints

Natural Variability and Chaos – Four – The Thirty Year Myth

Natural Variability and Chaos – Five – Why Should Observations match Models?

Natural Variability and Chaos – Six – El Nino

Natural Variability and Chaos – Seven – Attribution & Fingerprints Or Shadows?

Natural Variability and Chaos – Eight – Abrupt Change

References

Are historical records sufficient to constrain ENSO simulations? Andrew T. Wittenberg, GRL (2009) – free paper

GFDL’s CM2 Global Coupled Climate Models. Part I: Formulation and Simulation Characteristics, Delworth et al, Journal of Climate, 2006 – free paper

Notes

Note 1: The paper referenced for the GFDL model is GFDL’s CM2 Global Coupled Climate Models. Part I: Formulation and Simulation Characteristics, Delworth et al, 2006:

The formulation and simulation characteristics of two new global coupled climate models developed at NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL) are described.

The models were designed to simulate atmospheric and oceanic climate and variability from the diurnal time scale through multicentury climate change, given our computational constraints. In particular, an important goal was to use the same model for both experimental seasonal to interannual forecasting and the study of multicentury global climate change, and this goal has been achieved.

Two versions of the coupled model are described, called CM2.0 and CM2.1. The versions differ primarily in the dynamical core used in the atmospheric component, along with the cloud tuning and some details of the land and ocean components. For both coupled models, the resolution of the land and atmospheric components is 2° latitude x 2.5° longitude; the atmospheric model has 24 vertical levels.

The ocean resolution is 1° in latitude and longitude, with meridional resolution equatorward of 30° becoming progressively finer, such that the meridional resolution is 1/3° at the equator. There are 50 vertical levels in the ocean, with 22 evenly spaced levels within the top 220 m. The ocean component has poles over North America and Eurasia to avoid polar filtering. Neither coupled model employs flux adjustments.

The control simulations have stable, realistic climates when integrated over multiple centuries. Both models have simulations of ENSO that are substantially improved relative to previous GFDL coupled models. The CM2.0 model has been further evaluated as an ENSO forecast model and has good skill (CM2.1 has not been evaluated as an ENSO forecast model). Generally reduced temperature and salinity biases exist in CM2.1 relative to CM2.0. These reductions are associated with 1) improved simulations of surface wind stress in CM2.1 and associated changes in oceanic gyre circulations; 2) changes in cloud tuning and the land model, both of which act to increase the net surface shortwave radiation in CM2.1, thereby reducing an overall cold bias present in CM2.0; and 3) a reduction of ocean lateral viscosity in the extratropics in CM2.1, which reduces sea ice biases in the North Atlantic.

Both models have been used to conduct a suite of climate change simulations for the 2007 Intergovernmental Panel on Climate Change (IPCC) assessment report and are able to simulate the main features of the observed warming of the twentieth century. The climate sensitivities of the CM2.0 and CM2.1 models are 2.9 and 3.4 K, respectively. These sensitivities are defined by coupling the atmospheric components of CM2.0 and CM2.1 to a slab ocean model and allowing the model to come into equilibrium with a doubling of atmospheric CO2. The output from a suite of integrations conducted with these models is freely available online (see http://nomads.gfdl.noaa.gov/).

There’s a brief description of the newer model version CM3.0 on the GFDL page.

Read Full Post »

In Part Four – The Thirty Year Myth we looked at the idea of climate as the “long term statistics” of weather. In one convention, climate – the statistics of weather – has been arbitrarily defined over a 30-year period. In certain chaotic systems, “long term statistics” might be repeatable and reliable, but “long term” can’t be arbitrarily defined for convenience. Climate, when defined as the predictable statistics of weather, might just as well be 100,000 years (note 1).

I’ve had a question about the current approach to climate models for some time and found it difficult to articulate. In reading Broad range of 2050 warming from an observationally constrained large climate model ensemble, Daniel Rowlands et al, Nature (2012) I found an explanation that helps me clarify my question.

This paper by Rowlands et al is similar in approach to that of Stainforth et al 2005 – the idea of much larger ensembles of climate models. The Stainforth paper was discussed in the comments of Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes.

For new readers who want to understand a bit more about ensembles of models – take a look at Ensemble Forecasting.

Weather Forecasting

The basic idea behind ensembles for weather forecasts is that we have uncertainty about:

  • the initial conditions – because observations are not perfect
  • parameters in our model – because our understanding of the physics of weather is not perfect

So multiple simulations are run and the frequency of occurrence of, say, a severe storm tells us the probability that the severe storm will occur.

Given the short term nature of weather forecasts we can compare the frequency of occurrence of particular events with the % probability that our ensemble produced.

Let’s take an example to make it clear. Suppose the ensemble prediction of a severe storm in a certain area is 5%. The severe storm occurs. What can we make of the accuracy of our prediction? Well, we can’t deduce anything from that event.

Why? Because we only had one occurrence.

Out of 1000 future forecasts, the “5%ers” are going to occur 50 times – if we are right on the money with our probabilistic forecast. We need a lot of forecasts to be compared with a lot of results. Then we might find that 5%ers actually occur 20% of the time. Or only 1% of the time. Armed with this information we can a) try to improve our model because we know the deficiencies, and b) temper our ensemble forecast with our knowledge of how well it has historically predicted the 5%, 10%, 90% chances of occurrence.

This is exactly what currently happens with numerical weather prediction.

And if instead we run one simulation with our “best estimate” of initial conditions and parameters the results are not as good as the results from the ensemble.
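A toy verification along these lines is easy to write down. The code below simulates 1000 occasions on which a 5% chance of a storm was forecast, once for a well-calibrated system and once for an overconfident one; it is only an illustration of the bookkeeping, not any operational centre’s verification scheme.

```python
# Sketch: check how often "5% probability" events actually occur,
# for a calibrated and an overconfident forecast system. Synthetic data.
import numpy as np

rng = np.random.default_rng(7)
n_forecasts = 1000   # 1000 occasions where a 5% chance of a storm was forecast

storms_calibrated = rng.random(n_forecasts) < 0.05     # events really occur 5% of the time
storms_overconfident = rng.random(n_forecasts) < 0.20  # "5%" events occur 20% of the time

print("observed frequency, calibrated system:   ", storms_calibrated.mean())    # ~0.05
print("observed frequency, overconfident system:", storms_overconfident.mean()) # ~0.20
```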

Climate Forecasting

The idea behind ensembles of climate forecasts is subtly different. Initial conditions are no help with predicting the long term statistics (aka “climate”). But we still have a lot of uncertainty over model physics and parameterizations. So we run ensembles of simulations with slightly different physics/parameterizations (see note 2).

Assuming our model is a decent representation of climate, there are three important points:

  1. we need to know the timescale of “predictable statistics”, given constant “external” forcings (e.g. anthropogenic GHG changes)
  2. we need to cover the real range of possible parameterizations
  3. the results we get from ensembles can, at best, only ever give us the probabilities of outcomes over a given time period

Item 1 was discussed in the last article and I have not been able to find any discussion of this timescale in climate science papers (that doesn’t mean there aren’t any; hopefully someone can point me to a discussion of this topic).

Item 2 is something that I believe climate scientists are very interested in. The limitation has been, and still is, the computing power required.

Item 3 is what I want to discuss in this article, around the paper by Rowlands et al.

Rowlands et al 2012

In the latest generation of coupled atmosphere–ocean general circulation models (AOGCMs) contributing to the Coupled Model Intercomparison Project phase 3 (CMIP-3), uncertainties in key properties controlling the twenty-first century response to sustained anthropogenic greenhouse-gas forcing were not fully sampled, partially owing to a correlation between climate sensitivity and aerosol forcing, a tendency to overestimate ocean heat uptake and compensation between short-wave and long-wave feedbacks.

This complicates the interpretation of the ensemble spread as a direct uncertainty estimate, a point reflected in the fact that the ‘likely’ (>66% probability) uncertainty range on the transient response was explicitly subjectively assessed as −40% to +60% of the CMIP-3 ensemble mean for global-mean temperature in 2100, in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4). The IPCC expert range was supported by a range of sources, including studies using pattern scaling, ensembles of intermediate-complexity models, and estimates of the strength of carbon-cycle feedbacks. From this evidence it is clear that the CMIP-3 ensemble, which represents a valuable expression of plausible responses consistent with our current ability to explore model structural uncertainties, fails to reflect the full range of uncertainties indicated by expert opinion and other methods..

..Perturbed-physics ensembles offer a systematic approach to quantify uncertainty in models of the climate system response to external forcing. Here we investigate uncertainties in the twenty-first century transient response in a multi-thousand-member ensemble of transient AOGCM simulations from 1920 to 2080 using HadCM3L, a version of the UK Met Office Unified Model, as part of the climateprediction.net British Broadcasting Corporation (BBC) climate change experiment (CCE). We generate ensemble members by perturbing the physics in the atmosphere, ocean and sulphur cycle components, with transient simulations driven by a set of natural forcing scenarios and the SRES A1B emissions scenario, and also control simulations to account for unforced model drifts.

[Emphasis added]. So this project runs a much larger ensemble than the CMIP3 models produced for AR4.

Figure 1 shows the evolution of global-mean surface temperatures in the ensemble (relative to 1961–1990), each coloured by the goodness-of-fit to observations of recent surface temperature changes, as detailed below.

From Rowlands et al 2012

The raw ensemble range (1.1–4.2 K around 2050), primarily driven by uncertainties in climate sensitivity (Supplementary Information), is potentially misleading because many ensemble members have an unrealistic response to the forcing over the past 50 years.

[Emphasis added]

And later in the paper:

..On the assumption that models that simulate past warming realistically are our best candidates for making estimates of the future..

So here’s my question:

If model simulations give us probabilistic forecasts of future climate, why are climate model simulations “compared” with the average of the last few years of “weather” – with those that don’t match up well rejected or devalued?

It seems like an obvious thing to do, of course. But current averaged weather might be in the top 10% or the bottom 10% of probabilities. We have no way of knowing.

Let’s say that the current 10-year average of GMST = 13.7ºC (I haven’t looked up the right value).

Suppose for the given “external” conditions (solar output and latitudinal distribution, GHG concentration) the “climate” – i.e., the real long term statistics of weather – has an average of 14.5ºC, with a standard deviation for any 10-year period of 0.5ºC. That is, 95% of 10-year periods would lie inside 13.5 – 15.5ºC (2 std deviations).

If we run a lot of simulations (and they truly represent the climate) then of course we expect 5% to be outside 13.5 – 15.5ºC. If we reject that 5% as being “unrealistic of current climate”, we’ve arbitrarily and incorrectly reduced the spread of our ensemble.

If we assume that “current averaged weather” – at 13.7ºC – represents reality then we might bias our results even more, depending on the standard deviation that we calculate or assume. We might accept outliers of 13.0ºC because they are closer to our observable and reject good simulations of 15.0ºC because they are more than two standard deviations from our observable (note 3).
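A toy calculation makes the worry concrete. Using the made-up numbers above (true climate mean 14.5ºC, 10-year standard deviation 0.5ºC, one observed realization of 13.7ºC), screening an honest ensemble against the single observed value both shifts and narrows it; this is only an illustration of the statistical point, not a description of how any particular study screened its ensemble.

```python
# Toy illustration: reject ensemble members more than 2 standard deviations
# from a single observed realization and see what happens to the spread.
# All numbers are the made-up values from the text.
import numpy as np

rng = np.random.default_rng(8)
true_mean, sd = 14.5, 0.5
observed = 13.7                    # one realization of the 10-year average

ensemble = rng.normal(true_mean, sd, size=10000)
kept = ensemble[np.abs(ensemble - observed) <= 2 * sd]

print("full ensemble:   mean %.2f, spread %.2f" % (ensemble.mean(), ensemble.std()))
print("after screening: mean %.2f, spread %.2f" % (kept.mean(), kept.std()))
# Screening against one (possibly low) realization pulls the mean down and
# narrows the spread, even though the rejected members were perfectly valid.
```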

The whole point of running an ensemble of simulations is to find out what the spread is, given our current understanding of climate physics.

Let me give another example. One theory for the initiation of El Nino is that it is essentially a random process during certain favorable conditions. Now we might have a model that reproduced El Nino starting in 1998 and 10 models that reproduced El Nino starting in other years. Do we promote the El Nino model that “predicted in retrospect” 1998 and demote/reject the others? No. We might actually be rejecting better models. We would need to look at the statistics of lots of El Ninos to decide.

Kiehl 2007 & Knutti 2008

Here are a couple of papers that don’t articulate the point of view of this article – however, they do comment on the uncertainties in parameter space from a different, yet related, perspective.

First, Kiehl 2007:

Methods of testing these models with observations form an important part of model development and application. Over the past decade one such test is our ability to simulate the global anomaly in surface air temperature for the 20th century.. Climate model simulations of the 20th century can be compared in terms of their ability to reproduce this temperature record. This is now an established necessary test for global climate models.

Of course this is not a sufficient test of these models and other metrics should be used to test models..

..A review of the published literature on climate simulations of the 20th century indicates that a large number of fully coupled three dimensional climate models are able to simulate the global surface air temperature anomaly with a good degree of accuracy [Houghton et al., 2001]. For example all models simulate a global warming of 0.5 to 0.7°C over this time period to within 25% accuracy. This is viewed as a reassuring confirmation that models to first order capture the behavior of the physical climate system..

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5°C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.

Second, Why are climate models reproducing the observed global surface warming so well? Knutti (2008):

The agreement between the CMIP3 simulated and observed 20th century warming is indeed remarkable. But do the current models simulate the right magnitude of warming for the right reasons? How much does the agreement really tell us?

Kiehl [2007] recently showed a correlation of climate sensitivity and total radiative forcing across an older set of models, suggesting that models with high sensitivity (strong feedbacks) avoid simulating too much warming by using a small net forcing (large negative aerosol forcing), and models with weak feedbacks can still simulate the observed warming with a larger forcing (weak aerosol forcing).

Climate sensitivity, aerosol forcing and ocean diffusivity are all uncertain and relatively poorly constrained from the observed surface warming and ocean heat uptake [e.g., Knutti et al., 2002; Forest et al., 2006]. Models differ because of their underlying assumptions and parameterizations, and it is plausible that choices are made based on the model’s ability to simulate observed trends..

..Models, therefore, simulate similar warming for different reasons, and it is unlikely that this effect would appear randomly. While it is impossible to know what decisions are made in the development process of each model, it seems plausible that choices are made based on agreement with observations as to what parameterizations are used, what forcing datasets are selected, or whether an uncertain forcing (e.g., mineral dust, land use change) or feedback (indirect aerosol effect) is incorporated or not.

..Second, the question is whether we should be worried about the correlation between total forcing and climate sensitivity. Schwartz et al. [2007] recently suggested that ‘‘the narrow range of modelled temperatures [in the CMIP3 models over the 20th century] gives a false sense of the certainty that has been achieved’’. Because of the good agreement between models and observations and compensating effects between climate sensitivity and radiative forcing (as shown here and by Kiehl [2007]) Schwartz et al. [2007] concluded that the CMIP3 models used in the most recent Intergovernmental Panel on Climate Change (IPCC) report [IPCC, 2007] ‘‘may give a false sense of their predictive capabilities’’.

Here I offer a different interpretation of the CMIP3 climate models. They constitute an ‘ensemble of opportunity’, they share biases, and probably do not sample the full range of uncertainty [Tebaldi and Knutti, 2007; Knutti et al., 2008]. The model development process is always open to influence, conscious or unconscious, from the participants’ knowledge of the observed changes. It is therefore neither surprising nor problematic that the simulated and observed trends in global temperature are in good agreement.

Conclusion

The idea that climate models should all reproduce global temperature anomalies over a 10-year or 20-year or 30-year time period presupposes that we know:

a) climate, as the long term statistics of weather, can be reliably obtained over these time periods. Remember that with a simple chaotic system where we have “deity-like powers” we can simulate the results and find the time period over which the statistics are reliable.

or

b) climate, as the 10-year (or 20-year or 30-year) statistics of weather is tightly constrained within a small range, to a high level of confidence, and therefore we can reject climate model simulations that fall outside this range.

Given that Rowlands et al 2012 is attempting to better sample climate uncertainty with a larger ensemble, it’s clear that this answer is not known in advance.

There are a lot of uncertainties in climate simulation. Constraining models to match the past may be under-sampling the actual range of climate variability.

Models are not reality. But if we accept that climate simulation is, at best, a probabilistic endeavor, then we must sample what the models produce, rather than throwing out results that don’t match the last 100 years of recorded temperature history.

Articles in the Series

Natural Variability and Chaos – One – Introduction

Natural Variability and Chaos – Two – Lorenz 1963

Natural Variability and Chaos – Three – Attribution & Fingerprints

Natural Variability and Chaos – Four – The Thirty Year Myth

Natural Variability and Chaos – Five – Why Should Observations match Models?

Natural Variability and Chaos – Six – El Nino

Natural Variability and Chaos – Seven – Attribution & Fingerprints Or Shadows?

Natural Variability and Chaos – Eight – Abrupt Change

References

Broad range of 2050 warming from an observationally constrained large climate model ensemble, Daniel Rowlands et al, Nature (2012) – free paper

Uncertainty in predictions of the climate response to rising levels of greenhouse gases, Stainforth et al, Nature (2005) – free paper

Why are climate models reproducing the observed global surface warming so well? Reto Knutti, GRL (2008) – free paper

Twentieth century climate model response and climate sensitivity, Jeffrey T Kiehl, GRL (2007) – free paper

Notes

Note 1: We are using the ideas that have been learnt from simple chaotic systems, like the Lorenz 1963 model. There is discussion of this in Part One and Part Two of this series. As some commenters have pointed out that doesn’t mean the climate works in the same way as these simple systems, it is much more complex.

The starting point is that weather is unpredictable. With modern numerical weather prediction (NWP) on current supercomputers we can get good forecasts 1 week ahead. But beyond that we might as well use the average value for that month in that location, measured over the last decade. It’s going to be better than a forecast from NWP.

The idea behind climate prediction is that even though picking the weather 8 weeks from now is a no-hoper, what we have learnt from simple chaotic systems is that the statistics of many chaotic systems can be reliably predicted.

Note 2: Models are run with different initial conditions as well. My only way of understanding this from a theoretical point of view (i.e., from anything other than a “practical” or “this is how we have always done it” approach) is to see different initial conditions as comparable to one model run over a much longer period.

That is, if climate is not an “initial value problem”, why are initial values changed in each ensemble member to assist climate model output? Running 10 simulations of the same model for 100 years, each with different initial conditions, should be equivalent to running one simulation for 1,000 years.

Well, that is not necessarily true because that 1,000 years might not sample the complete “attractor space”, which is the same point discussed in the last article.

Note 3: Models are usually compared to observations via temperature anomalies rather than via actual temperatures, see Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes. The example was given for simplicity.

Read Full Post »

In Part Three we looked at attribution in the early work on this topic by Hegerl et al 1996. I started to write Part Four as the follow up on Attribution as explained in the 5th IPCC report (AR5), but got caught up in the many volumes of AR5.

And instead for this article I decided to focus on what might seem like an obscure point. I hope readers stay with me because it is important.

Here is a graphic from chapter 11 of IPCC AR5:

From IPCC AR5 Chapter 11

Figure 1

And in the introduction, chapter 1:

Climate in a narrow sense is usually defined as the average weather, or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period of time ranging from months to thousands or millions of years. The relevant quantities are most often surface variables such as temperature, precipitation and wind.

Classically the period for averaging these variables is 30 years, as defined by the World Meteorological Organization.

Climate in a wider sense also includes not just the mean conditions, but also the associated statistics (frequency, magnitude, persistence, trends, etc.), often combining parameters to describe phenomena such as droughts. Climate change refers to a change in the state of the climate that can be identified (e.g., by using statistical tests) by changes in the mean and/or the variability of its properties, and that persists for an extended period, typically decades or longer.

[Emphasis added].

Weather is an Initial Value Problem, Climate is a Boundary Value Problem

The idea is fundamental, the implementation is problematic.

As explained in Natural Variability and Chaos – Two – Lorenz 1963, there are two key points about a chaotic system:

  1. With even a minute uncertainty in the initial starting condition, the predictability of future states is very limited
  2. Over a long time period the statistics of the system are well-defined

(Being technical, the statistics are well-defined in a transitive system).

So in essence, we can’t predict the exact state of the future – from the current conditions – beyond a certain timescale which might be quite small. In fact, in current weather prediction this time period is about one week.

After a week we might as well say either “the weather on that day will be the same as now” or “the weather on that day will be the climatological average” – and either of these will be better than trying to predict the weather based on the initial state.

No one disagrees on this first point.

In current climate science and meteorology the term used is the skill of the forecast. Skill means, not how good is the forecast, but how much better is it than a naive approach like, “it’s July in New York City so the maximum air temperature today will be 28ºC”.

What happens in practice, as can be seen in the simple Lorenz system shown in Part Two, is a tiny uncertainty about the starting condition gets amplified. Two almost identical starting conditions will diverge rapidly – the “butterfly effect”. Eventually the two trajectories are no more alike than either one is to a state chosen at random from the future.

The wide divergence doesn’t mean that the future state can be anything. Here’s an example from the simple Lorenz system for three slightly different initial conditions:

Lorenz 1963 model: x vs time over 5,000 seconds (zoomed)

Figure 2

We can see that the three conditions that looked identical for the first 20 seconds (see figure 2 in Part Two) have diverged. The values are bounded but at any given time we can’t predict what the value will be.

On the second point – the statistics of the system – there is a tiny hiccup.

But first let’s review what is agreed upon. Climate is the statistics of weather. Weather is unpredictable more than a week ahead. Climate, as the statistics of weather, might be predictable. That is, just because weather is unpredictable, it doesn’t mean (or prove) that climate is also unpredictable.

This is what we find with simple chaotic systems.

So in the endeavor of climate modeling the best we can hope for is a probabilistic forecast. We have to run “a lot” of simulations and review the statistics of the parameter we are trying to measure.

To give a concrete example, we might determine from model simulations that the mean sea surface temperature in the western Pacific (between a certain latitude and longitude) in July has a mean of 29ºC with a standard deviation of 0.5ºC, while for a certain part of the north Atlantic it is 6ºC with a standard deviation of 3ºC. In the first case the spread of results tells us – if we are confident in our predictions – that we know the western Pacific SST quite accurately, but the north Atlantic SST has a lot of uncertainty. We can’t do anything about the model spread. In the end, the statistics are knowable (in theory), but the actual value on a given day or month or year is not.
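As a trivial illustration (using the made-up numbers above – 29ºC ± 0.5ºC and 6ºC ± 3ºC), this is all a probabilistic forecast amounts to: summarizing the spread of many model runs for each region, rather than quoting a single value:

```python
# A tiny illustration of turning an ensemble into a probabilistic statement,
# using the made-up numbers above (Pacific: 29 +/- 0.5 C, N. Atlantic: 6 +/- 3 C).
import numpy as np

rng = np.random.default_rng(3)
regions = {"western Pacific SST": (29.0, 0.5), "north Atlantic SST": (6.0, 3.0)}

for name, (mean, sd) in regions.items():
    ensemble = rng.normal(mean, sd, size=1000)       # 1000 hypothetical model runs
    lo, hi = np.percentile(ensemble, [2.5, 97.5])    # central 95% of the spread
    print(f"{name}: mean={ensemble.mean():.1f} C, 95% range {lo:.1f} to {hi:.1f} C")
```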

Now onto the hiccup.

With “simple” chaotic systems that we can perfectly model (note 1) we don’t know in advance the timescale of “predictable statistics”. We have to run lots of simulations over long time periods until the statistics converge on the same result. If we have parameter uncertainty (see Ensemble Forecasting) this means we also have to run simulations over the spread of parameters.

Here’s my suggested alternative of the initial value vs boundary value problem:

Suggested replacement for AR5, Box 11.1, Figure 2

Figure 3

So one body made an ad hoc definition of climate as the 30-year average of weather.

If this definition is accepted, and if 30 years is not long enough for the statistics of weather to converge, then “climate” is not a “boundary value problem” at all. Climate becomes an initial value problem – and therefore a massive problem, given our ability to forecast only about one week ahead.

Suppose, equally reasonably, that the statistics of weather (=climate), given constant forcing (note 2), are predictable over a 10,000 year period.

In that case we can be confident that, with near perfect models, we have the ability to be confident about the averages, standard deviations, skews, etc of the temperature at various locations on the globe over a 10,000 year period.

Conclusion

The fact that chaotic systems exhibit certain behavior doesn’t mean that 30-year statistics of weather can be reliably predicted.

30-year statistics might be just as dependent on the initial state as the weather three weeks from today.

Articles in the Series

Natural Variability and Chaos – One – Introduction

Natural Variability and Chaos – Two – Lorenz 1963

Natural Variability and Chaos – Three – Attribution & Fingerprints

Natural Variability and Chaos – Four – The Thirty Year Myth

Natural Variability and Chaos – Five – Why Should Observations match Models?

Natural Variability and Chaos – Six – El Nino

Natural Variability and Chaos – Seven – Attribution & Fingerprints Or Shadows?

Natural Variability and Chaos – Eight – Abrupt Change

Notes

Note 1: The climate system is obviously imperfectly modeled by GCMs, and this will always be the case. The advantage of a simple model is we can state that the model is a perfect representation of the system – it is just a definition for convenience. It allows us to evaluate how slight changes in initial conditions or parameters affect our ability to predict the future.

The IPCC report also has continual reminders that the model is not reality, for example, chapter 11, p. 982:

For the remaining projections in this chapter the spread among the CMIP5 models is used as a simple, but crude, measure of uncertainty. The extent of agreement between the CMIP5 projections provides rough guidance about the likelihood of a particular outcome. But — as partly illustrated by the discussion above — it must be kept firmly in mind that the real world could fall outside of the range spanned by these particular models.

[Emphasis added].

Chapter 1, p.138:

Model spread is often used as a measure of climate response uncertainty, but such a measure is crude as it takes no account of factors such as model quality (Chapter 9) or model independence (e.g., Masson and Knutti, 2011; Pennell and Reichler, 2011), and not all variables of interest are adequately simulated by global climate models..

..Climate varies naturally on nearly all time and space scales, and quantifying precisely the nature of this variability is challenging, and is characterized by considerable uncertainty.

I haven’t yet been able to determine how these firmly noted and challenging uncertainties have been factored into the quantification of 95-100%, 99-100%, etc, in the various chapters of the IPCC report.

Note 2:  There are some complications with defining exactly what system is under review. For example, do we take the current solar output, current obliquity, precession and eccentricity as fixed? If so, then any statistics will be calculated for a condition that will anyway be changing. Alternatively, we can take these values as changing inputs in so far as we know the changes – which is true for obliquity, precession and eccentricity but not for solar output.

The details don’t really alter the main point of this article.

Read Full Post »

I’ve been somewhat sidetracked on this series, mostly by starting up a company and having no time, but also by the voluminous distractions of IPCC AR5. The subject of attribution could be a series by itself but as I started the series Natural Variability and Chaos it makes sense to weave it into that story.

In Part One and Part Two we had a look at chaotic systems and what that might mean for weather and climate. I was planning to develop those ideas a lot more before discussing attribution, but anyway..

AR5, Chapter 10: Attribution is 85 pages on the idea that the changes over the last 50 or 100 years in mean surface temperature – and also some other climate variables – can be attributed primarily to anthropogenic greenhouse gases.

The technical side of the discussion fascinated me, but has a large statistical component. I’m a rookie with statistics, and maybe because of this, I’m often suspicious about statistical arguments.

Digression on Statistics

The foundation of a lot of statistics is the idea of independent events. For example, spin a roulette wheel and you get a number between 0 and 36 and a color that is red, black – or if you’ve landed on a zero, neither.

The statistics are simple – each spin of the roulette wheel is an independent event – that is, it has no relationship with the last spin of the roulette wheel. So, looking ahead, what is the chance of getting 5 two times in a row? The answer (with a 0 only and no “00” as found in some roulette tables) is 1/37 x 1/37 = 0.073%.

However, after you have spun the roulette wheel and got a 5, what is the chance of a second 5? It’s now just 1/37 = 2.7%. The past has no impact on the future statistics. Most of real life doesn’t correspond particularly well to this idea, apart from playing games of chance like poker and so on.

I was in the gym the other day and although I try and drown it out with music from my iPhone, the Travesty (aka “the News”) was on some of the screens in the gym – with text of the “high points” on the screen aimed at people trying to drown out the annoying travestyreaders. There was a report that a new study had found that autism was caused by “Cause X” – I have blanked it out to avoid any unpleasant feeling for parents of autistic kids – or people planning on having kids who might worry about “Cause X”.

It did get me thinking – if you have, let’s say, 10,000 potential candidates for causing autism, and you test each one at the 95% confidence level (rejecting the hypothesis of “no effect” whenever the chance of seeing the association by luck alone is below 5%), what is the outcome? Well, if there is a random spread of autism among the population with no actual cause of this kind (let’s say it is caused by a random genetic mutation with no link to any parental behavior, parental genetics or the environment) then you will expect to find about 500 “statistically significant” factors for autism simply by testing at the 95% level. That’s 500, when none of them are actually the real cause. It’s just chance. Plenty of fodder for pundits though.
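If you want to see this in action, here is a rough Python sketch. The numbers are hypothetical – 10,000 candidate “causes”, 200 cases and 200 controls, and no real effect anywhere:

```python
# A rough sketch of the multiple-testing point above: test 10,000 candidate
# "causes" that in truth have no effect, at the 5% significance level, and
# count how many come out "statistically significant" purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_candidates, n_cases, n_controls = 10_000, 200, 200

false_positives = 0
for _ in range(n_candidates):
    # exposure to this candidate cause is random and identical in both groups
    cases = rng.normal(0, 1, n_cases)
    controls = rng.normal(0, 1, n_controls)
    _, p = stats.ttest_ind(cases, controls)
    if p < 0.05:
        false_positives += 1

print(f"'significant' factors found: {false_positives} (expected ~500)")
```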

That’s one problem with statistics – the answer you get unavoidably depends on your frame of reference.

The questions I have about attribution are unrelated to this specific point about statistics, but there are statistical arguments in the attribution field that seem fatally flawed. Luckily I’m a statistical novice so no doubt readers will set me straight.

On another unrelated point about statistical independence, only slightly more relevant to the question at hand, Pirtle, Meyer & Hamilton (2010) said:

In short, we note that GCMs are commonly treated as independent from one another, when in fact there are many reasons to believe otherwise. The assumption of independence leads to increased confidence in the ‘‘robustness’’ of model results when multiple models agree. But GCM independence has not been evaluated by model builders and others in the climate science community. Until now the climate science literature has given only passing attention to this problem, and the field has not developed systematic approaches for assessing model independence.

.. end of digression

Attribution History

In my efforts to understand Chapter 10 of AR5 I followed up on a lot of references and ended up winding my way back to Hegerl et al 1996.

Gabriele Hegerl is one of the lead authors of Chapter 10 of AR5, was one of the two coordinating lead authors of the Attribution chapter of AR4, and one of four lead authors on the relevant chapter of AR3 – and of course has a lot of papers published on this subject.

As is often the case, I find that to understand a subject you have to start with a focus on the earlier papers because the later work doesn’t make a whole lot of sense without this background.

This paper by Hegerl and her colleagues uses the work of one of the co-authors, Klaus Hasselmann – his 1993 paper “Optimal fingerprints for detection of time dependent climate change”.

Fingerprints, by the way, seems like a marketing term. Fingerprints evokes the idea that you can readily demonstrate that John G. Doe of 137 Smith St, Smithsville was at least present at the crime scene and there is no possibility of confusing his fingerprints with John G. Dode who lives next door even though their mothers could barely tell them apart.

This kind of attribution is more in the realm of “was it the 6ft bald white guy or the 5’5″ black guy”?

Well, let’s set aside questions of marketing and look at the details.

Detecting GHG Climate Change with Optimal Fingerprint Methods in 1996

The essence of the method is to compare observations (measurements) with:

  • model runs with GHG forcing
  • model runs with “other anthropogenic” and natural forcings
  • model runs with internal variability only

Then based on the fit you can distinguish one from the other. The statistical basis is covered in detail in Hasselmann 1993 and more briefly in this paper: Hegerl et al 1996 – both papers are linked below in the References.

At this point I make another digression.. as regular readers know I am fully convinced that the increases in CO2, CH4 and other GHGs over the past 100 years or more can be very well quantified into “radiative forcing” and am 100% in agreement with the IPCCs summary of the work of atmospheric physics over the last 50 years on this topic. That is, the increases in GHGs have led to something like a “radiative forcing” of 2.8 W/m² [corrected, thanks to niclewis].

And there isn’t any scientific basis for disputing this “pre-feedback” value. It’s simply the result of basic radiative transfer theory, well-established, and well-demonstrated in observations both in the lab and through the atmosphere. People confused about this topic are confused about science basics and comments to the contrary may be allowed or more likely will be capriciously removed due to the fact that there have been more than 50 posts on this topic (post your comments on those instead). See The “Greenhouse” Effect Explained in Simple Terms and On Uses of A 4 x 2: Arrhenius, The Last 15 years of Temperature History and Other Parodies.

Therefore, it’s “very likely” that the increases in GHGs over the last 100 years have contributed significantly to the temperature changes that we have seen.

To say otherwise – and still accept physics basics – means believing that the radiative forcing has been “mostly” cancelled out by feedbacks while internal variability has been amplified by feedbacks to cause a significant temperature change.

Yet this work on attribution seems to be fundamentally flawed.

Here was the conclusion:

We find that the latest observed 30-year trend pattern of near-surface temperature change can be distinguished from all estimates of natural climate variability with an estimated risk of less than 2.5% if the optimal fingerprint is applied.

With the caveats, that to me, eliminated the statistical basis of the previous statement:

The greatest uncertainty of our analysis is the estimate of the natural variability noise level..

..The shortcomings of the present estimates of natural climate variability cannot be readily overcome. However, the next generation of models should provide us with better simulations of natural variability. In the future, more observations and paleoclimatic information should yield more insight into natural variability, especially on longer timescales. This would enhance the credibility of the statistical test.

Earlier in the paper the authors said:

..However, it is generally believed that models reproduce the space-time statistics of natural variability on large space and long time scales (months to years) reasonably realistic. The verification of variability of CGMCs [coupled GCMs] on decadal to century timescales is relatively short, while paleoclimatic data are sparce and often of limited quality.

..We assume that the detection variable is Gaussian with zero mean, that is, that there is no long-term nonstationarity in the natural variability.

[Emphasis added].

The climate models used would be considered rudimentary by today’s standards. Three different coupled atmosphere-ocean GCMs were used. However, each of them required “flux corrections”.

This method was pretty much the standard until the post 2000 era. The climate models “drifted”, unless, in deity-like form, you topped up (or took out) heat and momentum from various grid boxes.

That is, the models themselves struggled (in 1996) to represent climate unless the climate modeler knew, and corrected for, the long term “drift” in the model.

Conclusion

In the next article we will look at more recent work in attribution and fingerprints and see whether the field has developed.

But in this article we see that the conclusion of an attribution study in 1996 was that there was only a “2.5% chance” that recent temperature changes could be attributed to natural variability. At the same time, the question of how accurate the models were in simulating natural variability was noted but never quantified. And the models were all “flux corrected”. This means that some aspects of the long term statistics of climate were considered to be known – in advance.

So I find it difficult to accept any statistical significance in the study at all.

If the finding instead was introduced with the caveat “assuming the accuracy of our estimates of long term natural variability of climate is correct..” then I would probably be quite happy with the finding. And that question is the key.

The question should be:

What is the likelihood that climate models accurately represent the long-term statistics of natural variability?

  • Virtually certain
  • Very likely
  • Likely
  • About as likely as not
  • Unlikely
  • Very unlikely
  • Exceptionally unlikely

So far I have yet to run across a study that poses this question.

Articles in the Series

Natural Variability and Chaos – One – Introduction

Natural Variability and Chaos – Two – Lorenz 1963

Natural Variability and Chaos – Three – Attribution & Fingerprints

Natural Variability and Chaos – Four – The Thirty Year Myth

Natural Variability and Chaos – Five – Why Should Observations match Models?

Natural Variability and Chaos – Six – El Nino

Natural Variability and Chaos – Seven – Attribution & Fingerprints Or Shadows?

Natural Variability and Chaos – Eight – Abrupt Change

References

Bindoff, N.L., et al, 2013: Detection and Attribution of Climate Change: from Global to Regional. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change

Detecting greenhouse gas induced climate change with an optimal fingerprint method, Hegerl, von Storch, Hasselmann, Santer, Cubasch & Jones, Journal of Climate (1996)

What does it mean when climate models agree? A case for assessing independence among general circulation models, Zachary Pirtle, Ryan Meyer & Andrew Hamilton, Environ. Sci. Policy (2010)

Optimal fingerprints for detection of time dependent climate change, Klaus Hasselmann, Journal of Climate (1993)

Read Full Post »

There are many classes of systems but in the climate blogosphere world two ideas about climate seem to be repeated the most.

In camp A:

We can’t forecast the weather two weeks ahead so what chance have we got of forecasting climate 100 years from now.

And in camp B:

Weather is an initial value problem, whereas climate is a boundary value problem. On the timescale of decades, every planetary object has a mean temperature mainly given by the power of its star according to Stefan-Boltzmann’s law combined with the greenhouse effect. If the sources and sinks of CO2 were chaotic and could quickly release and sequester large fractions of gas perhaps the climate could be chaotic. Weather is chaotic, climate is not.

Of course, like any complex debate, simplified statements don’t really help. So this article kicks off with some introductory basics.

Many inhabitants of the climate blogosphere already know the answer to this subject, and with much conviction. A reminder for new readers that on this blog opinions are not so interesting, although occasionally entertaining. So instead, try to explain what evidence there is for your opinion. And, as suggested in About this Blog:

And sometimes others put forward points of view or “facts” that are obviously wrong and easily refuted.  Pretend for a moment that they aren’t part of an evil empire of disinformation and think how best to explain the error in an inoffensive way.

Pendulums

The equation for a simple pendulum is “non-linear”, although there is a simplified version of the equation, often used in introductions, which is linear. However, the number of variables involved is only two:

  • angle
  • speed

and this isn’t enough to create a “chaotic” system.

If we have a double pendulum, one pendulum attached at the bottom of another pendulum, we do get a chaotic system. There are some nice visual simulations around, which St. Google might help interested readers find.

If we have a forced damped pendulum like this one:


Figure 1 – the blue arrows indicate that the point O is being driven up and down by an external force

-we also get a chaotic system.

What am I talking about? What is linear & non-linear? What is a “chaotic system”?

Digression on Non-Linearity for Non-Technical People

Common experience teaches us about linearity. If I pick up an apple in the supermarket it weighs about 0.15 kg or 150 grams (also known in some countries as “about 5 ounces”). If I take 10 apples the collection weighs 1.5 kg. That’s pretty simple stuff. Most of our real world experience follows this linearity and so we expect it.

On the other hand, if I were near a very cold black surface held at 170K (-103ºC) and measured the radiation emitted, it would be 47 W/m². If we then double the temperature of this surface to 340K (67ºC), what would I measure? 94 W/m²? Seems reasonable – double the absolute temperature and get double the radiation.. But it’s not correct.

The right answer is 758 W/m², which is 16x the amount. Surprising, but most actual physics, engineering and chemistry is like this. Double a quantity and you don’t get double the result.
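For anyone who wants to check those numbers, they come straight from the Stefan-Boltzmann law for a blackbody, j = σT⁴ – which is also why doubling the absolute temperature gives 2⁴ = 16 times the radiation:

```python
# The numbers above come straight from the Stefan-Boltzmann law, j = sigma*T^4,
# which is why doubling the absolute temperature gives 2^4 = 16 times the flux.
sigma = 5.67e-8           # Stefan-Boltzmann constant, W/m^2.K^4

for T in (170, 340):      # the two surface temperatures in the example
    print(f"T = {T} K : emitted flux = {sigma * T**4:.0f} W/m^2")
# T = 170 K : emitted flux = 47 W/m^2
# T = 340 K : emitted flux = 758 W/m^2
```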

It gets more confusing when we consider the interaction of other variables.

Let’s take riding a bike [updated thanks to Pekka]. Once you get above a certain speed most of the resistance comes from the wind so we will focus on that. Typically the wind resistance increases as the square of the speed. So if you double your speed you get four times the wind resistance. Work done = force x distance moved, so with no head wind power input has to go up as the cube of speed (note 4). This means you have to put in 8x the effort to get 2x the speed.

On Sunday you go for a ride and the wind speed is zero. You get to 25 km/hr (16 miles/hr) by putting a bit of effort in – let’s say you are producing 150W of power (I have no idea what the right amount is). You want your new speedo to register 50 km/hr – so you have to produce 1,200W.

On Monday you go for a ride and the wind speed is 20 km/hr into your face. Probably should have taken the day off.. Now with 150W you get to only 14 km/hr, it takes almost 500W to get to your basic 25 km/hr, and to get to 50 km/hr it takes almost 2,400W. No chance of getting to that speed!

On Tuesday you go for a ride and the wind speed is the same so you go in the opposite direction and take the train home. Now with only 6W you get to go 25 km/hr, to get to 50km/hr you only need to pump out 430W.

In mathematical terms it’s quite simple: F = k(v-w)², Force = (a constant, k) x (road speed – wind speed) squared, where the wind speed w is measured in your direction of travel (so a headwind counts as negative). Power, P = Fv = kv(v-w)². But notice that the effect of the “other variable”, the wind speed, has really complicated things.

To double your speed on the first day you had to produce eight times the power. To double your speed the second day you had to produce almost five times the power. To double your speed the third day you had to produce just over 70 times the power. All with the same physics.
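All of these numbers drop out of that one relationship. Here is a quick Python check, taking the (invented) 150W at 25 km/hr in still air to fix the constant k, and treating w as the wind speed in the direction of travel, negative for a headwind:

```python
# A quick check of the cycling numbers above, using P = k*v*(v-w)^2 with w
# taken as the wind speed in the direction of travel (negative for a headwind).
# k is chosen so that 25 km/hr in still air costs the assumed 150 W.
k = 150 / 25**3                      # from Sunday: P = k*v^3 when w = 0

def power(v, w):
    """Power (W) to ride at road speed v (km/hr) with wind w (km/hr)."""
    return k * v * (v - w) ** 2

for day, w in (("Sunday", 0), ("Monday", -20), ("Tuesday", 20)):
    print(day, {v: round(power(v, w)) for v in (25, 50)})
# Sunday  {25: 150, 50: 1200}
# Monday  {25: 486, 50: 2352}   -> "almost 500 W" and "almost 2,400 W"
# Tuesday {25: 6, 50: 432}      -> "only 6 W" and "about 430 W"
print("Monday's 150 W gets you to about 14 km/hr:", round(power(14, -20)), "W")
```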

The real problem with nonlinearity isn’t the problem of keeping track of these kind of numbers. You get used to the fact that real science – real world relationships – has these kind of factors and you come to expect them. And you have an equation that makes calculating them easy. And you have computers to do the work.

No, the real problem with non-linearity (the real world) is that many of these equations link together and solving them is very difficult and often only possible using “numerical methods”.

It is also the reason why something like climate feedback is very difficult to measure. Imagine measuring the change in power required to double speed on the Monday. It’s almost 5x, so you might think the relationship is something like the square of speed. On Tuesday it’s about 70 times, so you would come up with a completely different relationship. In this simple case we know that wind speed is a factor, we can measure it, and so we can “factor it out” when we do the calculation. But in a more complicated system, if you don’t know the “confounding variables”, or the relationships, what are you measuring? We will return to this question later.

When you start out doing maths, physics, engineering.. you do “linear equations”. These teach you how to use the tools of the trade. You solve equations. You rearrange relationships using equations and mathematical tricks, and these rearranged equations give you insight into how things work. It’s amazing. But then you move to “nonlinear” equations, aka the real world, which turns out to be mostly insoluble. So nonlinear isn’t something special, it’s normal. Linear is special. You don’t usually get it.

..End of digression

Back to Pendulums

Let’s take a closer look at a forced damped pendulum. Damped, in physics terms, just means there is something opposing the movement. We have friction from the air and so over time the pendulum slows down and stops. That’s pretty simple. And not chaotic. And not interesting.

So we need something to keep it moving. We drive the pivot point at the top up and down and now we have a forced damped pendulum. The equation that results (note 1) has the massive number of three variables – position, speed and now time to keep track of the driving up and down of the pivot point. Three variables seems to be the minimum to create a chaotic system (note 2).

As we increase the ratio of the forcing amplitude to the length of the pendulum (β in note 1) we can move through three distinct types of response:

  • simple response
  • a “chaotic start” followed by a deterministic oscillation
  • a chaotic system

This is typical of chaotic systems – certain parameter values or combinations of parameters can move the system between quite different states.

Here is a plot (note 3) of position vs time for the chaotic system, β=0.7, with two initial conditions, only different from each other by 0.1%:

Forced damped harmonic pendulum, β=0.7: start angular speed 0.1; 0.1001

Figure 1

It’s a little misleading to view the angle like this because it is in radians and so needs to be mapped between 0-2π (but then we get a discontinuity on a graph that doesn’t match the real world). We can map the graph onto a cylinder plot but it’s a mess of reds and blues.

Another way of looking at the data is via the statistics – so here is a histogram of the position (θ), mapped to 0-2π, and angular speed (dθ/dt) for the two starting conditions over the first 10,000 seconds:

Histograms for 10,000 seconds

Figure 2

We can see they are similar but not identical (note the different scales on the y-axis).

That might be due to the shortness of the run, so here are the results over 100,000 seconds:

Histogram for 100,000 seconds

Figure 3

As we increase the timespan of the simulation the statistics of two slightly different initial conditions become more alike.

So if we want to know the state of a chaotic system at some point in the future, very small changes in the initial conditions will amplify over time, making the result unknowable – or no different from picking the state from a random time in the future. But if we look at the statistics of the results we might find that they are very predictable. This is typical of many (but not all) chaotic systems.

Orbits of the Planets

The orbits of the planets in the solar system are chaotic. In fact, even 3-body systems moving under gravitational attraction have chaotic behavior. So how did we land a man on the moon? This raises the interesting questions of timescales and amount of variation. Planetary movement – for our purposes – is extremely predictable over a few million years. But over 10s of millions of years we might have trouble predicting exactly the shape of the earth’s orbit – eccentricity, time of closest approach to the sun, obliquity.

However, it seems that even over a much longer time period the planets will still continue in their orbits – they won’t crash into the sun or escape the solar system. So here we see another important aspect of some chaotic systems – the “chaotic region” can be quite restricted. So chaos doesn’t mean unbounded.

According to Cencini, Cecconi & Vulpiani (2010):

Therefore, in principle, the Solar system can be chaotic, but not necessarily this implies events such as collisions or escaping planets..

However, there is evidence that the Solar system is “astronomically” stable, in the sense that the 8 largest planets seem to remain bound to the Sun in low eccentricity and low inclination orbits for time of the order of a billion years. In this respect, chaos mostly manifest in the irregular behavior of the eccentricity and inclination of the less massive planets, Mercury and Mars. Such variations are not large enough to provoke catastrophic events before extremely large time. For instance, recent numerical investigations show that for catastrophic events, such as “collisions” between Mercury and Venus or Mercury failure into the Sun, we should wait at least a billion years.

And bad luck, Pluto.

Deterministic, non-Chaotic, Systems with Uncertainty

Just to round out the picture a little, even if a system is not chaotic and is deterministic we might lack sufficient knowledge to be able to make useful predictions. If you take a look at figure 3 in Ensemble Forecasting you can see that with some uncertainty of the initial velocity and a key parameter the resulting velocity of an extremely simple system has quite a large uncertainty associated with it.

This case is quantitatively different of course. By obtaining more accurate values of the starting conditions and the key parameters we can reduce our uncertainty. Small disturbances don’t grow over time to the point where our calculation of a future condition might as well just be selected from a random time in the future.

Transitive, Intransitive and “Almost Intransitive” Systems

Many chaotic systems have deterministic statistics. So we don’t know the future state beyond a certain time. But we do know that a particular position, or other “state” of the system, will be within a given range for x% of the time, taken over a “long enough” timescale. These are transitive systems.

Other chaotic systems can be intransitive. That is, for a very slight change in initial conditions we can have a different set of long term statistics. So the system has no “statistical” predictability. Lorenz 1968 gives a good example.

Lorenz introduces the concept of almost intransitive systems. This is where, strictly speaking, the statistics over infinite time are independent of the initial conditions, but the statistics over “long time periods” are dependent on the initial conditions. And so he also looks at the interesting case (Lorenz 1990) of moving between states of the system (seasons), where the precise conditions at the start of each new season can move us into a different set of long term statistics. I find it hard to explain this clearly in one paragraph, but Lorenz’s papers are very readable.

Conclusion?

This is just a brief look at some of the basic ideas.

Articles in the Series

Natural Variability and Chaos – One – Introduction

Natural Variability and Chaos – Two – Lorenz 1963

Natural Variability and Chaos – Three – Attribution & Fingerprints

Natural Variability and Chaos – Four – The Thirty Year Myth

Natural Variability and Chaos – Five – Why Should Observations match Models?

Natural Variability and Chaos – Six – El Nino

Natural Variability and Chaos – Seven – Attribution & Fingerprints Or Shadows?

Natural Variability and Chaos – Eight – Abrupt Change

References

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Climatic Determinism, Edward Lorenz (1968) – free paper

Can chaos and intransitivity lead to interannual variability?, Edward Lorenz, Tellus (1990) – free paper

Notes

Note 1 – The equation is easiest to “manage” after the original parameters are transformed so that tω->t. That is, the period of external driving, T0=2π under the transformed time base.

Then:

d²θ/dt² + γ'·dθ/dt + (α + β·cos t)·sin θ = 0

where θ = angle, γ’ = γ/ω, α = g/Lω², β =h0/L;

these parameters based on γ = viscous drag coefficient, ω = angular speed of driving, g = acceleration due to gravity = 9.8m/s², L = length of pendulum, h0=amplitude of driving of pivot point

Note 2 – This is true for continuous systems. Discrete systems can be chaotic with fewer variables.

Note 3 – The results were calculated numerically using Matlab’s ODE (ordinary differential equation) solver, ode45.
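For readers who want to experiment without Matlab, here is a minimal Python sketch of the same kind of calculation using scipy. It integrates the transformed equation from Note 1 with β = 0.7 as in the article, for two starting angular speeds differing by 0.1%; the values of γ' and α (and the zero starting angle) are my own illustrative guesses, since they aren’t stated above:

```python
# A minimal sketch (not the article's Matlab/ode45 code) of integrating the
# forced damped pendulum from Note 1 for two nearly identical starting
# conditions. beta = 0.7 as in the article; gamma' and alpha are illustrative
# guesses, since the article doesn't state the values it used.
import numpy as np
from scipy.integrate import solve_ivp

gamma_p, alpha, beta = 0.1, 0.1, 0.7   # damping, gravity and driving terms (assumed)

def pendulum(t, y):
    theta, omega = y
    # d2theta/dt2 + gamma'*dtheta/dt + (alpha + beta*cos(t))*sin(theta) = 0
    return [omega, -gamma_p * omega - (alpha + beta * np.cos(t)) * np.sin(theta)]

t_end = 10_000
t_eval = np.arange(0, t_end, 0.5)
runs = [solve_ivp(pendulum, (0, t_end), [0.0, w0], t_eval=t_eval, rtol=1e-6)
        for w0 in (0.1, 0.1001)]        # initial angular speeds differing by 0.1%

# Compare the statistics of the two runs (angle wrapped to 0..2*pi)
for r, w0 in zip(runs, (0.1, 0.1001)):
    theta = np.mod(r.y[0], 2 * np.pi)
    print(f"w0={w0}: mean angle={theta.mean():.2f}, sd={theta.std():.2f}")
```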

Note 4 – Force = k(v-w)² where k is a constant, v = velocity, w = wind speed. Work done = Force x distance moved so Power, P = Force x velocity.

Therefore:

P = kv(v-w)²

If we know k, v & w we can find P. If we have P, k & w and want to find v it is a cubic equation that needs solving.

Read Full Post »

I don’t think this is a simple topic.

The essence of the problem is this:

Can we measure the top of atmosphere (TOA) radiative changes and the surface temperature changes and derive the “climate sensitivity” from the relationship between the two parameters?

First, what do we mean by “climate sensitivity”?

In simple terms this parameter should tell us how much more radiation (“flux”) escapes to space for each 1°C increase in surface temperature.

Climate Sensitivity Is All About Feedback

Climate sensitivity is all about trying to discover whether the climate system has positive or negative feedback.

If the average surface temperature of the earth increased by 1°C and the radiation to space consequently increased by 3.3 W/m², this would be approximately “zero feedback”.

Why is this zero feedback?

If somehow the average temperature of the surface of the planet increased by 1°C – say due to increased solar radiation – then as a result we would expect a higher flux into space. A hotter planet should radiate more. If the increase in flux = 3.3 W/m² it would indicate that there was no negative or positive feedback from this solar forcing (note 1).

Suppose the flux increased by 0. That is, the planet heated up but there was no increase in energy radiated to space. That would be positive feedback within the climate system – because there would be nothing to “rein in” the increase in temperature.

Suppose the flux increased by 5 W/m². In this case it would indicate negative feedback within the climate system.

The key value is the “benchmark” no feedback value of 3.3 W/m². If the value is above this, it’s negative feedback. If the value is below this, it’s positive feedback.

Essentially, the higher the radiation to space as a result of a temperature increase the more the planet is able to “damp out” temperature changes that are forced via solar radiation, or due to increases in inappropriately-named “greenhouse” gases.

Consider the extreme case where as the planet warms up it actually radiates less energy to space – clearly this will lead to runaway temperature increases (less energy radiated means more energy absorbed, which increases temperatures, which leads to even less energy radiated..).

As a result we measure sensitivity as W/m².K, which we read as “watts per square meter per kelvin” – and a 1K change is the same as a 1°C change.

Theory and Measurement

In many subjects researchers converge on a conventional notation, but in the realm of climate sensitivity everyone has apparently adopted their own. As a note for non-mathematicians, there is nothing inherently wrong with this, but it just makes each paper confusing, especially for newcomers and probably for everyone.

I mostly adopt the Spencer & Braswell 2008 terminology in this article (see reference and free link below). I do change their α (climate sensitivity) into λ (which everyone else uses for this value) mainly because I had already produced a number of graphs with λ before starting to write the article..

The model is a very simple 1-dimensional model of temperature deviation into the ocean mixed layer, from the first law of thermodynamics:

C.∂T/∂t = F + S ….[1]

where C = heat capacity of the ocean, T = temperature anomaly, t = time, F = total top of atmosphere (TOA) radiative flux anomaly, S = heat flux anomaly into the deeper ocean

What does this equation say?

Heat capacity times change in temperature equals the net change in energy

– this is a simple statement of energy conservation, the first law of thermodynamics.

The TOA radiative flux anomaly, F, is a value we can measure using satellites. T is average surface temperature, which is measured around the planet on a frequent basis. But S is something we can’t measure.

What is F made up of?

Let’s define:

F = N + f – λT ….[1a]

where N = random fluctuations in radiative flux, f = “forcings”, and λT is the all important climate response or feedback.

The forcing f is, for the purposes of this exercise, defined as something added into the system which we believe we can understand and estimate or measure. This could be solar increases/decreases, it could be the long term increase in the “greenhouse” effect due to CO2, methane and other gases. For the purposes of this exercise it is not feedback. Feedback includes clouds and water vapor and other climate responses like changing lapse rates (atmospheric temperature profiles), all of which combine to produce a change in radiative output at TOA.

And an important point is that for the purposes of this theoretical exercise, we can remove f from the measurements because we believe we know what it is at any given time.

N is an important element. Effectively it describes the variations in TOA radiative flux due to the random climatic variations over many different timescales.

The feedback term is λT, where λ is the climate sensitivity – the value we want to find.

Noting the earlier comment about our assumed knowledge of ‘f’ (note 2), we can rewrite eqn 1:

C.∂T/∂t = – λT + N + S ….[2]

remembering that – λT + N = F is the radiative value we measure at TOA

Regression

If we plot F (measured TOA flux) vs T we can estimate λ from the slope of the least squares regression.

However, there is a problem with the estimate:

slope (est) = Cov[F,T] / Var[T] ….[3]

            = Cov[- λT + N, T] / Var[T]

where Cov[a,b] = covariance of a with b, and Var[a] = variance of a. Since F = -λT + N, the estimate of λ is minus this slope.

Forster & Gregory 2006

This oft-cited paper (reference and free link below) calculates the climate sensitivity from 1985-1996 using measured ERBE data at 2.3 ± 1.3 W/m².K.

Their result indicates positive feedback, or at least, a range of values which sit mainly in the positive feedback space.

On the method of calculation they say:

This equation includes a term that allows F to vary independently of surface temperature.. If we regress (- λT+ N) against T, we should be able to obtain a value for λ. The N terms are likely to contaminate the result for short datasets, but provided the N terms are uncorrelated to T, the regression should give the correct value for λ, if the dataset is long enough..

[Terms changed to SB2008 for easier comparison, and emphasis added].

Simulations

Like Spencer & Braswell, I created a simple model to demonstrate why measured results might deviate from the actual climate sensitivity.

The model is extremely simple:

  • a “slab” model of the ocean of a certain depth
  • daily radiative noise (normally distributed with mean=0, and standard deviation σN)
  • daily ocean flux noise (normally distributed with mean=0, and standard deviation σS)
  • radiative feedback calculated from the temperature and the actual climate sensitivity
  • daily temperature change calculated from the daily energy imbalance
  • regression of the whole time series to calculate the “apparent” climate sensitivity

In this model, the climate sensitivity, λ = 3.0 W/m².K.

In some cases the regression is done with the daily values, and in other cases the regression is done with averaged values of temperature and TOA radiation across time periods of 7, 30 & 90 days. I also put a 30-day low pass filter on the daily radiative noise in one case (before “injecting” into the model).

Some results are based on 10,000 days (about 30 years), with 100,000 days (300 years) as a separate comparison.

In each case the estimated value of λ is calculated from the mean of 100 simulation results. The 2nd graph shows the standard deviation σλ, of these simulation results which is a useful guide to the likely spread of measured results of λ (if the massive oversimplifications within the model were true). The vertical axis (for the estimate of λ) is the same in each graph for easier comparison, while the vertical axis for the standard deviation changes according to the results due to the large changes in this value.
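For readers who want to experiment, here is a minimal Python sketch of this kind of model – my own simplification, not Spencer & Braswell’s code or the exact set-up behind the figures below. The true λ is set to 3.0 W/m².K; the 50 m mixed layer depth, the noise standard deviations and the 40-run average are illustrative choices:

```python
# A minimal sketch of the slab-ocean experiment described above (my own
# simplification, not Spencer & Braswell's code). True lambda = 3.0 W/m^2.K;
# the noise levels and ocean depth are illustrative guesses.
import numpy as np

rng = np.random.default_rng(2)

lam = 3.0                        # true climate sensitivity, W/m^2.K
C = 4.2e6 * 50.0                 # heat capacity of a 50 m mixed layer, J/m^2.K
dt = 86400.0                     # one day, s
n_days = 10_000                  # roughly 30 years
sigma_N = sigma_S = 1.0          # daily radiative / deep-ocean noise, W/m^2

def one_run():
    N = rng.normal(0, sigma_N, n_days)      # daily radiative noise
    S = rng.normal(0, sigma_S, n_days)      # daily flux into the deeper ocean
    T = np.zeros(n_days)                    # temperature anomaly, K
    F = np.zeros(n_days)                    # "measured" TOA flux anomaly, W/m^2
    F[0] = N[0]
    for i in range(1, n_days):
        # today's temperature from yesterday's energy imbalance (eqn [1])
        T[i] = T[i - 1] + (F[i - 1] + S[i - 1]) * dt / C
        # today's measured flux = feedback + today's radiative noise (eqn [1a], f = 0)
        F[i] = -lam * T[i] + N[i]
    return T, F

def estimate(T, F, avg_days):
    """Regress averaged F against averaged T; lambda estimate = minus the slope."""
    n = (n_days // avg_days) * avg_days
    Tb = T[:n].reshape(-1, avg_days).mean(axis=1)
    Fb = F[:n].reshape(-1, avg_days).mean(axis=1)
    return -np.cov(Fb, Tb)[0, 1] / np.var(Tb, ddof=1)

runs = [one_run() for _ in range(40)]
for avg in (1, 7, 30, 90):
    est = np.mean([estimate(T, F, avg) for T, F in runs])
    print(f"averaging {avg:>2} days: mean estimated lambda = {est:.2f}")
# Typically the daily regression comes out close to 3.0 on average, while the
# weekly, monthly and 90-day averages come out low - the bias discussed below.
```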

First, the variation as the number of time steps changes and as the averaging period changes from 1 (no averaging) through to 90 days. Remember that the “real” value of λ = 3.0:

Figure 1

Second, the estimate as the standard deviation of the radiative flux is increased, and the ocean depth ranges from 20-200m. The daily temperature and radiative flux is calculated as a monthly average before the regression calculation is carried out:

Figure 2

As figure 2, but for 100,000 time steps (instead of 10,000):

Figure 3

Third, the estimate as the standard deviation of the radiative flux is increased, and the ocean depth ranges from 20-200m. The regression calculation is carried out on the daily values:

Figure 4

As figure 4, but with 100,000 time steps:

Figure 5

Now against averaging period and also against low pass filtering of the “radiative flux noise”:

Figure 6

As figure 6 but with 100,000 time steps:

Figure 7

Now with the radiative “noise” as an AR(1) process (see Statistics and Climate – Part Three – Autocorrelation), vs the autoregressive parameter φ and vs the number of averaging periods: 1 (no averaging), 7, 30, 90 with 10,000 time steps (30 years):

Figure 8

And the same comparison but with 100,000 timesteps:

Figure 9

Discussion of Results

If we consider first the changes in the standard deviation of the estimated value of climate sensitivity we can see that the spread in the results is much higher in each case when we consider 30 years of data vs 300 years of data. This is to be expected. However, given that in the 30-year cases σλ is similar in magnitude to λ we can see that doing one estimate and relying on the result is problematic. This of course is what is actually done with measurements from satellites where we have 30 years of history.

Second, we can see that mostly the estimates of λ tend to be lower than the actual value of 3.0 W/m².K. The reason is quite simple and is explained mathematically in the next section which non-mathematically inclined readers can skip.

In essence, it is related to the idea in the quote from Forster & Gregory. If the radiative flux noise is uncorrelated to temperature then the estimates of λ will be unbiased. By the way, remember that by “noise” we don’t mean instrument noise, although that will certainly be present. We mean the random fluctuations due to the chaotic nature of weather and climate.

If we refer back to Figure 1 we can see that when the averaging period = 1, the estimates of climate sensitivity are equal to 3.0. In this case, the noise is uncorrelated to the temperature because of the model construction. Slightly oversimplifying, today’s temperature is calculated from yesterday’s noise. Today’s noise is a random number unrelated to yesterday’s noise. Therefore, no correlation between today’s temperature and today’s noise.

As soon as we average the daily data into monthly results which we use to calculate the regression then we have introduced the fact that monthly temperature is correlated to monthly radiative flux noise (note 3).

This is also why Figures 8 & 9 show a low bias for λ even with no averaging of daily results. These figures are calculated with autocorrelation for the radiative flux noise. This means that past values of flux are correlated with current values – and so once again, daily temperature will be correlated with daily flux noise. This is also the case where low pass filtering is used to create the radiative noise data (as in Figures 6 & 7).
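The original simulations were run in Matlab and the figures below aren’t reproduced here, but a minimal Python/NumPy sketch of the kind of model just described shows the same effect. The mixed-layer depth, noise amplitude and run length are illustrative assumptions, not the values behind the figures; with no averaging the regression recovers λ ≈ 3.0, while averaging to monthly (or longer) values before the regression pulls the estimate low:

```python
import numpy as np

rng = np.random.default_rng(0)

lam = 3.0                      # "true" feedback parameter, W/m^2 per K
depth = 50.0                   # mixed layer depth, m (assumed value)
C = 1000.0 * 4200.0 * depth    # heat capacity per m^2 (density * specific heat * depth)
dt = 86400.0                   # one day, in seconds
n_days = 10000                 # ~30 years of daily steps
sigma_N = 1.0                  # std dev of daily radiative "noise", W/m^2 (assumed value)

def run_once(averaging_days):
    N = rng.normal(0.0, sigma_N, n_days)
    T = np.zeros(n_days)
    # today's temperature depends only on yesterday's state and yesterday's noise
    for i in range(1, n_days):
        T[i] = T[i-1] + dt / C * (N[i-1] - lam * T[i-1])
    R = -lam * T + N                       # the "measured" TOA flux anomaly
    if averaging_days > 1:
        m = n_days // averaging_days
        T = T[:m*averaging_days].reshape(m, averaging_days).mean(axis=1)
        R = R[:m*averaging_days].reshape(m, averaging_days).mean(axis=1)
    slope = np.polyfit(T, R, 1)[0]         # regress flux against temperature
    return -slope                          # estimate of lambda

for avg in (1, 7, 30, 90):
    est = np.mean([run_once(avg) for _ in range(100)])
    print(f"averaging period {avg:3d} days: mean estimated lambda = {est:.2f}")
```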

Maths

x = slope of the line from the linear regression

x = Cov[- λT + N, T] / Var[T] ….[3]

It’s not easy to read equations with complex terms numerator and denominator on the same line, so breaking it up:

Cov[-λT + N, T] = E[ (-λT + N)T ] – E[-λT + N].E[T], where E[a] = expected value of a

= -λ.E[T²] + E[NT] + λ.E[T].E[T] – E[N].E[T]

= -λ { E[T²] – (E[T])² } + E[NT] – E[N].E[T] …. [4]

And

Var[T] = E[T²] – (E[T])² …. [5]

So

x = -λ + { E[NT] – E[N].E[T] } / { E[T²] – (E[T])² } …. [6]

And we see that the slope of the regression line is biased whenever N is correlated with T. If the expected value of N = 0, the E[N].E[T] term drops out, but E[NT] ≠ 0 unless N is uncorrelated with T.

Note of course that we will use the negative of the slope of the line to estimate λ. Since the radiative noise tends to be positively correlated with the temperature response it creates, the covariance term in [6] is positive, and so estimates of λ will be biased low.

As a note for the interested student, why is it that some of the results show λ > 3.0?
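For anyone who wants to check equation [6] numerically, here is a small Python sketch (the 0.3 coupling between N and T is an arbitrary illustrative choice): T is constructed with a deliberate component of N, so E[NT] ≠ 0, and the regression slope lands on -λ + Cov[N,T]/Var[T] rather than -λ. The way the sample covariance term scatters around its expected value in short samples is, I suspect, also part of the answer to the question just asked:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 3.0
n = 100000

N = rng.normal(0.0, 1.0, n)               # radiative flux "noise"
T = 0.3 * N + rng.normal(0.0, 1.0, n)     # temperature partly driven by N, so Cov[N,T] != 0
R = -lam * T + N                          # the "measured" flux

slope = np.polyfit(T, R, 1)[0]
predicted = -lam + np.cov(N, T)[0, 1] / np.var(T, ddof=1)

print(f"regression slope    = {slope:.4f}")
print(f"-lam + Cov/Var      = {predicted:.4f}")
print(f"estimate of lambda  = {-slope:.4f}  (biased low vs 3.0)")
```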

Murphy & Forster 2010

Murphy & Forster picked up the challenge from Spencer & Braswell 2008 (reference below, but no free link unfortunately). The essence of their paper is that, using more realistic values for radiative noise and mixed ocean depth, the error in the calculation of λ is very small:

From Murphy & Forster (2010)

Figure 10

The value ba on the vertical axis is a normalized error term (rather than the estimate of λ).

Evaluating their arguments requires more work on my part, especially analyzing some CERES data, so I hope to pick that up in a later article. [Update, Spencer has a response to this paper on his blog, thanks to Ken Gregory for highlighting it]

Linear Feedback Relationship?

One of the biggest problems with the idea of climate sensitivity, λ, is the assumption that it exists as a single constant value.

From Stephens (2005), reference and free link below:

The relationship between global-mean radiative forcing and global-mean climate response (temperature) is of intrinsic interest in its own right. A number of recent studies, for example, discuss some of the broad limitations of (1) and describe procedures for using it to estimate Q from GCM experiments (Hansen et al. 1997; Joshi et al. 2003; Gregory et al. 2004) and even procedures for estimating [λ] from observations (Gregory et al. 2002).

While we cannot necessarily dismiss the value of (1) and related interpretation out of hand, the global response, as will become apparent in section 9, is the accumulated result of complex regional responses that appear to be controlled by more local-scale processes that vary in space and time.

If we are to assume gross time–space averages to represent the effects of these processes, then the assumptions inherent to (1) certainly require a much more careful level of justification than has been given. At this time it is unclear as to the specific value of a global-mean sensitivity as a measure of feedback other than providing a compact and convenient measure of model-to-model differences to a fixed climate forcing (e.g., Fig. 1).

[Emphasis added; the reference to “(1)” is to the linear relationship between global temperature and global radiation].

If, for example, λ is actually a function of location, season & phase of ENSO, then clearly measuring the overall climate response is a more difficult challenge.

Conclusion

Measuring the relationship between top of atmosphere radiation and temperature is clearly very important if we want to assess the all-important climate sensitivity.

Spencer & Braswell have produced a very useful paper which demonstrates some obvious problems with deriving the value of climate sensitivity from measurements. Although I haven’t attempted to reproduce their actual results, I have done many other model simulations to demonstrate the same problem.

Murphy & Forster have produced a paper which claims that the actual magnitude of the problem demonstrated by Spencer & Braswell is quite small in comparison to the real value being measured (as yet I can’t tell whether their claim is correct).

The value called climate sensitivity might be a variable (i.e., not a constant value) and it might turn out to be even harder to measure than it already seems (and it doesn’t seem easy).

Articles in this Series

Measuring Climate Sensitivity – Part Two – Mixed Layer Depths

Measuring Climate Sensitivity – Part Three – Eddy Diffusivity

References

The Climate Sensitivity and Its Components Diagnosed from Earth Radiation Budget Data, Forster & Gregory, Journal of Climate (2006)

Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration, Spencer & Braswell, Journal of Climate (2008)

On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Murphy & Forster, Journal of Climate (2010)

Cloud Feedbacks in the Climate System: A Critical Review, Stephens, Journal of Climate (2005)

Notes

Note 1 – The reason why the “no feedback climate response” = 3.3 W/m².K is a little involved but is mostly due to the fact that the overall climate is radiating around 240 W/m² at TOA.
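One common back-of-the-envelope route to a number of roughly that size – not necessarily the exact reasoning behind the 3.3 figure, so treat the numbers as an illustrative assumption – is to let the ~240 W/m² of outgoing longwave radiation scale as the fourth power of temperature, referenced to a surface temperature of about 288 K:

```latex
\frac{d(\mathrm{OLR})}{dT_s} \approx \frac{4\,\mathrm{OLR}}{T_s}
\approx \frac{4 \times 240}{288} \approx 3.3\ \mathrm{W/m^2\,K}
```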

Note 2 – This is effectively the same as saying f=0. If that seems alarming I note in advance that the exercise we are going through is a theoretical exercise to demonstrate that even if f=0, the regression calculation of climate sensitivity includes some error due to random fluctuations.

Note 3 – If the model had one random number for last month’s noise which was used to calculate this month’s temperature then the monthly results would also be free of correlation between the temperature and radiative noise.

Read Full Post »

In the last article we saw some testing of the simplest autoregressive model AR(1). I still have an outstanding issue raised by one commenter relating to the hypothesis testing that was introduced, and I hope to come back to it at a later stage.

Different Noise Types

Before we move onto more general AR models, I did do some testing of the effectiveness of the hypothesis test for AR(1) models with different noise types.

The testing shown in Part Four used Gaussian noise (a “normal distribution”), and the theory applied is apparently only valid for Gaussian noise, so I tried a uniform noise distribution and also a Gamma noise distribution:

Figure 1

The Gaussian and uniform distributions produce the same results. The Gamma noise result isn’t shown because it was also the same.

A Gamma distribution can be quite skewed, which was why I tried it – here is the Gamma distribution that was used (with the same variance as the Gaussian, and shifted to produce the same mean = 0):

Figure 2

So in essence I have found that the tests work just as well when the noise component is uniformly distributed or Gamma distributed as when it has a Gaussian distribution (normal distribution).

Hypothesis Testing of AR(1) Model When the Model is Actually AR(2)

The next idea I was interested to try was to apply the hypothesis test from Part Three to an AR(2) model, while incorrectly assuming that it is an AR(1) model.

Remember that the hypothesis test is quite simple – we produce a series with a known mean, extract a sample, and then using the sample find out how many times the test (incorrectly) rejects the hypothesis that the mean equals its actual value:

Figure 3

As we can see, the test, which should reject only 5% of the time, rejects a much higher proportion as φ2 increases. This simple test is just by way of introduction.
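Here is a sketch of that kind of experiment. The original work was in Matlab; my reading is that the test being applied is the AR(1)-corrected t-test using the effective sample size, so treat that interpretation, and the parameter values, as assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def ar2_series(phi1, phi2, n, burn=1000):
    # generate an AR(2) series with true mean 0
    x = np.zeros(n + burn)
    eps = rng.normal(0.0, 1.0, n + burn)
    for i in range(2, n + burn):
        x[i] = phi1 * x[i-1] + phi2 * x[i-2] + eps[i]
    return x[burn:]

def rejects(sample, mu0=0.0, alpha=0.05):
    # t-test with the AR(1) effective-sample-size correction, using only lag-1 correlation
    n = len(sample)
    rho = np.corrcoef(sample[:-1], sample[1:])[0, 1]
    n_eff = n * (1 - rho) / (1 + rho)
    se = np.std(sample, ddof=1) / np.sqrt(n_eff)
    t = (np.mean(sample) - mu0) / se
    return abs(t) > stats.t.ppf(1 - alpha/2, df=n_eff - 1)

phi1 = 0.3
for phi2 in (0.0, 0.2, 0.4, 0.6):
    x = ar2_series(phi1, phi2, 100000)
    trials, count = 2000, 0
    for _ in range(trials):
        start = rng.integers(0, len(x) - 100)
        if rejects(x[start:start+100]):
            count += 1
    print(f"phi2 = {phi2:.1f}: false rejection rate = {count/trials:.1%}")
```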

Higher Order AR Series

The AR(1) model is very simple. As we saw in Part Three, it can be written as:

xt – μ = φ(xt-1 – μ) + εt

where xt = the next value in the sequence, xt-1 = the last value in the sequence, μ = the mean, εt = random quantity and φ = auto-regression parameter

[Minor note, the notation is changed slightly from the earlier article]

In non-technical terms, the next value in the series is made up of a random element plus a dependence on the last value – with the strength of this dependence being the parameter φ.

The more general autoregressive model of order p, AR(p), can be written as:

xt – μ = φ1(xt-1 – μ) + φ2(xt-2 – μ) + .. + φp(xt-p – μ) + εt

φ1..φp = the series of auto-regression parameters

In non-technical terms, the next value in the series is made up of a random element plus a dependence on the last few values. So of course, the challenge is to determine the order p, and then the parameters φ1..φp

There is a bewildering array of tests that can be applied, so I started simply. With some basic algebraic manipulation (not shown – but if anyone is interested I will provide more details in the comments), we can produce a series of linear equations known as the Yule-Walker equations, which allow us to calculate φ1..φp from estimates of the autocorrelations.

If you look back to Figure 2 in Part Three you see that by regressing the time series with itself moved by k time steps we can calculate the lag-k correlation, rk, for k=1, 2, 3, etc. So we estimate r1, r2, r3, etc., from the sample of data that we have, and then solve the Yule-Walker equations to get φ1..φp
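The original calculations were done in Matlab; the following Python/NumPy sketch illustrates the procedure – generate an AR(2) population with known parameters, draw samples, estimate the lag-k correlations and solve the Yule-Walker equations – though the parameter values and sample counts here are illustrative rather than those behind the figures below:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_autocorr(x, k):
    # lag-k correlation estimated from the sample
    return np.corrcoef(x[:-k], x[k:])[0, 1]

def yule_walker(x, p):
    # solve the Yule-Walker equations R.phi = r for an assumed AR(p) model
    r = np.array([sample_autocorr(x, k) for k in range(1, p + 1)])
    R = np.array([[sample_autocorr(x, abs(i - j)) if i != j else 1.0
                   for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r)

# create an AR(2) population with known parameters
phi1, phi2, n = 0.5, -0.3, 100000
x = np.zeros(n)
eps = rng.normal(0.0, 1.0, n)
for i in range(2, n):
    x[i] = phi1 * x[i-1] + phi2 * x[i-2] + eps[i]
x = x[n // 10:]                       # discard the first 10% as burn-in

# estimate from many random samples of 1000 points and average
estimates = []
for _ in range(1000):
    start = rng.integers(0, len(x) - 1000)
    estimates.append(yule_walker(x[start:start+1000], 2))
print("mean estimates of (phi1, phi2):", np.mean(estimates, axis=0))
```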

First of all I played around with simple AR(2) models. The results below are for two different sample sizes.

A population of 90,000 is created (actually 100,000, with the first 10% then deleted), and then a sample is randomly selected 10,000 times from this population. For each of the 10,000 samples the Yule-Walker equations are solved, and then the results are averaged.

In these results I normalized the mean and standard deviation of the parameters by the original values (later I decided that made it harder to see what was going on and reverted to just displaying the actual sample mean and sample standard deviation):

Figure 4

Notice that the sample size of 1,000 produces very accurate results in the estimation of φ1 & φ2, with a small spread. The sample size of 50 appears to produce a low bias in the calculated results, especially for φ2, which is no doubt due to not reading the small print somewhere..

Here is a histogram of the results, showing the spread across φ1 & φ2 – note the values on the axes: the sample size of 1,000 produces a much tighter set of results, while the sample size of 50 has a much wider spread:

Figure 5

Then I played around with a more general model. With this model I send in AR parameters to create the population, but can define a higher order of AR to test against, to see how well the algorithm estimates the AR parameters from the samples.

In the example below the population is created as AR(3), but tested as if it might be an AR(4) model. The AR(3) parameters (shown on the histogram in the figure below) are φ1= 0.4, φ2= 0.2, φ3= -0.3.

The estimation seems to cope quite well as φ4 is estimated at about zero:

Figure 6

The histogram of results for the first two parameters, note again the difference in values on the axes for the different sample sizes:

Figure 7

[The reason for the finer detail on this histogram compared with figure 5 is just discovery of the Matlab parameters for 3d histograms].

Rotating the histograms around in 3d appears to confirm a bell-curve. Something to test formally at a later stage.

Here’s an example of a process which is AR(5) with φ1= 0.3, φ2= 0, φ3= 0, φ4= 0, φ5= 0.4; tested against AR(6):

Figure 8

And the histogram of estimates of φ1 & φ2:

Figure 9

ARMA

We haven’t yet seen ARMA models – auto-regressive moving average models. And we haven’t seen MA models – moving average models with no auto-regressive behavior.

What is an MA or “moving average” model?

The term in the moving average is a “linear filter” on the random elements of the process. So instead of εt as the “uncorrelated noise” in the AR model we have εt plus a weighted sum of earlier random elements. The MA process, of order q, can be written as:

xt – μ = εt + θ1εt-1 + θ2εt-2 + .. + θqεt-q

θ1..θq = the series of moving average parameters

The term “moving average” is a little misleading, as Box and Jenkins also comment.

Why is it misleading?

Because for AR (auto-regressive) and MA (moving average) and ARMA (auto-regressive moving average = combination of AR & MA) models the process is stationary.

This means, in non-technical terms, that the mean of the process is constant through time. That doesn’t sound like “moving average”.

So think of “moving average” as a moving average (filter) of the random elements, or noise, in the process. By their nature these will average out over time (because if the average of the random elements = 0, the average of the moving average of the random elements = 0).

An example of this in the real world might be a chemical introduced randomly into a physical process – this is the εt term – but because the chemical gets caught up in pipework and valves, the actual amount of the chemical entering the process at time t is the sum of a proportion of the current amount released plus proportions of earlier amounts released. Examples of the terminology used for the various processes:

  • AR(3) is an autoregressive process of order 3
  • MA(2) is a moving average process of order 2
  • ARMA(1,1) is a combination of AR(1) and MA(1)
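Putting the two definitions together, here is a minimal Python sketch of an ARMA(1,1) process (the parameter values are arbitrary illustrative choices); despite the “moving average” term, the process is stationary and the sample mean stays near zero:

```python
import numpy as np

rng = np.random.default_rng(4)

def arma11(phi, theta, n, burn=500):
    # ARMA(1,1): x_t = phi * x_{t-1} + eps_t + theta * eps_{t-1}
    eps = rng.normal(0.0, 1.0, n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t-1] + eps[t] + theta * eps[t-1]
    return x[burn:]

x = arma11(phi=0.6, theta=0.4, n=50000)

# the mean stays put even though the noise is filtered by the MA term
print("sample mean:", x.mean().round(3))

# lag-1 and lag-2 correlations, for comparison with a pure AR(1)
for k in (1, 2):
    print(f"lag-{k} correlation:", np.corrcoef(x[:-k], x[k:])[0, 1].round(3))
```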

References

Time Series Analysis: Forecasting & Control, 3rd Edition, Box, Jenkins & Reinsel, Prentice Hall (1994)

Read Full Post »

In Part Three we started looking at time-series that are autocorrelated, which means each value has a relationship to one or more previous values in the time-series. This is unlike the simple statistical models of independent events.

And in Part Two we saw how to test whether a sample comes from a population with a stated mean value. The ability to run this test is important, and in Part Two the test took place for a population of independent events.

The theory that allows us to accept or reject hypotheses to a certain statistical significance does not work properly with serially correlated data (not without modification).

Here is a nice example from Wilks:

From Wilks (2011)

Figure 1

Remember that (usually) with a statistical test we don’t actually know the whole population – that’s what we want to find out about. Instead, we take a sample and attempt to find out information about the population.

Take a look at Figure 1 – the lighter short horizontal lines are the means (the “averages”) of a number of samples. If you compare the top and bottom graphs you see that the spread of the sample means is larger in the bottom graph. The bottom graph is the time series with autocorrelation.

What this means is that if we take a sample from a time series and apply the standard Student-t test to find out whether it came from a population with mean = μ, we will too often conclude that it did not come from a population with the mean it actually did come from. So a 95% test will incorrectly reject the hypothesis much more than 5% of the time.

To demonstrate this, here is the % of false rejections (“Type I errors”) as the autocorrelation parameter increases, when a standard Student-t test is applied:

Figure 2

The test was done with Matlab, with a time-series population of 100,000, Gaussian (“normal distribution”) errors, and samples of 100 points taken 10,000 times (in each case a random start point was chosen and the next 100 points were taken as the sample). When the time series is generated with no serial correlation, the hypothesis test works just fine. As the autocorrelation increases (as we move to the right of the graph), the hypothesis test creates more and more false rejections.

With AR(1) autocorrelation – the simplest model of autocorrelation – there is a simple correction that we can apply. This goes under different names like effective sample size and variance inflation factor.

For those who like details, instead of the standard deviation of the sample means:

s = σ/√n

we derive:

s = σ.√[(1+ρ)/(n.(1-ρ))], where ρ = autocorrelation parameter.
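Here is a hedged Python sketch of the kind of Monte Carlo test described above (the original was in Matlab; the autocorrelation values and trial counts are illustrative), comparing the uncorrected Student-t test with the effective-sample-size-corrected version using the known ρ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def ar1_series(rho, n, burn=1000):
    # AR(1) series with population mean 0
    x = np.zeros(n + burn)
    eps = rng.normal(0.0, 1.0, n + burn)
    for i in range(1, n + burn):
        x[i] = rho * x[i-1] + eps[i]
    return x[burn:]

def t_test_rejects(sample, rho=None, mu0=0.0, alpha=0.05):
    n = len(sample)
    # rho=None gives the standard test; otherwise apply the effective sample size
    n_eff = n if rho is None else n * (1 - rho) / (1 + rho)
    se = np.std(sample, ddof=1) / np.sqrt(n_eff)
    t = (np.mean(sample) - mu0) / se
    return abs(t) > stats.t.ppf(1 - alpha/2, df=n_eff - 1)

for rho in (0.0, 0.3, 0.6):
    x = ar1_series(rho, 100000)
    trials, naive, corrected = 2000, 0, 0
    for _ in range(trials):
        start = rng.integers(0, len(x) - 100)
        s = x[start:start+100]
        naive += t_test_rejects(s)               # ignores the autocorrelation
        corrected += t_test_rejects(s, rho=rho)  # uses the known rho
    print(f"rho = {rho:.1f}: naive {naive/trials:.1%}, corrected {corrected/trials:.1%}")
```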

Repeating the same test with the adjusted value:

Figure 3

We see that Type I errors start to get above our expected values at higher values of autocorrelation. (I’m not sure whether that actually happens with an infinite number of tests and true random samples).

Note as well that the tests above were done using the known value of the autocorrelation parameter (this is like having secret information which we don’t normally have).

So I re-ran the tests using the autocorrelation parameter derived from the sample data (regressing the time series against the same time series with a one-time-step lag) – and got similar, but not identical, results, with apparently more false rejections.

Curiosity made me continue (tempered by the knowledge of the large time-wasting exercise I had previously engaged in because of a misplaced bracket in one equation), so I rewrote the Matlab program to allow me to test some ideas a little further. It was good to rewrite because I was also wondering whether having one (long) time-series generated with lots of tests against it was as good as repeatedly generating a time-series and carrying out lots of tests each time.

So this following comparison had a time-series population of 100,000 events, samples of 100 items for each test, repeated for 100 tests, then the time-series regenerated – and this done 100 times. So 10,000 tests across 100 different populations – first with the known autoregression parameter, then with the estimated value of this parameter from the sample in question:

Figure 4 – Each sample size = 100

The proportion of rejected tests should be 5% no matter what the value of the autoregression parameter.

The rewritten program allows us to test for the effect of sample size. The following graph uses the known value of the autoregression parameter in the test, a time-series population of 100,000, drawing samples out 1,000 times from each population, and repeating through 10 populations in total:

Figure 5 – Using known value of autoregression parameter in Student T-test

Remembering that all of the lines should be horizontal at 5%, we can see that the largest sample size of 1,000 is the most resistant to higher autoregression parameters.

This reminded me that the equation for the variance inflation factor (shown earlier) is in fact an approximation. The correct formula (for those who like to see such things):

from Zwiers & von Storch (1995)

Figure 6
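For an AR(1) process I take the circled formula to reduce to the standard exact expression for the variance of a sample mean, Var(mean) = (σ²/n).[1 + 2.Σ(1 − k/n)ρᵏ], summed from k = 1 to n − 1 – treat that reading as an assumption on my part. A short Python sketch comparing it with the approximation used earlier:

```python
import numpy as np

def n_eff_approx(n, rho):
    # approximate effective sample size for AR(1)
    return n * (1 - rho) / (1 + rho)

def n_eff_exact(n, rho):
    # exact version: Var(mean) = (sigma^2/n) * [1 + 2*sum_{k=1}^{n-1} (1 - k/n) * rho^k]
    k = np.arange(1, n)
    inflation = 1 + 2 * np.sum((1 - k / n) * rho**k)
    return n / inflation

for n in (50, 100, 1000):
    for rho in (0.3, 0.6, 0.9):
        print(f"n={n:5d} rho={rho:.1f}: approx {n_eff_approx(n, rho):7.1f}, "
              f"exact {n_eff_exact(n, rho):7.1f}")
```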

So I adjusted the variance inflation factor in the program and reran.

I’m really starting to slow things down now – because in each single hypothesis test we are estimating the autoregression parameter, ρ, by a lag-1 correlation, then with this estimate we have to calculate the above circled formula, which requires summing the equation from 1 through to the number of samples. So in the case of n=1000 that’s 1000 calculations, all summed, then used in a Student-t test. And this is done in each case for 1000 tests per population x 10 populations.. thank goodness for Matlab which did it in 18 minutes. (And apologies to readers trying to follow the detail – in the graphics I show the autoregression parameter as φ, when I meant to use ρ, no idea why..)

Fortunately, the result turns out almost identical to using the approximation (the graph using the approximation is not shown):

Figure 7 – Using estimated autoregression parameter

So unless I have made some kind of mistake (quite possible), I take this to mean that the sampling uncertainty in the autoregression parameter adds uncertainty to the Student T-test, which can’t be corrected for easily.

With large samples, like 1000, it appears to work just fine. With time-series data from the climate system we have to take what we can get and mostly it’s not 1000 points.

We are still considering a very basic model – AR(1) with normally-distributed noise.

In the next article I hope to cover some more complex models, as well as the results from this kind of significance test if we assume AR(1) with normally-distributed noise yet actually have a different model in operation..

References

Statistical Methods in the Atmospheric Sciences, 3rd edition, Daniel Wilks, Academic Press (2011)

Taking Serial Correlation into Account in Tests of the Mean, Zwiers & von Storch, Journal of Climate (1995)

Read Full Post »
