While (still) writing what was to be Part Six (Attribution in AR5 from the IPCC), I was working through Knutson et al 2013, one of the papers referenced by AR5. That paper in turn referenced Are historical records sufficient to constrain ENSO simulations? by Andrew Wittenberg (2009). This is a very interesting paper and I was glad to find it, because it illustrates some of the points we have been looking at.
It’s an easy paper to read (and free) and so I recommend reading the whole paper.
The paper uses NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL) CM2.1 global coupled atmosphere/ocean/land/ice GCM (see note 1 for reference and description):
CM2.1 played a prominent role in the third Coupled Model Intercomparison Project (CMIP3) and the Fourth Assessment of the Intergovernmental Panel on Climate Change (IPCC), and its tropical and ENSO simulations have consistently ranked among the world’s top GCMs [van Oldenborgh et al., 2005; Wittenberg et al., 2006; Guilyardi, 2006; Reichler and Kim, 2008].
The coupled pre-industrial control run is initialized as by Delworth et al. [2006], and then integrated for 2220 yr with fixed 1860 estimates of solar irradiance, land cover, and atmospheric composition; we focus here on just the last 2000 yr. This simulation required one full year to run on 60 processors at GFDL.
First of all we see the challenge for climate models – a single 2000-year simulation with a reasonable-resolution coupled GCM consumed a full year of wall-clock time on 60 processors.
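To put that in concrete terms, here is the back-of-envelope arithmetic in a few lines of Python (my calculation, using only the figures quoted above; "core-hours" just means processors × wall-clock hours):

```python
# Back-of-envelope cost of the 2220-yr control run quoted above:
# 60 processors running for one full wall-clock year.
processors = 60
wall_clock_hours = 365 * 24                      # one year, in hours
simulated_years = 2220

core_hours = processors * wall_clock_hours       # 525,600 core-hours
per_sim_year = core_hours / simulated_years      # ~237 core-hours per simulated year
sim_years_per_day = simulated_years / 365        # ~6 simulated years per wall-clock day

print(f"{core_hours:,} core-hours total, "
      f"{per_sim_year:.0f} core-hours per simulated year, "
      f"{sim_years_per_day:.1f} simulated years per day")
```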
Wittenberg shows the results in the graph below. At the top is our observational record going back 140 years; below it are the simulated SST variations in the El Niño region, broken into 20 century-long segments.
Figure 1
What we see is that different centuries have very different results:
There are multidecadal epochs with hardly any variability (M5); epochs with intense, warm-skewed ENSO events spaced five or more years apart (M7); epochs with moderate, nearly sinusoidal ENSO events spaced three years apart (M2); and epochs that are highly irregular in amplitude and period (M6). Occasional epochs even mimic detailed temporal sequences of observed ENSO events; e.g., in both R2 and M6, there are decades of weak, biennial oscillations, followed by a large warm event, then several smaller events, another large warm event, and then a long quiet period. Although the model’s NINO3 SST variations are generally stronger than observed, there are long epochs (like M1) where the ENSO amplitude agrees well with observations (R1).
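To make "century-long segments" concrete, here is a minimal sketch (mine, not the paper's code) of how you might split a long monthly NINO3 SST anomaly series into 20 century epochs and compare their amplitudes. The white-noise series is just a placeholder for real model output – with white noise all the epochs look alike, and the whole point of Figure 1 is that the model's epochs do not:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 2000
sst = rng.standard_normal(n_years * 12)    # placeholder: monthly NINO3 anomalies (degC)

# Split into 20 century-long epochs, M1..M20, and use the standard
# deviation of each epoch as a simple measure of ENSO amplitude.
epochs = sst.reshape(20, 100 * 12)
amplitude = epochs.std(axis=1)

for i, a in enumerate(amplitude, start=1):
    print(f"M{i:<2} amplitude: {a:.2f} degC")
print(f"spread across epochs: {amplitude.min():.2f} to {amplitude.max():.2f} degC")
```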
Wittenberg comments on the problem for climate modelers:
An unlucky modeler – who by chance had witnessed only M1-like variability throughout the first century of simulation – might have erroneously inferred that the model’s ENSO amplitude matched observations, when a longer simulation would have revealed a much stronger ENSO.
If the real-world ENSO is similarly modulated, then there is a more disturbing possibility. Had the research community been unlucky enough to observe an unrepresentative ENSO over the past 150 yr of measurements, then it might collectively have misjudged ENSO’s longer-term natural behavior. In that case, historically-observed statistics could be a poor guide for modelers, and observed trends in ENSO statistics might simply reflect natural variations..
..A 200 yr epoch of consistently strong variability (M3) can be followed, just one century later, by a 200 yr epoch of weak variability (M4). Documenting such extremes might thus require a 500+ yr record. Yet few modeling centers currently attempt simulations of that length when evaluating CGCMs under development – due to competing demands for high resolution, process completeness, and quick turnaround to permit exploration of model sensitivities.
Model developers thus might not even realize that a simulation manifested long-term ENSO modulation, until long after freezing the model development. Clearly this could hinder progress. An unlucky modeler – unaware of centennial ENSO modulation and misled by comparisons between short, unrepresentative model runs – might erroneously accept a degraded model or reject an improved model.
Wittenberg shows the same data in the frequency domain, presented in a way that illustrates the different perspective you might have depending upon your period of observation or length of model run. It’s worth taking the time to understand what is in these graphs:
Figure 2
The first graph, 2a:
..time-mean spectra of the observations for epochs of length 20 yr – roughly the duration of observations from satellites and the Tropical Atmosphere Ocean (TAO) buoy array. The spectral power is fairly evenly divided between the seasonal cycle and the interannual ENSO band, the latter spanning a broad range of time scales between 1.3 to 8 yr.
So the different colored lines indicate the spectral power for each 20-year period. The black dashed line is the observed spectral power over the full 140-year observational period; this dashed line is repeated in Figure 2c.
The second graph, 2b, shows the modeled results if we break up the 2000 simulated years into 100 x 20-year periods.
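As a sketch of how such subspectra can be computed (my reconstruction, not necessarily the paper's estimator): take the power spectrum of each 20-year chunk separately, for example with scipy.signal.periodogram, and look at the spread across chunks.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(1)
fs = 12                                     # monthly data: 12 samples per year
n_years = 2000
sst = rng.standard_normal(n_years * fs)     # placeholder for model NINO3 anomalies

# One power spectrum per 20-year epoch (100 epochs in 2000 years);
# frequencies come out in cycles per year.
epoch_len = 20 * fs
spectra = np.array([
    periodogram(sst[k * epoch_len:(k + 1) * epoch_len], fs=fs)[1]
    for k in range(n_years // 20)
])

freq = periodogram(sst[:epoch_len], fs=fs)[0]
envelope_lo = spectra.min(axis=0)           # cf. the envelope of subspectra in 2b
envelope_hi = spectra.max(axis=0)

# The same recipe with epoch_len = 100 * fs gives the 100-yr subspectra of 2c.
```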
The third graph, 2c, shows the modeled results broken up into 20 x 100-year periods. The probability number at the bottom right, 90%, is the likelihood of observations falling outside the range of the model results – if “the simulated subspectra [were] independent and identically distributed.. [the percentage] at bottom right is the probability that an interval so constructed would bracket the next subspectrum to emerge from the model.”
So what this says, paraphrasing and over-simplifying: “we are 90% sure that the observations can’t be explained by the models”.
Of course, this independent and identically distributed assumption is not valid, but as we will hopefully see in later articles in this series, most of these statistical assumptions – stationarity, Gaussian distributions, AR(1) noise – are problematic for real-world non-linear systems.
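For intuition about where a number like 90% can come from under that assumption (my reading of the construction, not spelled out in the excerpt): if the interval is the min-to-max envelope of n iid subspectra, a new iid draw lands inside it with probability (n-1)/(n+1), which is exactly 90% for n = 19. A quick Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 19                          # reference subspectra defining the envelope
trials = 200_000

samples = rng.standard_normal((trials, n))
new_draw = rng.standard_normal(trials)
inside = (new_draw > samples.min(axis=1)) & (new_draw < samples.max(axis=1))

print(f"theory:    {(n - 1) / (n + 1):.3f}")   # 0.900
print(f"simulated: {inside.mean():.3f}")       # ~0.900
```

The nominal 90% holds only under the iid assumption just questioned; for correlated subspectra the actual bracketing probability can be quite different.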
To be clear, the paper’s author is demonstrating a problem in such a statistical approach.
Models are not reality. This is a simulation with the GFDL model. It doesn’t mean ENSO is like this. But it might be.
The paper illustrates a problem I highlighted in Part Five – observations are only one “realization” of possible outcomes. The last century or century and a half of surface observations could be an outlier. The last 30 years of satellite data could equally be an outlier. Even if our observational periods are not an outlier and are right there on the mean or median, matching climate models to observations may still greatly under-represent natural climate variability.
Non-linear systems can demonstrate variability over much longer time-scales than the typical period between characteristic events. We will return to this in future articles in more detail. Such systems do not have to be “chaotic” (where chaotic means that tiny changes in initial conditions cause rapidly diverging results).
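A toy illustration of that last point (my construction, nothing to do with the GFDL model): a perfectly regular ~4-year oscillation whose amplitude is modulated by an AR(1) process with a roughly 30-year memory produces centuries of strong and weak “ENSO” with no chaos anywhere in the system.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 1 / 12                                  # monthly time step, in years
t = np.arange(0, 2000, dt)

# Slow AR(1) amplitude with ~30-yr decorrelation time, mean 1, std 0.4.
tau, mu, sigma = 30.0, 1.0, 0.4
phi = np.exp(-dt / tau)
amp = np.empty(t.size)
amp[0] = mu
for i in range(1, t.size):
    amp[i] = (mu + phi * (amp[i - 1] - mu)
              + sigma * np.sqrt(1 - phi**2) * rng.standard_normal())

# A regular 4-yr cycle, slowly amplitude-modulated: not chaotic, yet its
# "ENSO amplitude" differs markedly from century to century.
sst = amp * np.sin(2 * np.pi * t / 4)
print(sst.reshape(20, -1).std(axis=1).round(2))   # one std per century
```

Nothing here is sensitive to initial conditions; the long-timescale variability comes purely from the slow modulation.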
What period of time is necessary to capture natural climate variability?
I will give the last word to the paper’s author:
More worryingly, if nature’s ENSO is similarly modulated, there is no guarantee that the 150 yr historical SST record is a fully representative target for model development..
..In any case, it is sobering to think that even absent any anthropogenic changes, the future of ENSO could look very different from what we have seen so far.
Articles in the Series

References
Are historical records sufficient to constrain ENSO simulations? Andrew T. Wittenberg, GRL (2009) – free paper
GFDL’s CM2 Global Coupled Climate Models. Part I: Formulation and Simulation Characteristics, Delworth et al, Journal of Climate, 2006 – free paper
Notes

Note 1: The paper referenced for the GFDL model is GFDL’s CM2 Global Coupled Climate Models. Part I: Formulation and Simulation Characteristics, Delworth et al, 2006:
The formulation and simulation characteristics of two new global coupled climate models developed at NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL) are described.
The models were designed to simulate atmospheric and oceanic climate and variability from the diurnal time scale through multicentury climate change, given our computational constraints. In particular, an important goal was to use the same model for both experimental seasonal to interannual forecasting and the study of multicentury global climate change, and this goal has been achieved.
Two versions of the coupled model are described, called CM2.0 and CM2.1. The versions differ primarily in the dynamical core used in the atmospheric component, along with the cloud tuning and some details of the land and ocean components. For both coupled models, the resolution of the land and atmospheric components is 2° latitude x 2.5° longitude; the atmospheric model has 24 vertical levels.
The ocean resolution is 1° in latitude and longitude, with meridional resolution equatorward of 30° becoming progressively finer, such that the meridional resolution is 1/3° at the equator. There are 50 vertical levels in the ocean, with 22 evenly spaced levels within the top 220 m. The ocean component has poles over North America and Eurasia to avoid polar filtering. Neither coupled model employs flux adjustments.
The control simulations have stable, realistic climates when integrated over multiple centuries. Both models have simulations of ENSO that are substantially improved relative to previous GFDL coupled models. The CM2.0 model has been further evaluated as an ENSO forecast model and has good skill (CM2.1 has not been evaluated as an ENSO forecast model). Generally reduced temperature and salinity biases exist in CM2.1 relative to CM2.0. These reductions are associated with 1) improved simulations of surface wind stress in CM2.1 and associated changes in oceanic gyre circulations; 2) changes in cloud tuning and the land model, both of which act to increase the net surface shortwave radiation in CM2.1, thereby reducing an overall cold bias present in CM2.0; and 3) a reduction of ocean lateral viscosity in the extratropics in CM2.1, which reduces sea ice biases in the North Atlantic.
Both models have been used to conduct a suite of climate change simulations for the 2007 Intergovernmental Panel on Climate Change (IPCC) assessment report and are able to simulate the main features of the observed warming of the twentieth century. The climate sensitivities of the CM2.0 and CM2.1 models are 2.9 and 3.4 K, respectively. These sensitivities are defined by coupling the atmospheric components of CM2.0 and CM2.1 to a slab ocean model and allowing the model to come into equilibrium with a doubling of atmospheric CO2. The output from a suite of integrations conducted with these models is freely available online (see http://nomads.gfdl.noaa.gov/).
There’s a brief description of the newer model version CM3.0 on the GFDL page.