[I was going to post this new article not long after the last article in the series, but felt I was missing something important and needed to think about it. Instead I’ve not had any new insights and am posting for comment.]

In Part Nine – Data I, we looked at the relationship between Ts (surface temperature) and OLR (outgoing longwave radiation), for reasons all explained there.

The relationship shown there appears to be primarily the seasonal relationship, which looks like a positive feedback because OLR increases by only about 2 W/m² per 1 K of temperature increase (less than the 3.6 W/m² per K no-feedback value). What about the feedback on timescales other than seasonal?

From the 2001-2013 data, here is the monthly mean and the daily mean for both Ts and OLR:

Monthly mean & daily mean for Ts & OLR

Figure 1

If we remove the monthly mean from the data, here are those same relationships (shown in the last article as anomalies from the overall 2001-2013 mean):

[Figure: OLR vs Ts (NCAR/CERES), monthly means removed]

Figure 2 – Click to Expand

On a lag of 1 day there is a possible relationship with a low correlation – and the rest of the lags show no relationship at all.

Of course, we have created a problem with this new dataset – as the lag increases we are “jumping boundaries”. For example, on the 7-day lag, all of the Ts data in the last week of April is being compared with the OLR data in the first week of May. With temperatures rising steadily through spring, the last week of April sits above the April mean (“positive temperature anomaly”), but the first week of May sits below the May mean (“negative OLR anomaly”). So on a 7-day lag we expect roughly 1/4 of our data (7 days out of ~30) to show the opposite relationship.

So we can show the data with the monthly “boundary jumps” removed – which restricts us to lags of, say, 1-14 days (cutting out between about 3% and 50% of the data, depending on the lag); and we can also show the data as anomalies from the daily mean. Both approaches have the potential to demonstrate the feedback on shorter timescales than the seasonal cycle.
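For anyone wanting to reproduce this, here is a minimal Matlab sketch of the anomaly construction (ts, olr and dates are illustrative names for the daily global-mean series and matching datenum vector, not my actual scripts):

```matlab
% Subtract each calendar month's mean, then keep only lagged pairs that
% stay inside one month.
dv = datevec(dates);                      % [yyyy mm dd hh mm ss] per day
yr = dv(:,1);  mon = dv(:,2);
tsA = ts(:);  olrA = olr(:);
for yy = unique(yr)'
    for mm = 1:12
        idx = (yr == yy & mon == mm);
        tsA(idx)  = tsA(idx)  - mean(tsA(idx));
        olrA(idx) = olrA(idx) - mean(olrA(idx));
    end
end
lag = 7;                                  % days
keep = find(mon(1:end-lag) == mon(1+lag:end) & ...
            yr(1:end-lag)  == yr(1+lag:end));
x = tsA(keep);  y = olrA(keep + lag);     % no pair crosses a month boundary
```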

First, here is the data with daily means removed:

[Figure: OLR vs Ts (NCAR/CERES), daily means removed]

Figure 3 – Click to Expand

Second, here is the data with the monthly means removed as in figure 2, but this time ensuring that no monthly boundaries are crossed (so some of the data is removed to ensure this):

[Figure: OLR vs Ts (NCAR/CERES), monthly means removed, no boundary crossing]

Figure 4 – Click to Expand

So basically this demonstrates no correlation between change in daily global OLR and change in daily global temperature on less-than-seasonal timescales (or “operator error” in the creation of my anomaly data). This excludes the very short timescale of day-to-night change, because we haven’t tested it here.

This was surprising at first sight.

That is, we see global Ts increasing on a given day but we can’t distinguish any corresponding change in global OLR from random changes, at least until we get to seasonal time periods? (See graph in last article).

Then the probable reason came into view. Remember that this is anomaly data (daily global temperature with the monthly mean subtracted). This bar graph demonstrates that when we are looking at anomaly data, most changes in global Ts are reversed the next day, or usually within a few days:

Days temperature goes in same direction

Figure 5

This means that we are unlikely to see changes in Ts causing noticeable changes in OLR unless the climate response we are looking for (humidity and cloud changes) occurs within a day or two.
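For anyone wanting to reproduce the statistic behind Figure 5, a minimal Matlab sketch (tsA is the monthly-mean-removed daily anomaly from the earlier sketch; names are illustrative):

```matlab
% Count how many consecutive days the global Ts anomaly keeps moving in
% the same direction (days with exactly zero change are rare and ignored).
d = sign(diff(tsA(:)));                   % +1 for an up day, -1 for a down day
flips = find(d(1:end-1) ~= d(2:end));     % indices where the direction reverses
runs = diff([0; flips; numel(d)]);        % lengths of same-direction runs
histogram(runs)                           % most runs turn out to be 1-2 days
```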

That’s my preliminary thinking, looking at the data – i.e., we can’t expect to see much of a relationship, and we don’t see any relationship.

One further point – explained in much more detail in the (short) series Measuring Climate Sensitivity – is that of course changes in temperature are not caused by some mechanism that is independent of radiative forcing.

That is, our measurement problem is compounded: changes in temperature are first caused by fluctuations in radiative forcing (the radiation balance) and in ocean heat, and then we measure the change in the radiation balance that results from this temperature change:

Radiation balance & ocean heat balance => Temperature change => Radiation balance & ocean heat balance

So we can’t easily distinguish the net radiation change caused by temperature changes from the radiative contribution to the original temperature changes.

I look forward to readers’ comments.

Articles in the Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses – answering some questions about Part One

Part Two – some introductory ideas about water vapor including measurements

Part Three – effects of water vapor at different heights (non-linearity issues), problems of the 3d motion of air in the water vapor problem and some calculations over a few decades

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

Part Eight – Clear Sky Comparison of Models with ERBE and CERES – a paper from Chung et al (2010) showing clear sky OLR vs temperature vs models for a number of cases

Part Nine – Data I – Ts vs OLR – data from CERES on OLR compared with surface temperature from NCAR – and what we determine

Part Ten – Data II – Ts vs OLR – more on the data

In the last article we looked at a paper which tried to unravel – for clear sky only – how the OLR (outgoing longwave radiation) changed with surface temperature. It did the comparison by region, by season and from year to year.

The key point for new readers to understand – why are we interested in how OLR changes with surface temperature? The concept is not so difficult. The practical analysis presents more problems.

Let’s review the concept – and for more background please read at least the start of the last article: if we increase the surface temperature, perhaps due to increases in GHGs, but it could be due to any reason, what happens to outgoing longwave radiation? Obviously, we expect OLR to increase. The real question is: by how much?

If there is no feedback then OLR should increase by about 3.6 W/m² for every 1 K increase in surface temperature (these values are global averages):

  • If there is positive feedback, perhaps due to more humidity, then we expect OLR to increase by less than 3.6 W/m² – think “not enough heat got out to get things back to normal”
  • If there is negative feedback, then we expect OLR to increase by more than 3.6 W/m². In the paper we reviewed in the last article the authors found about 2 W/m² per 1K increase – a positive feedback, but were only considering clear sky areas
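To put rough numbers on those bullet points (my own arithmetic, using the standard linear relation between a radiative forcing and the equilibrium temperature response, ΔT = ΔF/λ, where λ is the slope of OLR vs Ts):

$$\frac{\Delta T_{\lambda=2.0}}{\Delta T_{\lambda=3.6}} = \frac{3.6\ \mathrm{W/m^2\,K^{-1}}}{2.0\ \mathrm{W/m^2\,K^{-1}}} = 1.8$$

That is, if the 2 W/m² per 1 K slope applied to long-term warming, the response to a given forcing would be amplified by a factor of about 1.8 over the no-feedback case – with the large caveat that this was a clear-sky result only.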

One reader asked about an outlier point on the regression slope and whether it affected the result. This motivated me to do something I have had on my list for a while now – get “all of the data” and analyse it. This way, we can review it and answer questions ourselves – like in the Visualizing Atmospheric Radiation series where we created an atmospheric radiation model (first principles physics) and used the detailed line by line absorption data from the HITRAN database to calculate how this change and that change affected the surface downward radiation (“back radiation”) and the top of atmosphere OLR.

With the raw surface temperature, OLR and humidity data “in hand” we can ask whatever questions we like and answer these questions ourselves..

NCAR reanalysis, CERES and AIRS

CERES and AIRS – satellite instruments – are explained in CERES, AIRS, Outgoing Longwave Radiation & El Nino.

CERES measures total OLR in a 1ºx 1º grid on a daily basis.

AIRS has a “hyper-spectral” instrument, which means it looks at lots of frequency channels. The intensity of radiation at these many wavelengths can be converted, via calculation, into measurements of atmospheric temperature at different heights, water vapor concentration at different heights, CO2 concentration, and concentration of various other GHGs. Additionally, AIRS calculates total OLR (it doesn’t measure it – i.e. it doesn’t have a measurement device from 4μm – 100μm). It also measures parameters like “skin temperature” in some locations and calculates the same in other locations.

For the purposes of this article, I haven’t yet dug into the “how” and the reliability of surface AIRS measurements. The main point to note about satellites is they sit at the “top of atmosphere” and their ability to measure stuff near the surface depends on clever ideas and is often subverted by factors including clouds and surface emissivity. (AIRS has microwave instruments specifically to independently measure surface temperature even in cloudy conditions, because of this problem).

NCAR is a “reanalysis product”. It is not measurement, but it is “informed by measurement”. It is part measurement, part model. Where there is reliable data measurement over a good portion of the globe the reanalysis is usually pretty reliable – only being suspect at the times when new measurement systems come on line (so trends/comparisons over long time periods are problematic). Where there is little reliable measurement the reanalysis depends on the model (using other parameters to allow calculation of the missing parameters).

Some more explanation in Water Vapor Trends under the sub-heading Reanalysis – or Filling in the Blanks.

For surface temperature measurements reanalysis is not subverted by models too much. However, the mainstream surface temperature series are surely better than NCAR – I know that there is an army of “climate interested people” who follow this subject very closely. (I am not in that group).

I used NCAR because it is simple to download and extract. And I expect – but haven’t yet verified – that it will be quite close to the various mainstream surface temperature series. If someone is interested and can provide daily global temperature from another surface temperature series as an Excel, csv, .nc – or pretty much any data format – we can run the same analysis.

For those interested, see note 1 on accessing the data.

Results – Global Averages

For our starting point in this article I decided to look at global averages from 2001 to 2013 inclusive (data from CERES not yet available for the whole of 2014). This was after:

  • looking at daily AIRS data
  • creating and comparing NCAR over 8 days with AIRS 8-day averages for surface skin temperature and surface air temperature
  • creating and comparing AIRS over 8-days with CERES for TOA OLR

More on those points in later articles.

The global relationship between surface temperature and OLR is our primary interest – for the purpose of determining feedbacks. Then we want to figure out some detail about why it occurs. I am especially interested in the AIRS data because it is the only global measurement of upper tropospheric water vapor (UTWV) – and UTWV, along with clouds, are the key factors in the question of feedback – how OLR changes with surface temperature. For now, we will look at the simple relationship between surface temperature (“skin temperature”) and OLR.

Here is the data, shown as an anomaly from the global mean values over the period Jan 1st, 2001 to Dec 31st, 2013. Each graph represents a different lag – how does global OLR (CERES) change with global surface temperature (NCAR) on a lag of 1 day, 7 days, 14 days and so on:


Figure 1 – Click to Expand

The slope gives the “apparent feedback” and the R² reflects how much of the variation is explained by the linear trend – something that can also be gauged just by looking at the scatter in each graph.
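The slope and R² in each panel come from a simple linear regression; a minimal Matlab sketch (tsAnom and olrAnom are the daily global anomaly vectors described in the calculation note below; names are illustrative):

```matlab
% Regress global OLR anomaly on global Ts anomaly at a given lag (days).
lag = 7;
x = tsAnom(1:end-lag);        % Ts leads ...
y = olrAnom(1+lag:end);       % ... OLR follows by 'lag' days
p = polyfit(x, y, 1);         % p(1) = apparent feedback in W/m^2 per K
r = corrcoef(x, y);
r2 = r(1,2)^2;                % fraction of variance explained by the fit
```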

For reference, here is the timeseries data, as anomalies, with the temperature anomaly multiplied by a factor of 3 so its magnitude is similar to the OLR anomaly:

OLR from CERES vs Ts from NCAR as timeseries

Figure 2 – Click to Expand

Note on the calculation – I used the daily data to calculate a global mean value (area-weighted) and calculated one mean value over the whole time period then subtracted it from every daily data value to obtain an anomaly for each day. Obviously we would get the same slope and R² without using anomaly data (just a different intercept on the axes).

For reference, mean OLR = 238.9 W/m², mean Ts = 288.0 K.
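And a minimal Matlab sketch of that calculation (olr is an nlon x nlat x ndays array and lat the latitude vector from the data file; cos-latitude weights are a good approximation even on the T62 gaussian grid):

```matlab
% Area-weighted global mean for each day, then anomaly from the
% whole-period mean.
w = cosd(lat(:));  w = w / sum(w);        % approximate area weight per latitude
latMean = squeeze(mean(olr, 1));          % zonal mean -> nlat x ndays
globalMean = w' * latMean;                % weighted meridional mean -> 1 x ndays
olrAnom = globalMean - mean(globalMean);  % anomaly from the 2001-2013 mean
```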

My first question – before even producing the graphs – was whether a lag graph shows the change in OLR due to a change in Ts or due to a mixture of many effects. That is, what is the interpretation of the graphs?

The second question – what is the “right lag” to use? We don’t expect an instant response when we are looking for feedbacks:

  • The OLR through the window region will of course respond instantly to surface temperature change
  • The OLR as a result of changing humidity will depend upon how long it takes for more evaporated surface water to move into the mid- to upper-troposphere
  • The OLR as a result of changing atmospheric temperature, in turn caused by changing surface temperature, will depend upon the mixture of convection and radiative cooling

To say we know the right answer in advance pre-supposes that we fully understand atmospheric dynamics. This is the question we are asking, so we can’t pre-suppose anything. But at least we can suggest that something in the realm of a few days to a few months is the most likely candidate for a reasonable lag.

But the idea that there is one constant feedback and one constant lag is an idea that might well be fatally flawed, despite being seductively simple. (A little more on that in note 3).

And that is one of the problems of this topic. Non-linear dynamics means non-linear results – a subject I find hard to describe in simple words. But let’s say – changes in OLR from changes in surface temperature might be “spread over” multiple time scales and be different at different times. (I have half-written an article trying to explain this idea in words, hopefully more on that sometime soon).

But for the purpose of this article I only wanted to present the simple results – for discussion and for more analysis to follow in subsequent articles.



Wielicki, B. A., B. R. Barkstrom, E. F. Harrison, R. B. Lee III, G. L. Smith, and J. E. Cooper, 1996: Clouds and the Earth’s Radiant Energy System (CERES): An Earth Observing System Experiment, Bull. Amer. Meteor. Soc., 77, 853-868   – free paper

Kalnay et al., The NCEP/NCAR 40-year Reanalysis Project, Bull. Amer. Meteor. Soc., 77, 437-470, 1996 – free paper

NCEP Reanalysis data provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their Web site at http://www.esrl.noaa.gov/psd/


Note 1: Boring Detail about Extracting Data

On the plus side, unlike many science journals, the data is freely available. Credit to the organizations that manage this data for their efforts in this regard, which includes visualization software and various ways of extracting data from their sites. However, you can still expect to spend a lot of time figuring out what files you want, where they are, downloading them, and then extracting the data from them. (Many traps for the unwary).

NCAR – data in .nc files, each parameter as a daily value (or 4x daily) in a separate annual .nc file on an (approx) 2.5º x 2.5º grid (actually T62 gaussian grid).

Data via ftp – ftp.cdc.noaa.gov. See http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.surface.html.

You get lat, long, and time in the file as well as the parameter. Care needed to navigate to the right folder because the filenames are the same for the 4x daily and the daily data.

NCAR use the latest-version .nc files (which my circa-2010 Matlab would not open – I had to update to the latest version, with many hours wasted trying to work out the reason for the failure).
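Once Matlab is up to date the extraction itself is short. A sketch – the file and variable names follow the NCEP convention but should be checked with ncinfo against the actual download:

```matlab
% Read one year of daily skin temperature from an NCEP/NCAR reanalysis file.
fname = 'skt.sfc.gauss.2001.nc';
ncinfo(fname)                 % lists the variables actually present
lat = ncread(fname, 'lat');   % 94 gaussian latitudes
lon = ncread(fname, 'lon');   % 192 longitudes
skt = ncread(fname, 'skt');   % nlon x nlat x ndays, auto-scaled to kelvin
```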

CERES – data in .nc files; you select the data you want and the time period (the request must come to less than a 2 GB file) and you get a file to download. I downloaded daily OLR data for each annual period. Data on a 1º x 1º grid. CERES use an older .nc version, so there should be no problem opening the files.

Data from http://ceres-tool.larc.nasa.gov/ord-tool/srbavg

AIRS – data in .hdf files, in daily, 8-day average, or monthly average. The data is “ascending” = daytime, “descending” = nighttime plus some other products. Daily data doesn’t give global coverage (some gaps). 8-day average does but there are some missing values due to quality issues. Data in a 1ºx 1º grid. I used v6 data.

Data access page – http://disc.sci.gsfc.nasa.gov/datacollection/AIRX3STD_V006.html?AIRX3STD&#tabs-1.

Data via ftp.

HDF is not trivial to open up. The AIRS team have helpfully provided a Matlab tool to extract data which helped me. I think I still spent many hours figuring out how to extract what I needed.

Files Sizes – it’s a lot of data:

NCAR files that I downloaded (skin temperature) are only 12MB per annual file.

CERES files with only 2 parameters are 190MB per annual file.

AIRS files as 8-day averages (or daily data) are 400MB per file.

Also the grid for each is different. Lat from S-pole to N-pole in CERES, the reverse for AIRS and NCAR. Long from 0.5º to 359.5º in CERES but -179.5 to 179.5 in AIRS. (Note for any Matlab people, it won’t regrid, say using interp2, unless the grid runs from lowest number to highest number).
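A sketch of the kind of shuffling needed before interp2 will accept the grids (field, lat and lon are illustrative names for any of the datasets):

```matlab
% Make latitude and longitude monotonically increasing so interp2 works.
if lat(1) > lat(end)                   % grid runs N-pole to S-pole
    lat = flipud(lat(:));
    field = flip(field, 2);            % flip the latitude dimension
end
lon2 = mod(lon(:) + 180, 360) - 180;   % 0..360 -> -179.5..179.5
[lon2, order] = sort(lon2);
field = field(order, :, :);            % reorder longitudes to ascending
```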

Note 2: Checking data – because I plan on using the daily 1ºx1º grid data from CERES and NCAR, I used it to create the daily global averages. As a check I downloaded the global monthly averages from CERES and compared. There is a discrepancy, which averages at 0.1 W/m².

Here is the difference by month:


Figure 3 – Click to expand

And a scatter plot by month of year, showing some systematic bias:


Figure 4

As yet, I haven’t dug any deeper to find if this is documented – for example, is there a correction applied to the daily data product in monthly means? is there an issue with the daily data? or, more likely, have I %&^ed up somewhere?

Note 3: Extract from Measuring Climate Sensitivity – Part One:

Linear Feedback Relationship?

One of the biggest problems with the idea of climate sensitivity, λ, is the idea that it exists as a constant value.

From Cloud Feedbacks in the Climate System: A Critical Review, Stephens, Journal of Climate (2005):

The relationship between global-mean radiative forcing and global-mean climate response (temperature) is of intrinsic interest in its own right. A number of recent studies, for example, discuss some of the broad limitations of (1) and describe procedures for using it to estimate Q from GCM experiments (Hansen et al. 1997; Joshi et al. 2003; Gregory et al. 2004) and even procedures for estimating from observations (Gregory et al. 2002).

While we cannot necessarily dismiss the value of (1) and related interpretation out of hand, the global response, as will become apparent in section 9, is the accumulated result of complex regional responses that appear to be controlled by more local-scale processes that vary in space and time.

If we are to assume gross time–space averages to represent the effects of these processes, then the assumptions inherent to (1) certainly require a much more careful level of justification than has been given. At this time it is unclear as to the specific value of a global-mean sensitivity as a measure of feedback other than providing a compact and convenient measure of model-to-model differences to a fixed climate forcing (e.g., Fig. 1).

[Emphasis added and where the reference to “(1)” is to the linear relationship between global temperature and global radiation].

If, for example, λ is actually a function of location, season & phase of ENSO.. then clearly measuring overall climate response is a more difficult challenge.

Long before the printing of money, golden eggs were the only currency.

In a deep cave, goose Day-Lewis, the last of the gold-laying geese, was still at work.

Day-Lewis lived in the country known affectionately as Utopia. Every day, Day-Lewis laid 10 perfect golden eggs, and was loved and revered for her service. Luckily, everyone had read Aesop’s fables, and no one tried to kill Day-Lewis to get all those extra eggs out. Still Utopia did pay a few armed guards to keep watch for the illiterates, just in case.

Utopia wasn’t into storing wealth because it wanted to run some important social programs to improve the education and health of the country. Thankfully they didn’t run a deficit and issue bonds so we don’t need to get into any political arguments about libertarianism.

This article is about golden eggs.

Utopia employed the service of bunny Fred to take the golden eggs to the nearby country of Capitalism in return for services of education and health. Every day, bunny Fred took 10 eggs out of the country. Every day, goose Day-Lewis produced 10 eggs. It was a perfect balance. The law of conservation of golden eggs was intact.

Thankfully, history does not record any comment on the value of the services received for these eggs, or on the benefit to society of those services, so we can focus on the eggs story.

Due to external circumstances outside of Utopia’s control, on January 1st, the year of Our Goose 150, a new international boundary was created between Utopia and Capitalism. History does not record the complex machinations behind the reasons for this new border control.

However, as always with government organizations, things never go according to plan. On the first day, January 1st, there were paperwork issues.

Bunny Fred showed up with 10 golden eggs, and, what he thought was the correct paperwork. Nothing got through. Luckily, unlike some government organizations with wafer-thin protections for citizens’ rights, they didn’t practice asset forfeiture for “possible criminal activity we might dream up and until you can prove you earned this honestly we are going to take it and never give it back”. Instead they told Fred to come back tomorrow.

On January 2nd, Bunny Fred had another run at the problem and brought another 10 eggs. The export paperwork for the supply of education and health only allowed for 10 golden eggs to be exported to Capitalism, so border control sent on the 10 eggs from Jan 1st and insisted Bunny Fred take the other 10 eggs back to Utopia.

On January 3rd, Bunny Fred, desperate to remedy the deficit of services in Utopia, took 20 eggs – 10 from Day-Lewis and 10 he had brought back from border control the day before.

Insistent on following their new ad hoc processes, border control could only send on 10 eggs to Capitalism. As they had no approved paperwork for “storing” extra eggs, they insisted that Fred take back the excess eggs.

Every day, the same result:

  • Day-Lewis produced 10 eggs, Bunny Fred took 20 eggs to border control
  • Border control sent 10 eggs to Capitalism, Bunny Fred brought 10 eggs back

One day some people who thought they understood the law of conservation of golden eggs took a look at the current situation and declared:

Heretics! This is impossible. Day-Lewis, last of the gold-laying geese, only produces 10 eggs per day. How can Bunny Fred be taking 20 eggs to border control?

You can’t create golden eggs! The law of conservation of golden eggs has been violated.

You can’t get more than 100% efficiency. This is impossible.

And in other completely unrelated stories:

A Challenge for Bryan

Do Trenberth and Kiehl understand the First Law of Thermodynamics? & Part Two & Part Three – The Creation of Energy?

and recent comments in CO2- An Insignificant Trace Gas? – Part One

Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics

The Three Body Problem

In Part Seven we had a look at a 2008 paper by Gettelman & Fu which assessed models vs measurements for water vapor in the upper troposphere.

In this article we will look at a 2010 paper by Chung, Yeomans & Soden. This paper studies outgoing longwave radiation (OLR) vs temperature change, for clear skies only, in three ways (and comparing models and measurements):

  • by region
  • by season
  • year to year

Why is this important and what is the approach all about?

Let’s suppose that the surface temperature increases for some reason. What happens to the total annual radiation emitted by the climate system? We expect it to increase. The hotter an object is, the more it radiates.

If there is no positive feedback in the climate system then for a uniform global 1K (=1ºC) increase in surface & atmospheric temperature we expect the OLR to increase by 3.6 W/m². This is often called, by convention only, the “Planck feedback”. It refers to the fact that an increased surface temperature, and increased atmospheric temperature, will radiate more – and the “no feedback value” is 3.6 W/m² per 1K rise in temperature.

To explain a little further for newcomers.. with the concept of “no positive feedback” an initial 1K surface temperature rise – from any given cause – will stay at 1K. But if there is positive feedback in the climate system, an initial 1K surface temperature rise will result in a final temperature higher than 1K.

If the OLR increases by less than 3.6 W/m² the final temperature will end up higher than 1K – positive feedback. If the OLR increases by more than 3.6 W/m² the final temperature will end up lower than 1K – negative feedback.
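As a rough sanity check on that 3.6 figure (my own back-of-envelope, not from the paper): treating the earth as a blackbody radiating at its effective emission temperature of about 255 K,

$$\mathrm{OLR} = \sigma T_e^4, \qquad \frac{d\,\mathrm{OLR}}{dT} = 4\sigma T_e^3 = 4 \times 5.67\times10^{-8} \times 255^3 \approx 3.8\ \mathrm{W/m^2\ per\ K}$$

which is close to the 3.6 W/m² per K that comes out of the detailed spectral calculations.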

Base Case

At the start of their paper they show the calculated clear-sky OLR change as the result of an ideal case. This is the change in OLR as a result of the surface and atmosphere increasing uniformly by 1K:

  • first, from the temperature change alone
  • second, from the change in water vapor as a result of this temperature change, assuming relative humidity stays constant
  • finally, from the first and second combined
From Chung et al (2010)

From Chung et al (2010)

Figure 1 – Click to expand

The graphs show the breakdown by pressure (=height) and latitude. 1000mbar is the surface and 200mbar is approximately the tropopause, the place where convection stops.

The sum of the first graph (note 1) is the “no feedback” response and equals 3.6 W/m². The sum of the second graph is the “feedback from water vapor” and equals -1.6 W/m². The combined result in the third graph equals 2.0 W/m². The second and third graphs are the result if relative humidity is constant.

We can also see that the tropics are where most of the changes take place.

They say:

One striking feature of the fixed-RH kernel is the small values in the tropical upper troposphere, where the positive OLR response to a temperature increase is offset by negative responses to the corresponding vapor increase. Thus under a constant-RH warming scenario, the tropical upper troposphere is in a runaway greenhouse state – the stabilizing effect of atmospheric warming is neutralized by the increased absorption from water vapor. Of course, the tropical upper troposphere is not isolated but is closely tied to the lower tropical troposphere where the combined temperature-water vapor responses are safely stabilizing.

To understand the first part of their statement, if temperatures increase and overall OLR does not increase at all then there is nothing to stop temperatures increasing. Of course, in practice, the “close to zero” increase in OLR for the tropical upper troposphere under a temperature rise can’t lead to any kind of runaway temperature increase. This is because there is a relationship between the temperatures in the upper troposphere and the lower- & mid- troposphere.

Relative Humidity Stays Constant?

Back in 1967, Manabe & Wetherald published their seminal paper which showed the result of increases in CO2 under two cases – with absolute humidity constant and with relative humidity constant:

Generally speaking, the sensitivity of the surface equilibrium temperature upon the change of various factors such as solar constant, cloudiness, surface albedo, and CO2 content are almost twice as much for the atmosphere with a given distribution of relative humidity as for that with a given distribution of absolute humidity..

..Doubling the existing CO2 content of the atmosphere has the effect of increasing the surface temperature by about 2.3ºC for the atmosphere with the realistic distribution of relative humidity and by about 1.3ºC for that with the realistic distribution of absolute humidity.

They explain important thinking about this topic:

Figure 1 shows the distribution of relative humidity as a function of latitude and height for summer and winter. According to this figure, the zonal mean distributions of relative humidity closely resemble one another, whereas those of absolute humidity do not. These data suggest that, given sufficient time, the atmosphere tends to restore a certain climatological distribution of relative humidity responding to the change of temperature.

It doesn’t mean that anyone should assume that relative humidity stays constant under a warmer world. It’s just likely to be a more realistic starting point than assuming that absolute humidity stays constant.

I only point this out for readers to understand that this idea is something that has seemed reasonable for almost 50 years. Of course, we have to question this “reasonable” assumption. How relative humidity changes as the climate warms or cools is a key factor in determining the water feedback and, therefore, it has had a lot of attention.

Results From the Paper

The observed rates of radiative damping from regional, seasonal, and interannual variations are substantially smaller than the rate of Planck radiative damping (3.6W/m²), yet slightly larger than that anticipated from a uniform warming, constant-RH response (2.0 W/m²).

The three comparison regressions can be seen, with ERBE data on the left and model results on the right:

From Chung et al (2010)

From Chung et al (2010)

Figure 2 – Click to expand

In the next figure, the differences between the models can be seen, and compared with ERBE and CERES results. The red “Planck” line is the no-feedback line, showing that (for these sets of results) models and experimental data show a positive feedback (when looking at clear sky OLR).

From Chung et al (2010)

From Chung et al (2010)

Figure 3 – Click to expand


At the least, we can see that climate models and measured values are quite close, when the results are aggregated. Both the model and the measured results are a long way from neutral feedback (the dashed slope in figure 2 and the red line in figure 3), instead they show positive feedback, quite close to what we would expect from constant relative humidity. The results indicate that relative humidity declines a little in the warmer case. The results also indicate that the models calculate a little more positive feedback than the real world measurements under these cases.

What does this mean for feedback from warming from increased GHGs? It’s the important question. We could say that the results tell us nothing, because how the world warms from increasing CO2 (and other GHGs) will change climate patterns and so seasonal, regional and year to year changes in periods from 1985-1988 and 2005-2008 are not particularly useful.

We could say that the results tell us that water vapor feedback is demonstrated to be a positive feedback, and matches quite closely the results of models. Or we could say that without cloudy sky data the results aren’t very interesting.

At the very least we can see that for current climate conditions under clear skies the change in OLR as temperature changes indicates an overall positive feedback, quite close to constant relative humidity results and quite close to what models calculate.

The ERBE results include the effect of a large El Nino and I do question whether year-to-year changes (graph c in figs 2 & 3) under El Nino to La Nina conditions can be considered to represent how the climate might warm with more CO2. If we consider how the weather patterns shift from El Nino to La Nina, it has long been clear that there are positive feedbacks, but also that the weather patterns end up back to normal (the cycle ends). I welcome knowledgeable readers explaining why El Nino feedback patterns are relevant to future climate shifts; perhaps this will help me to clarify my thinking, or correct my misconceptions.

However, the CERES results from 2005-2008 don’t include the effect of a large El Nino and they show an overall slightly more positive feedback.

I asked Brian Soden a few questions about this paper and he was kind enough to respond:

Q. Given the much better quality data since CERES and AIRS, why is ERBE data the focus?
A. At the time, the ERBE data was the only measurement that covered a large ENSO cycle (87/88 El Nino event followed by 88/89 La Nina)

Q. Why not include cloudy skies as well in this review? Collecting surface temperature data is more challenging of course because it needs a different data source. Is there a comparable study that you know of for cloudy skies?
A. The response of clouds to surface temperature changes is more complicated. We wanted to start with something relatively simple; i.e., water vapor. Andrew Dessler at Texas A&M has a paper that came out a few years back that looks at total-sky fluxes and thus includes the effects of clouds.

Q. Do you know of any studies which have done similar work with what must now be over 10 years of CERES/AIRS data?
A. Not off-hand. But it would be useful to do.



An assessment of climate feedback processes using satellite observations of clear-sky OLR, Eui-Seok Chung, David Yeomans, & Brian J. Soden, GRL (2010) – free paper

Thermal equilibrium of the atmosphere with a given distribution of relative humidity, Manabe & Wetherald, Journal of the Atmospheric Sciences (1967) – free paper


Note 1: The values are per 100 mbar “slice” of the atmosphere. So if we want to calculate the total change we need to sum the values in each vertical slice, and of course, because they vary through latitude we need to average the values (area-weighted) across all latitudes.
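In Matlab terms, a minimal sketch of that sum (assuming the kernel has been read in as an nlat x nslices array of per-100-mbar values; names are illustrative):

```matlab
% Area-weight across latitude, then sum the 100 mbar slices.
w = cosd(lat(:));  w = w / sum(w);   % approximate area weights
total = sum(w' * kernel);            % W/m^2 per K for the whole column
```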

I’ve been a student of history for a long time and have read quite a bit about Nazi Germany and WWII. In fact right now, having found audible.com I’m listening to an audio book The Coming of the Third Reich, by Richard Evans, while I walk, drive and exercise.

It’s heartbreaking to read about the war and to read about the Holocaust. Words fail me to describe the awfulness of that regime and what they did.

But it’s pretty easy for someone who is curious about evidence, or who has had someone question whether or not the Holocaust actually took place, to find and understand the proof.

The photos. The bodies. The survivors’ accounts. The thousands of eyewitness accounts. The army reports. The stated aims of Hitler and many of the leading Nazis in their own words.

We can all understand how to weigh up witness accounts and photos. It’s intrinsic to our nature.

People who don’t believe the Nazis murdered millions of Jews are denying simple and overwhelming evidence.

Let’s compare that with the evidence behind the science of anthropogenic global warming (AGW) and the inevitability of a 2-6ºC rise in temperature if we continue to add CO2 and other GHGs to the atmosphere.

Step 1 – The ‘greenhouse’ effect

To accept AGW of course you need to accept the ‘greenhouse’ effect. It’s fundamental science and not in question but what if you don’t take my word for it? What if you want to check for yourself?

And by the way, the complexity of the subject for many people becomes clear even at this stage, with countless hordes not even clear that the ‘greenhouse’ effect is just a building block for AGW. It is not itself AGW.

AGW relies on the ‘greenhouse’ effect but also on other considerations.

I wrote The “Greenhouse” Effect Explained in Simple Terms to make it simple, yet not too simple. But that article relies on (and references) many basics – radiation, absorption and emission of radiation through gases, heat transfer and convection. All of those are necessary to understand the greenhouse effect.

Many people have conceptual misunderstandings of “basic” physics. In reading comments on this blog and on other blogs I often see fundamental misunderstanding of how heat transfer works. No space here for that.

But the difficulty of communicating a physics idea is very real. Once someone has a conceptual block because they think some process works a subtly different way, the only way to resolve the question is with equations. It is further complicated because these misunderstandings are often unstated by the commenter – they don’t realize they see the world differently from physics basics.

So when we need to demonstrate that the greenhouse effect is real, and that it increases with more GHGs we need some equations. And by ‘increases’ I mean more GHGs mean a higher surface temperature, all other things being equal. (Which, of course, they never are).

The equations are crystal clear and no one over the age of 10 could possibly be confused. I show the equations for radiative transfer (and their derivation) in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations:

$$I_\lambda(0) = I_\lambda(\tau_m)\,e^{-\tau_m} + \int_0^{\tau_m} B_\lambda(T)\,e^{-\tau}\,d\tau \qquad [16]$$

The terms are explained in that article. In brief, the equation shows how the intensity of radiation at the top of the atmosphere at one wavelength is affected by the number of absorbing molecules in the atmosphere. And, obviously, you have to integrate it over all wavelengths. Why do I even bring that up, it’s so simple?


And equally obviously, anyone questioning the validity of the equation, or the results from the equation, is doing so from evil motives.

I do need to add that we have to prescribe the temperature profile in the atmosphere (and the GHG concentration) to be able to solve this equation. The temperature profile is known as the lapse rate – temperature reduces as you go up in altitude. In the tropical regions where convection is stronger we can come up with a decent equation for the lapse rate.

All you have to know is the first law of thermodynamics, the ideal gas law and the equation for the change in pressure vs height due to the mass of the atmosphere. Everyone can do this in their heads of course. But here it is:

[Figure: derivation of the tropical lapse rate]
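For the simplest (dry) case the derivation compresses to a couple of lines – my summary of the standard result; the tropical version adds latent heat release from condensation. The first law for an adiabatically rising parcel, combined with the hydrostatic equation, gives:

$$c_p\,dT = \frac{dp}{\rho} = -g\,dz \quad\Rightarrow\quad \frac{dT}{dz} = -\frac{g}{c_p} \approx -9.8\ \mathrm{K/km}$$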

So with these two elementary principles we can prove that more GHGs means a higher surface temperature before any feedbacks. That’s the ‘greenhouse’ effect.

Step 2 – AGW = ‘Greenhouse effect’ plus feedbacks

This is so simple. Feedbacks are things like – a hotter world probably has more water vapor in the atmosphere, and water vapor is the most important GHG, so this amplifies the ‘greenhouse’ effect of increasing CO2. Calculating the changes is only a little more difficult than the super simple equations I showed earlier.

You just need a GCM – a climate model run on a supercomputer. That’s all.

There are many misconceptions about climate models but only people who are determined to believe a lie can possibly believe them.

As an example, many people think that the amplifying effect, or positive feedback, of water vapor is programmed into the GCMs. All they have to do is have a quick read through the 200-page technical summary of a model like say CAM (community atmosphere model).

Here is an extract from Description of the NCAR Community Atmosphere Model (CAM 3.0), W.D. Collins (2004):

[Figure: extract from the CAM 3.0 description, Collins (2004)]

As soon as anyone reads this – and if they can’t be bothered to find the reference via Google Scholar and read it, well, what can you say about such people – as soon as they read it, of course, it’s crystal clear that positive feedback isn’t “programmed in” to climate models.

So GCMs all come to the conclusion that more GHGs results in a hotter world (2-6ºC). They solve basic physics equations in a “grid” fashion, stepping forward in time, and so the result is clear and indisputable.

Step 3 – Attribution Studies

I recently spent some time reading AR4 and AR5 (the IPCC reports) on Attribution (Natural Variability and Chaos – Seven – Attribution & Fingerprints Or Shadows? and Natural Variability and Chaos – Three – Attribution & Fingerprints).

This is the work of attributing the last century’s rise in temperature to the increases in anthropogenic GHGs. I followed the trail of papers back and found one of the source papers by Hasselmann from 1993. In it we can clearly see the basis for attribution studies:

[Figure: extract from Hasselmann (1993) showing the basis of attribution studies]

Now it’s very difficult to believe that anyone questioning attribution studies isn’t of evil intent. After all, there is the basic principle in black and white. Who could be confused?

As a side note, to excuse my own irredeemable article on the topic, the actual basis of attribution isn’t just in these equations, it is also in the assumption that climate models accurately calculate the statistics of natural variability. The IPCC chapter on attribution doesn’t really make this clear, yet in another chapter (11) different authors suggest completely restating the statistical certainty claimed in the attribution chapter because “..it is explicitly recognized that there are sources of uncertainty not simulated by the models”. Their ad hoc restatement, while more accurate than the executive summary, still needs to be justified.

However, none of this can offer me redemption.

Step 4 – Unprecedented Temperature Rises

(This could probably be switched around with step 3. The order here is not important).

Once people have seen the unprecedented rise in temperature this century, how could they not align themselves with the forces of good?

Anthropogenic warming ‘writ large’ (AR5, chapter 2):

[Figure: observed surface temperature rise – AR5, chapter 2]

There’s the problem. The last 400,000 years were quite static by comparison:

[Figure: temperature proxies over the last 800,000 years]

From ‘800,000 Years of Abrupt Climate Variability’, Barker et al (2011)

The red is a Greenland ice core proxy for temperature, the green is a mid-latitude SST estimate – and it’s important to understand that calculating global annual temperatures is quite difficult and not done here.

So no one who looks at climate history can possibly be excused for not agreeing with consensus climate science, whatever that is when we come to “consensus paleoclimate”.. It was helpful to read Chapter 5 of AR5:

There is high confidence that orbital forcing is the primary external driver of glacial cycles (Kawamura et al., 2007; Cheng et al., 2009; Lisiecki, 2010; Huybers, 2011).

I’ve only read about 350 papers on paleoclimate and I’m confused about the origin of the high confidence, as I explained in Ghosts of Climates Past – Eighteen – “Probably Nonlinearity” of Unknown Origin.

Anyway, the key takeaway message is that the recent temperature history is another demonstration that anyone not in line with consensus climate science is clearly acting from evil motives.


I thought about putting a photo of the Holocaust from a concentration camp next to a few pages of mathematical equations – to make a point. But that would be truly awful.

That would trivialize the memory of the terrible suffering of millions of people under one of the most evil regimes the world has seen.

And that, in fact, is my point.

I can’t find words to describe how I feel about the apologists for the Nazi regime, and those who deny that the holocaust took place. The evidence for the genocide is overwhelming and everyone can understand it.

On the other hand, those who ascribe the word ‘denier’ to people not in agreement with consensus climate science are trivializing the suffering and deaths of millions of people. Everyone knows what this word means. It means people who are apologists for those evil jackbooted thugs who carried the swastika and cheered as they sent six million people to their execution.

By comparison, understanding climate means understanding maths, physics and statistics. This is hard, very hard. It’s time consuming, requires some training (although people can be self-taught), actually requires academic access to be able to follow the thread of an argument through papers over a few decades – and lots and lots of dedication.

The worst you could say is people who don’t accept ‘consensus climate science’ are likely finding basic – or advanced – thermodynamics, fluid mechanics, heat transfer and statistics a little difficult and might have misunderstood, or missed, a step somewhere.

The best you could say is with such a complex subject straddling so many different disciplines, they might be entitled to have a point.

If you have no soul and no empathy for the suffering of millions under the Third Reich, keep calling people who don’t accept consensus climate science ‘deniers’.

Otherwise, just stop.

Important Note: The moderation filter on comments is setup to catch the ‘D..r’ word specifically because such name calling is not accepted on this blog. This article is an exception to the norm, but I can’t change the filter for one article.

In one stereotypical view of climate, the climate state has some variability over a 30 year period – we could call this multi-decadal variability “noise” – but it is otherwise fixed by the external conditions, the “external forcings”.

This doesn’t really match up with climate history, but climate models have mostly struggled to do much more than reproduce the stereotyped view. See Natural Variability and Chaos – Four – The Thirty Year Myth for a different perspective on (only) the timescale.

In this stereotypical view, the only reason why “long term” (= 30-year) statistics can change is “external forcing”. Otherwise, where does the “extra energy” come from? (We will examine this particular idea in a future article.)

One of our commenters recently highlighted a paper from Drijfhout et al (2013) – Spontaneous abrupt climate change due to an atmospheric blocking–sea-ice–ocean feedback in an unforced climate model simulation.

Here is how the paper introduces the subject:

Abrupt climate change is abundant in geological records, but climate models rarely have been able to simulate such events in response to realistic forcing.

Here we report on a spontaneous abrupt cooling event, lasting for more than a century, with a temperature anomaly similar to that of the Little Ice Age. The event was simulated in the preindustrial control run of a high-resolution climate model, without imposing external perturbations.

This is interesting and instructive on many levels so let’s take a look. In later articles we will look at the evidence in climate history for “abrupt” events, for now note that Dansgaard–Oeschger (DO) events are the description of the originally identified form of abrupt change.

The distinction between “abrupt” changes and change that is not “abrupt” is an artificial one; it is more a reflection of the historical order in which we discovered “slow” and “abrupt” change.

Under a Significance inset box in the paper:

There is a long-standing debate about whether climate models are able to simulate large, abrupt events that characterized past climates. Here, we document a large, spontaneously occurring cold event in a preindustrial control run of a new climate model.

The event is comparable to the Little Ice Age both in amplitude and duration; it is abrupt in its onset and termination, and it is characterized by a long period in which the atmospheric circulation over the North Atlantic is locked into a state with enhanced blocking.

To simulate this type of abrupt climate change, climate models should possess sufficient resolution to correctly represent atmospheric blocking and a sufficiently sensitive sea-ice model.

Here is their graph of the time series of temperature (left) and the geographical anomaly (right), expressed as the change during the 100-year event against the background of years 200-400:

From Drijfhout et al 2013

From Drijfhout et al 2013

Figure 1 – Click to expand

In their summary they state:

The lesson learned from this study is that the climate system is capable of generating large, abrupt climate excursions without externally imposed perturbations. Also, because such episodic events occur spontaneously, they may have limited predictability.

Before we look at the “causes” – the climate mechanisms – of this event, let’s briefly look at the climate model.

Their coupled GCM has an atmospheric resolution of just over 1º x 1º with 62 vertical levels, and the ocean has a resolution of 1º in the extra-tropics, increasing to 0.3º near the equator. The ocean has 42 vertical levels, with the top 200m of the ocean represented by 20 equally spaced 10m levels.

The GHGs and aerosols are set at pre-industrial 1860 values and don’t change over the 1,125 year simulation. There are no “flux adjustments” (no need for artificial momentum and energy additions to keep the model stable as with many older models).

See note 1 for a fuller description and the paper in the references for a full description.

The simulated event itself:

After 450 y, an abrupt cooling event occurred, with a clear signal in the Atlantic multidecadal oscillation (AMO). In the instrumental record, the amplitude of the AMO since the 1850s is about 0.4 °C, its SD 0.2 °C. During the event simulated here, the AMO index dropped by 0.8 °C for about a century..

How did this abrupt change take place?

The main mechanism was a change in the Atlantic Meridional Overturning Circulation (AMOC), also known as the thermohaline circulation. The AMOC provides a nice example of the sensitivity of climate. It brings warmer water from the tropics into higher latitudes. A necessary driver of this process is the intensity of deep convection in high latitudes (sinking of dense water), which in turn depends on two factors – temperature and salinity. More accurately, it depends on the competing differences in anomalies of temperature and salinity.

To shut down deep convection, the density of the surface water must decrease. In the temperature range of 7–12 °C, typical for the Labrador Sea, the SST anomaly in degrees Celsius has to be roughly 5 times the sea surface salinity (SSS) anomaly in practical salinity units for density compensation to occur. The SST anomaly was only about twice that of the SSS anomaly; the density anomaly was therefore mostly determined by the salinity anomaly.
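The “roughly 5 times” follows from a linearized equation of state – my paraphrase, with representative coefficient values for Labrador Sea temperatures (the numbers are illustrative, not from the paper):

$$\Delta\rho \approx \rho_0\left(-\alpha\,\Delta T + \beta\,\Delta S\right), \qquad \left.\frac{\Delta T}{\Delta S}\right|_{\Delta\rho = 0} = \frac{\beta}{\alpha} \approx \frac{7.6\times10^{-4}\ \mathrm{psu^{-1}}}{1.5\times10^{-4}\ \mathrm{K^{-1}}} \approx 5$$

where α is the thermal expansion coefficient (small at cold temperatures, which is why the ratio is so large) and β is the haline contraction coefficient.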

In the figure below we see (left) the AMOC time series at two locations with the reduction during the cold century, and (right) the anomaly by depth and latitude for the “cold century” vs the climatology for years 200-400:

From Drijfhout et al 2013

From Drijfhout et al 2013

Figure 2 – Click to expand

What caused the lower salinities? It was more sea ice, melting in the right location. The excess sea ice was caused by positive feedback between atmospheric and ocean conditions “locking in” a particular pattern. The paper has a detailed explanation with graphics of the pressure anomalies which is hard to reduce to anything more succinct, apart from their abstract:

Initial cooling started with a period of enhanced atmospheric blocking over the eastern subpolar gyre.

In response, a southward progression of the sea-ice margin occurred, and the sea-level pressure anomaly was locked to the sea-ice margin through thermal forcing. The cold-core high steered more cold air to the area, reinforcing the sea-ice concentration anomaly east of Greenland.

The sea-ice surplus was carried southward by ocean currents around the tip of Greenland. South of 70°N, sea ice already started melting and the associated freshwater anomaly was carried to the Labrador Sea, shutting off deep convection. There, surface waters were exposed longer to atmospheric cooling and sea surface temperature dropped, causing an even larger thermally forced high above the Labrador Sea.


It is fascinating to see a climate model reproducing an example of abrupt climate change. There are a few contexts to suggest for this result.

1. From the context of timescale we could ask how often these events take place, or what pre-conditions are necessary. The only way to gather meaningful statistics is for large ensemble runs of considerable length – perhaps thousands of “perturbed physics” runs each of 100,000 years length. This is far out of reach for processing power at the moment. I picked some arbitrary numbers – until the statistics start to converge and match what we see from paleoclimatology studies we don’t know if we have covered the “terrain”.

Or perhaps only five runs of 1,000 years are needed to completely solve the problem (I’m kidding).

2. From the context of resolution – as we achieve higher resolution in models we may find new phenomena emerging in climate models that did not appear before. For example, in ice age studies, coarser climate models could not achieve “perennial snow cover” at high latitudes (as a pre-condition for ice age inception), but higher resolution climate models have achieved this first step. (See Ghosts of Climates Past – Part Seven – GCM I & Part Eight – GCM II).

As a comparison on resolution, the 2,000 year El Nino study we saw in Part Six of this series had an atmospheric resolution of 2.5° × 2.0° with 24 levels.

However, we might also find that as resolution progressively increases (with the inevitable march of processing power), phenomena that appear at one resolution disappear at yet higher resolutions. This is an opinion, but I expect people with experience in computational fluid dynamics would not find it surprising.

3. Other models might reach similar or higher resolution and never produce this kind of result, demonstrating a flaw in the EC-Earth model that allowed this “Little Ice Age” result to occur. Or the reverse.

As the authors say:

As a result, only coupled climate models that are capable of realistically simulating atmospheric blocking in relation to sea-ice variations feature the enhanced sensitivity to internal fluctuations that may temporarily drive the climate system to a state that is far beyond its standard range of natural variability.

Articles in the Series

Natural Variability and Chaos – One – Introduction

Natural Variability and Chaos – Two – Lorenz 1963

Natural Variability and Chaos – Three – Attribution & Fingerprints

Natural Variability and Chaos – Four – The Thirty Year Myth

Natural Variability and Chaos – Five – Why Should Observations match Models?

Natural Variability and Chaos – Six – El Nino

Natural Variability and Chaos – Seven – Attribution & Fingerprints Or Shadows?

Natural Variability and Chaos – Eight – Abrupt Change


Spontaneous abrupt climate change due to an atmospheric blocking–sea-ice–ocean feedback in an unforced climate model simulation, Sybren Drijfhout, Emily Gleeson, Henk A. Dijkstra & Valerie Livina, PNAS (2013) – free paper

EC-Earth V2.2: description and validation of a new seamless earth system prediction model, W. Hazeleger et al, Climate dynamics (2012) – free paper


Note 1: From the Supporting Information of their paper:

Climate Model and Numerical Simulation. The climate model used in this study is version 2.2 of the EC-Earth earth system model [see references] whose atmospheric component is based on cycle 31r1 of the European Centre for Medium-range Weather Forecasts (ECMWF) Integrated Forecasting System.

The atmospheric component runs at T159 horizontal spectral resolution (roughly 1.125°) and has 62 vertical levels. In the vertical a terrain-following mixed σ/pressure coordinate is used.

The Nucleus for European Modeling of the Ocean (NEMO), version V2, running in a tripolar configuration with a horizontal resolution of nominally 1° and equatorial refinement to 0.3° (2) is used for the ocean component of EC-Earth.

Vertical mixing is achieved by a turbulent kinetic energy scheme. The vertical z coordinate features a partial step implementation, and a bottom boundary scheme mixes dense water down bottom slopes. Tracer advection is accomplished by a positive definite scheme, which does not produce spurious negative values.

The model does not resolve eddies, but eddy-induced tracer advection is parameterized (3). The ocean is divided into 42 vertical levels, spaced by ∼10 m in the upper 200 m, and thereafter increasing with depth. NEMO incorporates the Louvain-la-Neuve sea-ice model LIM2 (4), which uses the same grid as the ocean model. LIM2 treats sea ice as a 2D viscous-plastic continuum that transmits stresses between the ocean and atmosphere. Thermodynamically it consists of a snow and an ice layer.

Heat storage, heat conduction, snow–ice transformation, nonuniform snow and ice distributions, and albedo are accounted for by subgrid-scale parameterizations.

The ocean, ice, land, and atmosphere are coupled through the Ocean, Atmosphere, Sea Ice, Soil 3 coupler (5). No flux adjustments are applied to the model, resulting in a physical consistency between surface fluxes and meridional transports.

The present preindustrial (PI) run was conducted by Met Éireann and comprised 1,125 y. The ocean was initialized from the World Ocean Atlas 2001 climatology (6). The atmosphere used the 40-year ECMWF Re-Analysis of January 1, 1979, as the initial state with permanent PI (1850) greenhouse gas (280 ppm) and aerosol concentrations.
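As a rough cross-check on the quoted atmospheric resolution: for a spectral truncation Tn on a linear Gaussian grid there are about 2(n+1) points around a latitude circle – a standard rule of thumb, not something stated in the paper:

```python
# Rule-of-thumb grid spacing for a linear Gaussian grid at truncation T_n:
n = 159
print(360 / (2 * (n + 1)))   # 1.125 degrees, matching the quoted figure
```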

In Part Three – Attribution & Fingerprints we looked at an early paper in this field, from 1996. I was led there by following the chain of references back from AR5 Chapter 10. The lead author of that paper, Gabriele Hegerl, has made a significant contribution to the 3rd, 4th and 5th IPCC reports on attribution.

We saw in Part Three that this particular paper ascribed a probability:

We find that the latest observed 30-year trend pattern of near-surface temperature change can be distinguished from all estimates of natural climate variability with an estimated risk of less than 2.5% if the optimal fingerprint is applied.

That paper did note that the greatest uncertainty was in understanding the magnitude of natural variability. This is an essential element of attribution.

It wasn’t explicitly stated whether the 97.5% confidence was conditional on the premise that natural variability was accurately understood in 1996. I believe that this was the premise. I don’t know what confidence would have been ascribed to the attribution study if uncertainty over natural variability had been included.


In this article we will look at the IPCC 5th report, AR5, and see how this field has progressed, specifically in regard to the understanding of natural variability. Chapter 10 covers Detection and Attribution of Climate Change.

From p.881 (the page numbers are from the start of the whole report, chapter 10 has just over 60 pages plus references):

Since the AR4, detection and attribution studies have been carried out using new model simulations with more realistic forcings, and new observational data sets with improved representation of uncertainty (Christidis et al., 2010; Jones et al., 2011, 2013; Gillett et al., 2012, 2013; Stott and Jones, 2012; Knutson et al., 2013; Ribes and Terray, 2013).

Let’s have a look at these papers (see note 1 on CMIP3 & CMIP5).

I had trouble understanding AR5 Chapter 10 because there was no explicit discussion of natural variability. The papers referenced (usually) have their own section on natural variability, but chapter 10 doesn’t actually cover it.

I emailed Geert Jan van Oldenborgh to ask for help. He is the author of one paper we will look at briefly here – the paper is very interesting, and he made a video segment explaining it. He suggested the problem was more one of communication, because natural variability is covered in chapter 9 on models, and he pointed me towards a section he had written in chapter 11. So this article became an attempt to grasp the essence of three chapters (9–11) – over 200 pages of report plus several pallet loads of papers.

So I’m not sure I can do the synthesis justice, but what I will endeavor to do in this article is demonstrate the minimal focus (in IPCC AR5) on how well models represent natural variability.

That subject deserves a lot more attention, so this article will be less about what natural variability is, and more about how little focus it gets in AR5. I only arrived here because I was determined to understand “fingerprints” and especially the rationale behind the certainties ascribed.

Subsequent articles will continue the discussion on natural variability.

Knutson et al 2013

The models [CMIP5] are found to provide plausible representations of internal climate variability, although there is room for improvement..

..The modeled internal climate variability from long control runs is used to determine whether observed and simulated trends are consistent or inconsistent. In other words, we assess whether observed and simulated forced trends are more extreme than those that might be expected from random sampling of internal climate variability.


The model control runs exhibit long-term drifts. The magnitudes of these drifts tend to be larger in the CMIP3 control runs than in the CMIP5 control runs, although there are exceptions. We assume that these drifts are due to the models not being in equilibrium with the control run forcing, and we remove the drifts by a linear trend analysis (depicted by the orange straight lines in Fig. 1). In some CMIP3 cases, the drift initially proceeds at one rate, but then the trend becomes smaller for the remainder of the run. We approximate the drift in these cases by two separate linear trend segments, which are identified in the figure by the short vertical orange line segments. These long-term drift trends are removed to produce the drift corrected series.

[Emphasis added].
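A minimal sketch of that single-segment correction, using a hypothetical series in place of real control-run output (the two-segment cases they describe would need a piecewise fit instead):

```python
# Drift correction sketch: fit and remove a linear trend from a
# control-run time series. The series here is a hypothetical stand-in.
import numpy as np

years = np.arange(500.0)
rng = np.random.default_rng(1)
control = 0.002 * years + rng.normal(0.0, 0.1, years.size)  # drifting run

coeffs = np.polyfit(years, control, deg=1)     # linear drift estimate
drift_corrected = control - np.polyval(coeffs, years)
```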

Another paper suggests this assumption might not be correct. Here is Jones, Stott and Christidis (2013) – “piControl” refers to the pre-industrial control simulations used to estimate internal variability:

Often a model simulation with no changes in external forcing (piControl) will have a drift in the climate diagnostics due to various flux imbalances in the model [Gupta et al., 2012]. Some studies attempt to account for possible model climate drifts, for instance Figure 9.5 in Hegerl et al. [2007] did not include transient simulations of the 20th century if the long-term trend of the piControl was greater in magnitude than 0.2 K/century (Appendix 9.C in Hegerl et al. [2007]).

Another technique is to remove the trend, from the transient simulations, deduced from a parallel section of piControl [e.g., Knutson et al., 2006]. However whether one should always remove the piControl trend, and how to do it in practice, is not a trivial issue [Taylor et al., 2012; Gupta et al., 2012]..

..We choose not to remove the trend from the piControl from parallel simulations of the same model in this study due to the impact it would have on long-term variability, i.e., the possibility that part of the trend in the piControl may be long-term internal variability that may or may not happen in a parallel experiment when additional forcing has been applied.

Here are further comments from Knutson et al 2013:

Five of the 24 CMIP3 models, identified by “(-)” in Fig. 1, were not used, or practically not used, beyond Fig. 1 in our analysis. For instance, the IAP_fgoals1.0.g model has a strong discontinuity near year 200 of the control run. We judge this as likely an artifact due to some problem with the model simulation, and we therefore chose to exclude this model from further analysis

From Knutson et al 2013

Figure 1

Perhaps this is correct. Or perhaps the jump in simulated temperature is the climate model capturing natural climate variability.

The authors do comment:

As noted by Wittenberg (2009) and Vecchi and Wittenberg (2010), long-running control runs suggest that internally generated SST variability, at least in the ENSO region, can vary substantially between different 100-yr periods (approximately the length of record used here for observations), which again emphasizes the caution that must be placed on comparisons of modeled vs. observed internal variability based on records of relatively limited duration.

The first paper referenced, Wittenberg 2009, was the paper we looked at in Part Six – El Nino.

So is the “caution” that comes from that study factored into the confidence we place in our models’ ability to simulate natural variability?

In reality, questions about internal variability are not really discussed. Trends are removed; models with discontinuities are treated as artifacts. What is left? This paper essentially takes the modeling output from the CMIP3 and CMIP5 archives (with and without GHG forcing) as a given and applies some tests.

Ribes & Terray 2013

This was a “Part II” paper and they said:

We use the same estimates of internal variability as in Ribes et al. 2013 [the “Part I”].

These are based on intra-ensemble variability from the above CMIP5 experiments as well as pre-industrial simulations from both the CMIP3 and CMIP5 archives, leading to a much larger sample than previously used (see Ribes et al. 2013 for details about ensembles). We then implicitly assume that the multi-model internal variability estimate is reliable.

[Emphasis added]. The Part I paper said:

An estimate of internal climate variability is required in detection and attribution analysis, for both optimal estimation of the scaling factors and uncertainty analysis.

Estimates of internal variability are usually based on climate simulations, which may be control simulations (i.e. in the present case, simulations with no variations in external forcings), or ensembles of simulations with the same prescribed external forcings.

In the latter case, m – 1 independent realisations of pure internal variability may be obtained by subtracting the ensemble mean from each member (assuming again additivity of the responses) and rescaling the result by a factor √(m/(m-1)) , where m denotes the number of members in the ensemble.

Note that estimation of internal variability usually means estimation of the covariance matrix of a spatio-temporal climate-vector, the dimension of this matrix potentially being high. We choose to use a multi-model estimate of internal climate variability, derived from a large ensemble of climate models and simulations. This multi-model estimate is subject to lower sampling variability and better represents the effects of model uncertainty on the estimate of internal variability than individual model estimates. We then simultaneously consider control simulations from the CMIP3 and CMIP5 archives, and ensembles of historical simulations (including simulations with individual sets of forcings) from the CMIP5 archive.

All control simulations longer than 220 years (i.e. twice the length of our study period) and all ensembles (at least 2 members) are used. The overall drift of control simulations is removed by subtracting a linear trend over the full period.. We then implicitly assume that this multi- model internal variability estimate is reliable.

[Emphasis added]. So there are two approaches to evaluating internal variability – one uses GCM runs with no GHG forcing; the other uses the variation between different runs of the same model (with GHG forcing). In both, drift is removed as “an error”.
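The second approach is simple to sketch. Assuming an array of runs from one model that share the same forcing, the quoted recipe subtracts the ensemble mean from each member and rescales:

```python
# Intra-ensemble estimate of internal variability, per the quote above:
# subtract the ensemble mean from each member, rescale by sqrt(m/(m-1)).
# 'members' is a hypothetical (m, n_time) array of same-forcing runs.
import numpy as np

def internal_variability_samples(members):
    m = members.shape[0]
    anomalies = members - members.mean(axis=0)   # remove the forced signal
    return np.sqrt(m / (m - 1)) * anomalies      # variance rescaling

members = np.random.default_rng(2).normal(size=(4, 110))  # stand-in data
samples = internal_variability_samples(members)  # of which m-1 independent
```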

Chapter 10 on Spatial Trends

The IPCC report also reviews the spatial simulations compared with spatial observations, p. 880:

Figure 10.2a shows the pattern of annual mean surface temperature trends observed over the period 1901–2010, based on Hadley Centre/Climatic Research Unit gridded surface temperature data set 4 (HadCRUT4). Warming has been observed at almost all locations with sufficient observations available since 1901.

Rates of warming are generally higher over land areas compared to oceans, as is also apparent over the 1951–2010 period (Figure 10.2c), which simulations indicate is due mainly to differences in local feedbacks and a net anomalous heat transport from oceans to land under GHG forcing, rather than differences in thermal inertia (e.g., Boer, 2011). Figure 10.2e demonstrates that a similar pattern of warming is simulated in the CMIP5 simulations with natural and anthropogenic forcing over the 1901–2010 period. Over most regions, observed trends fall between the 5th and 95th percentiles of simulated trends, and van Oldenborgh et al. (2013) find that over the 1950–2011 period the pattern of observed grid cell trends agrees with CMIP5 simulated trends to within a combination of model spread and internal variability..

van Oldenborgh et al (2013)

Let’s take a look at van Oldenborgh et al (2013).

There’s a nice video of (I assume) the lead author talking about the paper and comparing the probabilistic approach used in weather forecasts with that of climate models (see Ensemble Forecasting). I recommend the video for a good introduction to the topic of ensemble forecasting.

With weather forecasting the probability comes from running ensembles of weather models and seeing, for example, how many simulations predict rain vs how many do not. The proportion is the probability of rain. With weather forecasting we can continually review how well the probabilities given by ensembles match the reality. Over time we will build up a set of statistics of “probability of rain” and compare with the frequency of actual rainfall. It’s pretty easy to see if the models are over-confident or under-confident.

Here is what the authors say about the problem and how they approached it:

The ensemble is considered to be an estimate of the probability density function (PDF) of a climate forecast. This is the method used in weather and seasonal forecasting (Palmer et al 2008). Just like in these fields it is vital to verify that the resulting forecasts are reliable in the definition that the forecast probability should be equal to the observed probability (Joliffe and Stephenson 2011).

If outcomes in the tail of the PDF occur more (less) frequently than forecast the system is overconfident (underconfident): the ensemble spread is not large enough (too large).

In contrast to weather and seasonal forecasts, there is no set of hindcasts to ascertain the reliability of past climate trends per region. We therefore perform the verification study spatially, comparing the forecast and observed trends over the Earth. Climate change is now so strong that the effects can be observed locally in many regions of the world, making a verification study on the trends feasible. Spatial reliability does not imply temporal reliability, but unreliability does imply that at least in some areas the forecasts are unreliable in time as well. In the remainder of this letter we use the word ‘reliability’ to indicate spatial reliability.

[Emphasis added]. The paper first shows the result for one location, the Netherlands, with the spread of model results vs the actual result from 1950-2011:

From van Oldenborgh et al 2013

Figure 2

We can see that the models are overall mostly below the observation. But this is one data point. So if we compare all of the data points – on a 2.5° grid – how do the model spreads compare with the observations? Are observations above 95% of the model results only 5% of the time, or more often? And are observations below 5% of the model results only 5% of the time?

We can see that the frequency of observations in the bottom 5% of model results is about 13%, and the frequency of observations in the top 5% of model results is about 20%. Therefore the models are “overconfident” in their spatial representation of the last 60 years’ trends:

van Oldenborgh-2013-fig3

From van Oldenborgh et al 2013

Figure 3
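The tail count behind figure 3 is straightforward to reproduce in outline. Here is a minimal sketch with hypothetical stand-in arrays – a reliable ensemble should put about 5% of observed trends in each tail:

```python
# Spatial reliability sketch: what fraction of observed grid-cell trends
# fall outside the ensemble's 5-95% range? Arrays are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
ens_trends = rng.normal(0.10, 0.05, size=(40, 5000))  # (members, cells)
obs_trends = rng.normal(0.12, 0.07, size=5000)        # observed trends

lo = np.percentile(ens_trends, 5, axis=0)
hi = np.percentile(ens_trends, 95, axis=0)
print("below 5th pctile: ", (obs_trends < lo).mean())  # >0.05: overconfident
print("above 95th pctile:", (obs_trends > hi).mean())
```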

We investigated the reliability of trends in the CMIP5 multi-model ensemble prepared for the IPCC AR5. In agreement with earlier studies using the older CMIP3 ensemble, the temperature trends are found to be locally reliable. However, this is due to the differing global mean climate response rather than a correct representation of the spatial variability of the climate change signal up to now: when normalized by the global mean temperature the ensemble is overconfident. This agrees with results of Sakaguchi et al (2012) that the spatial variability in the pattern of warming is too small. The precipitation trends are also overconfident. There are large areas where trends in both observational dataset are (almost) outside the CMIP5 ensemble, leading us to conclude that this is unlikely due to faulty observations.

It’s probably important to note that the author comments in the video “on the larger scale the models are not doing so badly”.

It’s an interesting paper. I’m not clear whether the brief note in AR5 reflects the paper’s conclusions.

Jones et al 2013

It was reassuring to finally find a statement that confirmed what seemed obvious from the “omissions”:

A basic assumption of the optimal detection analysis is that the estimate of internal variability used is comparable with the real world’s internal variability.

Surely I can’t be the only one reading Chapter 10 and trying to understand the assumptions built into the “with 95% confidence” result. If Chapter 10 is only aimed at climate scientists who work in the field of attribution and detection it is probably fine not to actually mention this minor detail in the tight constraints of only 60 pages.

But if Chapter 10 is aimed at a wider audience it seems a little remiss not to bring it up in the chapter itself.

I probably missed the stated caveat in chapter 10’s executive summary or introduction.

The authors continue:

As the observations are influenced by external forcing, and we do not have a non-externally forced alternative reality to use to test this assumption, an alternative common method is to compare the power spectral density (PSD) of the observations with the model simulations that include external forcings.

We have already seen that overall the CMIP5 and CMIP3 model variability compares favorably across different periodicities with HadCRUT4-observed variability (Figure 5). Figure S11 (in the supporting information) includes the PSDs for each of the eight models (BCC-CSM1-1, CNRM-CM5, CSIRO- Mk3-6-0, CanESM2, GISS-E2-H, GISS-E2-R, HadGEM2- ES and NorESM1-M) that can be examined in the detection analysis.

Variability for the historical experiment in most of the models compares favorably with HadCRUT4 over the range of periodicities, except for HadGEM2-ES whose very long period variability is lower due to the lower overall trend than observed and for CanESM2 and bcc-cm1-1 whose decadal and higher period variability are larger than observed.

While not a strict test, Figure S11 suggests that the models have an adequate representation of internal variability—at least on the global mean level. In addition, we use the residual test from the regression to test whether there are any gross failings in the models representation of internal variability.
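As an aside, the PSD comparison itself is easy to sketch. Here is a minimal version using Welch’s method on hypothetical AR(1) stand-ins for the observed and modeled annual global means:

```python
# PSD comparison sketch (Welch's method). The two series are
# hypothetical AR(1) stand-ins, not HadCRUT4 or CMIP5 output.
import numpy as np
from scipy.signal import welch

def ar1(n, phi, sigma, seed):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

obs = ar1(160, 0.6, 0.1, seed=0)     # stand-in for observed annual means
model = ar1(160, 0.7, 0.1, seed=1)   # stand-in for one model's run

f, p_obs = welch(obs, fs=1.0, nperseg=64)    # frequencies in cycles/year
_, p_mod = welch(model, fs=1.0, nperseg=64)
# Comparable power across frequencies is the (non-strict) test quoted.
```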

Figure S11 is in the supplementary section of the paper:

From Jones et al 2013, figure S11

Figure 4

From what I can see, this demonstrates that the spectrum of the models’ internal variability (“historicalNat”) is different from the spectrum of the models’ forced response to GHG changes (“historical”).

It feels like my quantum mechanics classes all over again. I’m probably missing something obvious, and hopefully knowledgeable readers can explain.

Chapter 9 of AR5 – Climate Models’ Representation of Internal Variability

Chapter 9, reviewing models, stretches to over 80 pages. The section on internal variability is section 9.5.1:

However, the ability to simulate climate variability, both unforced internal variability and forced variability (e.g., diurnal and seasonal cycles) is also important. This has implications for the signal-to-noise estimates inherent in climate change detection and attribution studies where low-frequency climate variability must be estimated, at least in part, from long control integrations of climate models (Section 10.2).

Section 9.5.3:

In addition to the annual, intra-seasonal and diurnal cycles described above, a number of other modes of variability arise on multi-annual to multi-decadal time scales (see also Box 2.5). Most of these modes have a particular regional manifestation whose amplitude can be larger than that of human-induced climate change. The observational record is usually too short to fully evaluate the representation of variability in models and this motivates the use of reanalysis or proxies, even though these have their own limitations.

Figure 9.33a shows simulated internal variability of mean surface temperature from CMIP5 pre-industrial control simulations. Model spread is largest in the tropics and mid to high latitudes (Jones et al., 2012), where variability is also large; however, compared to CMIP3, the spread is smaller in the tropics owing to improved representation of ENSO variability (Jones et al., 2012). The power spectral density of global mean temperature variance in the historical simulations is shown in Figure 9.33b and is generally consistent with the observational estimates. The longer time scales of the spectra, estimated from last millennium simulations performed with a subset of the CMIP5 models, can be assessed by comparison with different NH temperature proxy records (Figure 9.33c; see Chapter 5 for details). The CMIP5 millennium simulations include natural and anthropogenic forcings (solar, volcanic, GHGs, land use) (Schmidt et al., 2012).

Significant differences between unforced and forced simulations are seen for time scales larger than 50 years, indicating the importance of forced variability at these time scales (Fernandez-Donado et al., 2013). It should be noted that a few models exhibit slow background climate drift which increases the spread in variance estimates at multi-century time scales.

Nevertheless, the lines of evidence above suggest with high confidence that models reproduce global and NH temperature variability on a wide range of time scales.

[Emphasis added]. Here is fig 9.33:

From IPCC AR5 Chapter 9, figure 9.33

Figure 5 – Click to Expand

The bottom graph shows the spectra of the last 1,000 years – the black line is observations (reconstructed from proxies), the dashed lines are simulations without GHG forcings, and the solid lines are simulations with GHG forcings.

In later articles we will review this in more detail.


The IPCC report on attribution is very interesting. Most attribution studies compare observations of the last 100 – 150 years with model simulations using anthropogenic GHG changes and model simulations without (note 3).

The results show a much better match for the case of the anthropogenic forcing.

The primary method is with global mean surface temperature, with more recent studies also comparing the spatial breakdown. We saw one such comparison with van Oldenborgh et al (2013). Jones et al (2013) also reviews spatial matching, finding a better fit (of models & observations) for the last half of the 20th century than the first half. (As with van Oldenborgh’s paper, the % match outside 90% of model results was greater than 10%).

My question as I first read Chapter 10 was how was the high confidence attained and what is a fingerprint?

I was led back, by following the chain of references, to one of the early papers on the topic (1996) that also had similar high confidence. (We saw this in Part Three). It was intriguing that such confidence could be attained with just a few “no forcing” model runs as comparison, all of which needed “flux adjustment”. Current models need much less, or often zero, flux adjustment.

In later papers reviewed in AR5, “no forcing” model simulations that show temperature trends or jumps are often removed or adjusted.

I’m not trying to suggest that “no forcing” GCM simulations of the last 150 years have anything like the temperature changes we have observed. They don’t.

But I was trying to understand what assumptions and premises were involved in attribution. Chapter 10 of AR5 has been valuable in suggesting references to read, but poor at laying out the assumptions and premises of attribution studies.

For clarity, as I stated in Part Three:

..as regular readers know I am fully convinced that the increases in CO2, CH4 and other GHGs over the past 100 years or more can be very well quantified into “radiative forcing” and am 100% in agreement with the IPCCs summary of the work of atmospheric physics over the last 50 years on this topic. That is, the increases in GHGs have led to something like a “radiative forcing” of 2.8 W/m²..

..Therefore, it’s “very likely” that the increases in GHGs over the last 100 years have contributed significantly to the temperature changes that we have seen.

So what’s my point?

Chapter 10 of the IPCC report fails to highlight the important assumptions in the attribution studies. Chapter 9 of the IPCC report has a section on centennial/millennial natural variability with a “high confidence” conclusion that comes with little evidence, and appears to be based on a cursory comparison of the spectra of the last 1,000 years of proxy reconstructions with the CMIP5 modeling studies.

In chapter 10, the executive summary states:

..given that observed warming since 1951 is very large compared to climate model estimates of internal variability (Section […]), which are assessed to be adequate at global scale (Section […]), we conclude that it is virtually certain [99–100%] that internal variability alone cannot account for the observed global warming since 1951.

[Emphasis added]. I agree, and I don’t think anyone who understands radiative forcing and climate basics would disagree. To claim otherwise would be as ridiculous as claiming, for example, that tiny changes in solar insolation from eccentricity variations over ~100 kyr cause the end of ice ages, while the large temperature changes during those ice ages have no effect (see note 2).

The executive summary also says:

It is extremely likely [95–100%] that human activities caused more than half of the observed increase in GMST from 1951 to 2010.

The idea is plausible, but the confidence level depends on a premise supported by just one graph (fig 9.33) of the spectra of the last 1,000 years. The “high confidence” conclusion (“that models reproduce global and NH temperature variability on a wide range of time scales”) is just an opinion.

It’s crystal clear, by inspection of CMIP3 and CMIP5 model results, that models with anthropogenic forcing match the last 150 years of temperature changes much better than models held at constant pre-industrial forcing.

I believe natural variability is a difficult subject which needs a lot more than a cursory graph of the spectrum of the last 1,000 years to even achieve low confidence in our understanding.

Chapters 9 & 10 of AR5 haven’t investigated “natural variability” at all. For interest, some skeptic opinions are given in note 4.

I propose an alternative summary for Chapter 10 of AR5:

It is extremely likely [95–100%] that human activities caused more than half of the observed increase in GMST from 1951 to 2010, but this assessment is subject to considerable uncertainties.

Articles in the Series

Natural Variability and Chaos – One – Introduction

Natural Variability and Chaos – Two – Lorenz 1963

Natural Variability and Chaos – Three – Attribution & Fingerprints

Natural Variability and Chaos – Four – The Thirty Year Myth

Natural Variability and Chaos – Five – Why Should Observations match Models?

Natural Variability and Chaos – Six – El Nino

Natural Variability and Chaos – Seven – Attribution & Fingerprints Or Shadows?

Natural Variability and Chaos – Eight – Abrupt Change


Multi-model assessment of regional surface temperature trends, TR Knutson, F Zeng & AT Wittenberg, Journal of Climate (2013) – free paper

Attribution of observed historical near surface temperature variations to anthropogenic and natural causes using CMIP5 simulations, Gareth S Jones, Peter A Stott & Nikolaos Christidis, Journal of Geophysical Research Atmospheres (2013) – paywall paper

Application of regularised optimal fingerprinting to attribution. Part II: application to global near-surface temperature, Aurélien Ribes & Laurent Terray, Climate Dynamics (2013) – free paper

Application of regularised optimal fingerprinting to attribution. Part I: method, properties and idealised analysis, Aurélien Ribes, Serge Planton & Laurent Terray, Climate Dynamics (2013) – free paper

Reliability of regional climate model trends, GJ van Oldenborgh, FJ Doblas Reyes, SS Drijfhout & E Hawkins, Environmental Research Letters (2013) – free paper


Note 1: CMIP = Coupled Model Intercomparison Project. CMIP3 was for AR4 and CMIP5 was for AR5.

Read about CMIP5:

At a September 2008 meeting involving 20 climate modeling groups from around the world, the WCRP’s Working Group on Coupled Modelling (WGCM), with input from the IGBP AIMES project, agreed to promote a new set of coordinated climate model experiments. These experiments comprise the fifth phase of the Coupled Model Intercomparison Project (CMIP5). CMIP5 will notably provide a multi-model context for

1) assessing the mechanisms responsible for model differences in poorly understood feedbacks associated with the carbon cycle and with clouds

2) examining climate “predictability” and exploring the ability of models to predict climate on decadal time scales, and, more generally

3) determining why similarly forced models produce a range of responses…

From the website link above you can read more. CMIP5 is a substantial undertaking, with massive output of data from the latest climate models. Anyone can access this data, similar to CMIP3. Here is the Getting Started page.

And CMIP3:

In response to a proposed activity of the World Climate Research Programme (WCRP) Working Group on Coupled Modelling (WGCM), PCMDI volunteered to collect model output contributed by leading modeling centers around the world. Climate model output from simulations of the past, present and future climate was collected by PCMDI mostly during the years 2005 and 2006, and this archived data constitutes phase 3 of the Coupled Model Intercomparison Project (CMIP3). In part, the WGCM organized this activity to enable those outside the major modeling centers to perform research of relevance to climate scientists preparing the Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC). The IPCC was established by the World Meteorological Organization and the United Nations Environmental Program to assess scientific information on climate change. The IPCC publishes reports that summarize the state of the science.

This unprecedented collection of recent model output is officially known as the “WCRP CMIP3 multi-model dataset.” It is meant to serve IPCC’s Working Group 1, which focuses on the physical climate system — atmosphere, land surface, ocean and sea ice — and the choice of variables archived at the PCMDI reflects this focus. A more comprehensive set of output for a given model may be available from the modeling center that produced it.

With the consent of participating climate modelling groups, the WGCM has declared the CMIP3 multi-model dataset open and free for non-commercial purposes. After registering and agreeing to the “terms of use,” anyone can now obtain model output via the ESG data portal, ftp, or the OPeNDAP server.

As of July 2009, over 36 terabytes of data were in the archive and over 536 terabytes of data had been downloaded among the more than 2,500 registered users.

Note 2: This idea is explained in Ghosts of Climates Past – Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes; see especially the section under the heading Why Theory B is Unsupportable.

Note 3: Some studies use just fixed pre-industrial values, and others compare “natural forcings” with “no forcings”.

“Natural forcings” = radiative changes due to solar insolation variations (which are not known with much confidence) and aerosols from volcanos. “No forcings” is simply fixed pre-industrial values.

Note 4: Chapter 11 (of AR5), p.982:

For the remaining projections in this chapter the spread among the CMIP5 models is used as a simple, but crude, measure of uncertainty. The extent of agreement between the CMIP5 projections provides rough guidance about the likelihood of a particular outcome. But—as partly illustrated by the discussion above—it must be kept firmly in mind that the real world could fall outside of the range spanned by these particular models. See Section 11.3.6 for further discussion.

And p. 1004:

It is possible that the real world might follow a path outside (above or below) the range projected by the CMIP5 models. Such an eventuality could arise if there are processes operating in the real world that are missing from, or inadequately represented in, the models. Two main possibilities must be considered: (1) Future radiative and other forcings may diverge from the RCP4.5 scenario and, more generally, could fall outside the range of all the RCP scenarios; (2) The response of the real climate system to radiative and other forcing may differ from that projected by the CMIP5 models. A third possibility is that internal fluctuations in the real climate system are inadequately simulated in the models. The fidelity of the CMIP5 models in simulating internal climate variability is discussed in Chapter 9..

..The response of the climate system to radiative and other forcing is influenced by a very wide range of processes, not all of which are adequately simulated in the CMIP5 models (Chapter 9). Of particular concern for projections are mechanisms that could lead to major ‘surprises’ such as an abrupt or rapid change that affects global-to-continental scale climate.

Several such mechanisms are discussed in this assessment report; these include: rapid changes in the Arctic (Section 11.3.4 and Chapter 12), rapid changes in the ocean’s overturning circulation (Chapter 12), rapid change of ice sheets (Chapter 13) and rapid changes in regional monsoon systems and hydrological climate (Chapter 14). Additional mechanisms may also exist as synthesized in Chapter 12. These mechanisms have the potential to influence climate in the near term as well as in the long term, albeit the likelihood of substantial impacts increases with global warming and is generally lower for the near term.

And p. 1009 (note that we looked at Rowlands et al 2012 in Part Five – Why Should Observations match Models?):

The CMIP3 and CMIP5 projections are ensembles of opportunity, and it is explicitly recognized that there are sources of uncertainty not simulated by the models. Evidence of this can be seen by comparing the Rowlands et al. (2012) projections for the A1B scenario, which were obtained using a very large ensemble in which the physics parameterizations were perturbed in a single climate model, with the corresponding raw multi-model CMIP3 projections. The former exhibit a substantially larger likely range than the latter. A pragmatic approach to addressing this issue, which was used in the AR4 and is also used in Chapter 12, is to consider the 5 to 95% CMIP3/5 range as a ‘likely’ rather than ‘very likely’ range.

Replacing ‘very likely’ (90–100%) with ‘likely’ (66–100%) is a good start. How does this recast chapter 10?

And Chapter 1 of AR5, p. 138:

Model spread is often used as a measure of climate response uncertainty, but such a measure is crude as it takes no account of factors such as model quality (Chapter 9) or model independence (e.g., Masson and Knutti, 2011; Pennell and Reichler, 2011), and not all variables of interest are adequately simulated by global climate models..

..Climate varies naturally on nearly all time and space scales, and quantifying precisely the nature of this variability is challenging, and is characterized by considerable uncertainty.

[Emphasis added in all bold sections above]

