
## Models, On – and Off – the Catwalk – Part One

General Circulation Models or Global Climate Models – aka GCMs – often have a bad reputation outside of the climate science community. Some of it isn’t deserved. We could say that models are misunderstood.

Before we look at models on the catwalk, let's consider a few basics.

### Introduction

In an earlier series, CO2 – An Insignificant Trace Gas, we delved into simpler numerical models. These were 1D models, needed to solve the radiative transfer equations through a vertical column of the atmosphere. There was no other way to solve the equations – and that's the case with most practical engineering and physics problems.

Here’s a model from another world:

Stress analysis in an impeller

Here’s a visualization of “finite element analysis” of stresses in an impeller. See the “wire frame” look, as if the impeller has been created from lots of tiny pieces?

In this totally different application, the problem of calculating the mechanical stresses in the unit is that the "boundary conditions" – the strange shape – make solving the equations by the usual methods of rearrangement and substitution impossible. Instead, the strange shape is divided into lots of little cubes. The equations for the stresses in each little cube are easy to write down. So you end up with thousands of "simultaneous" equations. Each cube sits next to another cube, so the stress on each common boundary must be the same. The computer program uses some clever maths and lots of iterations to eventually find the solution to the thousands of equations that satisfies the "boundary conditions".
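The iterative part can be sketched in a few lines. This is a toy illustration, not real finite element software: a tiny diagonally dominant system stands in for the thousands of coupled equations, solved here by Jacobi iteration (one of the simpler flavours of the "clever maths"):

```python
import numpy as np

# A small diagonally dominant system standing in for the thousands of
# coupled equations a finite element solver assembles: each row couples
# one "element" to its neighbours.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([15.0, 10.0, 10.0])

x = np.zeros_like(b)          # initial guess
for _ in range(100):          # "lots of iterations"
    # Jacobi update: solve each equation for its own unknown,
    # using the neighbours' values from the previous iteration.
    x_new = (b - (A @ x - np.diag(A) * x)) / np.diag(A)
    if np.max(np.abs(x_new - x)) < 1e-10:
        x = x_new
        break
    x = x_new

print(x)   # should agree with np.linalg.solve(A, b)
```

Real FE packages use far more sophisticated solvers, but the principle – iterate until the coupled equations are all satisfied at once – is the same.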

Finite element analysis is used successfully in lots of areas of practical problem solving – many orders of magnitude simpler, of course, than GCMs.

### Uses of Models

One use of models is to predict – no, project – future climate scenarios. That's the one most people are familiar with. And to supply the explanation for recent temperature increases.

But models have more practical uses. They are the only way to provide quantitative analysis of certain situations we want to consider. And they are the only way to test our understanding of the causes of past climate change.

### Analysis

On this blog one commenter asked about how much equivalent radiative forcing would be present if all the Arctic sea ice was gone. That is, with no sea ice, there is less reflection of solar radiation. So more absorption of energy – how do we calculate the amount?

You can start with a very basic idea and just look at the total area of Arctic sea ice as a proportion of the globe. Take the local change in albedo from around 0.5–0.8 down to 0.03–0.09, multiply by the current percentage of area in sea ice, and you have a number for the change in the total albedo of the earth. You can turn that into a change in radiation.
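As a sketch of that first crude estimate (all the input numbers below are illustrative assumptions, not measurements):

```python
# Back-of-envelope version of the calculation described above.
S = 1361.0            # solar constant, W/m^2
ice_area = 10.0e6     # assumed Arctic sea ice area, km^2
earth_area = 510.0e6  # Earth's surface area, km^2

d_albedo_local = 0.65 - 0.06          # midpoints of the albedo ranges in the text
d_albedo_global = (ice_area / earth_area) * d_albedo_local

# Globally averaged insolation is S/4 (sphere vs disc), so:
d_forcing = (S / 4.0) * d_albedo_global
print(f"crude forcing estimate: {d_forcing:.1f} W/m^2")
```

A few watts per square metre – which, as the next paragraph explains, almost certainly overstates the effect, because the sun is low in the Arctic sky.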

But then you think a little bit deeper and want to take into account the fact that solar radiation is at a much lower angle in the Arctic so the first number you got probably overstated the effect. So now, even without any kind of GCM, you can simply use the equation for the reduction in solar insolation due to the effective angle between the sun and the earth:

I = S cos θ – but because this angle, θ, changes with time of day and time of year for any given latitude, you have to plug a straightforward equation into a maths program and do a numerical integration. Or write something in Visual Basic, or whatever your programming language of choice is. Even Excel might be able to handle it.

This approach also gives the opportunity to introduce the dependence of the ocean’s albedo on the angle of sunlight (the albedo of ocean with the sun directly overhead is 0.03 and with the sun almost on the horizon is 0.09).
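Here is a minimal sketch of that numerical integration. The declination formula is a standard textbook approximation, and the linear fit of ocean albedo to cos θ (0.03 overhead, 0.09 on the horizon) is my assumption, not something from the post:

```python
import numpy as np

# Numerical integration of I = S * cos(theta) at a given latitude,
# averaged over time of day and time of year.
S = 1361.0                      # solar constant, W/m^2
lat = np.radians(80.0)          # an Arctic latitude

days = np.arange(365)
hours = np.linspace(0.0, 24.0, 97)[:-1]      # 15-minute steps
# Standard approximation for solar declination through the year:
decl = np.radians(-23.44) * np.cos(2 * np.pi * (days + 10) / 365.0)

d, h = np.meshgrid(decl, hours, indexing="ij")
hour_angle = np.radians(15.0 * (h - 12.0))
cos_theta = np.sin(lat) * np.sin(d) + np.cos(lat) * np.cos(d) * np.cos(hour_angle)
cos_theta = np.clip(cos_theta, 0.0, None)    # sun below horizon -> no insolation

insolation = S * cos_theta
albedo = 0.09 - 0.06 * cos_theta             # assumed linear angular dependence
absorbed = (1.0 - albedo) * insolation

print(f"mean insolation at 80N: {insolation.mean():.0f} W/m^2")
print(f"mean absorbed by open ocean: {absorbed.mean():.0f} W/m^2")
```

The annual-mean number at 80°N comes out well under the global average of S/4, which is exactly the point: the crude area-fraction estimate overstates the forcing.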

This will give you a better result. But now you start thinking about the fact that the sun's rays travel a longer path through the atmosphere because of the low angle in the sky.. how to incorporate that? Is it insignificant or highly significant? Perhaps including or excluding this effect would change the "radiative forcing" by a factor of two? (I have no idea.)

So if you wanted to quantify the positive feedback effect of melting ice, your "model" starts requiring a lot more specifics: atmospheric absorption by O2 and O3 depending on the angle of the sun, and the spatial profile of O3 in the stratosphere (i.e., is there less at the poles, or more?).
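One simple way to get a feel for the path-length question is Beer-Lambert attenuation with an air-mass factor of roughly 1/cos θ. The overhead transmittance below is an illustrative assumption (a real calculation needs wavelength-dependent absorption), but it shows how sharply the low sun loses energy:

```python
import numpy as np

# Crude path-length effect: direct-beam attenuation goes as T0 ** air_mass,
# with air mass ~ 1/cos(theta). T0 = 0.8 is an assumed overhead transmittance.
T0 = 0.8
S = 1361.0               # solar constant, W/m^2

def surface_insolation(cos_theta):
    """Direct-beam insolation at the surface for a given cos(zenith angle)."""
    cos_theta = np.asarray(cos_theta, dtype=float)
    air_mass = np.where(cos_theta > 0, 1.0 / np.clip(cos_theta, 1e-6, None), np.inf)
    return np.where(cos_theta > 0, S * cos_theta * T0 ** air_mass, 0.0)

# Overhead sun vs sun 10 degrees above the horizon:
print(surface_insolation(1.0))                       # ~1089 W/m^2
print(surface_insolation(np.cos(np.radians(80.0))))  # far smaller
```

So with these assumed numbers the effect is anything but insignificant at Arctic sun angles – which is why the full calculation really does need a proper radiative model.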

It’s only by doing these calculations that the effect of sea ice albedo can be reliably quantified. So your GCM is suddenly very useful – essential in fact.

Without it, you would simply be doing the same calculations laboriously, slowly and less accurately on pieces of paper. A bit like how an accounts department used to work before modern PCs and spreadsheets. Now one person in finance can do the job of 10 or 20 people from a few decades ago. Without being an accountant, someone can change an exchange rate, or an input cost, on a well-created spreadsheet and find out the change in cash flow, P&L and so on. Armies of people would have been needed before to work out the answers.

And of course, the beauty of the GCM is that you can play around with other factors and find out what effect they have. The albedo of the ocean also changes with waves. So you can try some limits between albedo with no waves and all waves and see the change. If it’s significant then you need a parameter that tells you how calm or stormy the ocean is throughout the year. And if you don’t have that data, you have some idea of the “error”.

Everyone wants their own GCM now..

Of course, in that thought experiment about sea ice albedo we haven’t calculated a “final” answer. Other effects will come into play (clouds).. But as you can see with this little example, different phenomena can be progressively investigated and reasonably quantified.

### Past Climate

Do we understand the causes of past climate change or not? Do the Milankovitch cycles actually explain the end of the last ice age, or the start of it?

This is another area where models are invaluable. Without a GCM, you are just guessing. Perhaps with a GCM you are guessing as well, but just don’t know it.. A topic for another day.

### Common Misconception

The idea floats around that models have "positive feedback" plugged into them. Positive feedback, for the few who don't know the term: increases in temperature from CO2 induce further changes (like melting Arctic sea ice) that increase temperature still more.

Unless it's done very secretly, this isn't the case. The positive feedbacks emerge in the model's output.

The models have a mixed bag of:

• fundamental equations – like conservation of energy, conservation of momentum
• parameterizations – for equations that are only empirically known, or can’t be easily solved in the “grid” that makes up the 3d “mesh” of the GCM

More on these important points in the next post.

### “Necessary but Not Sufficient”

A last comment before we see them on the catwalk – the catwalk "retrospective" – models matching the past is a necessary but not sufficient condition for them to match the future. However, it is – or it would be, depending on what we find – a great starting point.

### Models On the Catwalk

20th century temperature hindcast vs actual - ensemble

Most people have seen this graph. It comes from the IPCC AR4 (2007).

The IPCC comment:

> Models can also simulate many observed aspects of climate change over the instrumental record. One example is that the global temperature trend over the past century (shown in Figure 1) can be modeled with high skill when both human and natural factors that influence climate are included.

And a little later:

> In summary, confidence in models comes from their physical basis, and their skill in representing observed climate and past climate changes. Models have proven to be extremely important tools for simulating and understanding climate, and there is considerable confidence that they are able to provide credible quantitative estimates of future climate change, particularly at larger scales. Models continue to have significant limitations, such as in their representation of clouds, which lead to uncertainties in the magnitude and timing, as well as regional details, of predicted climate change. Nevertheless, over several decades of model development, they have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases.

Now of course, this is a hindcast. Looking backwards. One way to think about a hindcast is that it's easy to tweak the results to match the past. That's partly true and, of course, that's how the model gets improved – until it can match the past.

The other way to think about the hindcast is that it’s a good way to test the model and find out how accurate it is.

The model gets to "past predict" many different scenarios. So if someone could tweak a model so that it accurately reproduced temperature patterns, rainfall patterns, ocean currents, etc. – if it can be tweaked so that everything in the past is accurate – how can that be a bad thing? Also, the model "tweaker" can change a parameter, but that doesn't give the flexibility many would think. Let's suppose you want to run the model to calculate average temperatures from 1980–1999 (see below): you put your start conditions into the model – values for 1980 for temperature and all the other "process variables" – and crank up the model.

It's not like fixing up a painting with a spot of paint in the right place – it's more like tuning an engine and hoping you win the Dakar Rally. After you blow the engine halfway through, you get to do a rebuild and guess what to change next. Well, analogies – just illustrations..

Obviously, these results would need to be achieved by equations and parameterizations that match the real world. If "tweaking" requires non-physical laws, that would raise questions. Well, more on this also in later posts.

More model shots.. The top graphic is the one of interest. This is actual temperature (average 1980–1999) shown as contours, with the shading denoting the model error (actual minus model values). Light blue and light orange (or is it white?) are good..

Actual 1980-1999 temperature with shading denoting model error (top graphic)

The model error is not so bad. Not perfect though. (Note that, for some reason not explained, the land temperature average is over a different time period from the sea surface temperatures.)

Temperature range:

1980-1999 Temperature range in each location and Model error in temperature range

The standard deviation in temperature gives a measure of the range of temperatures experienced. The colors on the globe indicate the difference between the observed and simulated standard deviation of temperatures.

Simplifying, the light blue and light orange areas are where the models are best at working out the monthly temperature range. The darker colors are where the models are worse. Looks pretty good.

Rainfall:

Actual Rainfall vs Model Rainfall, 1980-99

This one is awesome. Remember that rainfall is calculated by physical processes. Temperature, available water sources, clouds, temperature changes, winds, convection..

Ocean temperature:

Ocean potential temperature and model error 1957-1990

Ocean potential temperature, what’s that? Think of it as the real temperature with unstable up and down movements factored out, or read about potential temperature.. Note that the contours are the measurements (averaged over 34 years) and the shaded colors are the deviations of actual – model. So once again the light blue and light orange are very close to reality, the darker colors are further away from reality.

This one you would expect to be easier to get right than rainfall, but still, looking good.

### Conclusion

It's just the start of the journey into models. There will be more; next we will look at Models Off the Catwalk. So if you have comments, it's perhaps not necessary to write your complete thoughts on past climate, chaos.. Interesting, constructive and thoughtful comments are welcome and encouraged, of course. As are questions.

Hopefully, we can avoid the usual bunfight over whether the last ten years actually match the models' predictions. Other places are so much better for those "discussions"..

Update – Part Two now published.

### 22 Responses

1. Great post, very interesting. Have you read “Storms of My Grandchildren,”? Hansen talks about models a bit there — he seems to feel that models are less than reliable over the long term, and a better bet is to look at periods in history with comparable forcings and look at what the conditions were then. Your thoughts?

And another question that has defeated my Google-fu: do you know offhand what kind of assumptions the current models make about future changes in methane levels?

I would think it would be difficult to estimate, given that we don’t know why methane levels leveled off there for a while, nor why it has resumed its climb over the past three years.

I am wondering what the consequences would be if the current trend of 0.5% growth in methane continues (knowing full well that by even speculating about a trend from three years of growth I risk the wrath of Gaussia, Muse of Statistical Significance. But I do wonder.)

2. Robert:

I haven’t read the book. You ask the big questions right at the start.. models, reliability, history.. well, we will work through these things I hope.

No idea on methane either, but why not take a look at IPCC AR4 p793 Chapter 10 Climate Projections:
“10.4.3 Simulations of Future Evolution of Methane, Ozone and Oxidants”
I’m sure there will be a good explanation of scenarios and results..

3. The SRES scenarios assume +6 ppb per year. Others assume little to no increase. Estimates of the change in the CH4–ozone forcing range from −0.05 to +0.18 to +0.30 W/m^2 for the MFR, CLE, and A2 scenarios. They don't give a number for the SRES scenario, although they talk about the increase under an SRES scenario.

I may be just a tad out of my depth here.

4. Robert

Hansen tends to be rather selective in choosing the periods to look at, although the basic approach of looking at what has gone before and basing future suppositions on that is a good one.

The major storms over the last two thousand years have been very well documented by Hubert Lamb, the first director of CRU. This shows storms/floods/hurricanes etc. literally shaping landscapes and destroying whole villages in the past, and there is nothing comparable over the last 100 years or so.

Whilst his book is – as its title implies – confined to a part of the Northern Hemisphere, it is a very well documented part:

“Historic storms of the North Sea, British Isles and Northwest Europe” by Hubert Lamb ISBN 0-521-61931-9 published by Cambridge University Press.

When you look at history it is very difficult to be alarmist about today's events!

Science of Doom -just wanted to say how much I enjoy reading your articles and the even handed manner in which things here are conducted.

Tonyb

5. I know climate scientists, well the IPCC anyway, claim there is no known climate driver that can explain the recent warming other than increased CO2. But what if there is an unknown driver?

They had models for ocean waves for many years indicating that the rogue waves reported by ships' crews had to be exaggerations. It wasn't until the 90s that satellites confirmed how common rogue ocean waves are. They are developing new wave models now based on observations.

An article on that subject can be seen with the below link.

http://www.livescience.com/environment/050503_monster_waves.html

6. on February 28, 2010 at 1:54 pm | Reply Leonard Weinstein

I think there may in fact be an input to the models that forces the positive feedback to appear. This is the assumption of constant relative humidity. In fact, the best data available shows a reduction in relative humidity at the higher altitudes associated with the feedback. The exact trend is complex, and there may have been a small positive feedback in the 1990s, but there appears to be a negative feedback in the 2000 to 2010 time frame (Solomon et al.).

7. Leonard Weinstein

By enormous coincidence I am just finishing an article on Historic CO2 readings which commences with referencing the paper written by you;

‘Limitations on Anthropogenic Global Warming’ carried on The Air Vent

My follow-up concentrates primarily on CO2 measurements through history: the people who took them, the circumstances, and the instruments and methods used.

This all appears to indicate Ernst Beck probably has a very good case and levels were similar to today during the 19th Century.

If you have the time to have a quick read through it before release please send me a message by clicking on my name. Thanks

Tonyb

8. Robert,

The emissions scenarios are probably one of the places with the greatest amount of disagreement. Scenario A1 predicts 52 Gt/yr of CO2 emissions by 2030 and Scenario B2 predicts 37 Gt/yr of CO2 by 2030.

Current EIA projections are 40 Gt/yr by 2030.

Methane gets even more complicated.

9. I've got a question.. so what is required for the initial input data? Do you need to know the "weather" at all the grid points at that time, or will they work off basic atmospheric approximations (averages for grids)? And are the oceans modeled as well, or just surface conditions?

10. Leonard Weinstein:

I think there may in fact be an input to the models that forces the positive feedback to appear. This is the assumption of constant relative humidity.

This is a key point. If relative humidity is assumed but not true..

Hopefully we will get into this critical point in due course..

11. Mike Ewing

Each grid section is pretty big, more than 1° × 1° (lat × long). And yes, there is a big database of past conditions that is used to find the right average starting value for each grid section for each parameter.

There are lots of different models. The bigger ones are AOGCMs – atmosphere-ocean GCMs. In these the oceans are modeled as well. Usually they can be decoupled if required, so really there are 2 models joined together.

In the next post in this series there will be some more about a top model named CCSM3.

12. I think there may in fact be an input to the models that forces the positive feedback to appear. This is the assumption of constant relative humidity. In fact, the best data available shows a reduction in relative humidity at the higher altitudes associated with the feedback.

Solomon et al. is about stratospheric water vapor, not water vapor in the troposphere, which is where (somewhat) constant relative humidity is presumed and backed up by observations (recently via detailed observations at various altitudes in the troposphere using the AIRS sensor on the Aqua satellite, which match model predictions very closely).

Water vapor content in the stratosphere is very low and doesn’t get there through the normal evaporation/convection/mixing process seen in the troposphere. The relative humidity argument for the troposphere isn’t challenged by Solomon.

Solomon’s paper addresses natural variability, not the underlying trend. She doesn’t argue that this contradicts anything important in regard to AGW. Note that if there’s a higher than previously known negative feedback as stratospheric water vapor increases, then when it decreases, the decrease in that negative feedback will also be higher than previously known. Bigger swings around the trend, not a different trend, if I understand correctly.

Also note that Solomon’s paper is just one paper, and like most scientific work that presents novel results, should be taken with a grain of salt until more work is done in the area.

13. Thanks harrywr2, sometimes I just need to be told when something is above my pay grade!

14. [...] 23, 2010 by scienceofdoom In Part One, we introduced some climate model basics, including uses of climate models (not all of which are [...]

15. I’ve been reading some of the articles on this site, and I think they are quite good.

I do feel compelled to comment on your characterization of hindcasts. It is of course good, and perhaps necessary, for a model to work accurately against a known record. But the true test of a model is its ability to continue to work with data that was not available to the modeler. Unfortunately for climate modelers, most of the detailed data comes from the relatively short current period.

Continuing to refine models against a very well known data set is interesting, but does little to improve confidence against unknown data sets. (One example of which is the future.)

Having said this the scientific consensus on the gross temperature impact of doubling CO2 has changed very little over the last 30 years, and there is no reason to believe it is incorrect.

16. For some time it has puzzled me that GCMs do a decent job of hindcasting the 20th century, for example as we just looked at in the IPCC figure, while at the same time they have sensitivities that vary by over a factor of three.

How does that circle get squared?

17. SOD: In "Common Misconceptions", you state that "The idea floats around that models have "positive feedback" plugged into them." Feedback and climate sensitivity are obviously not parameters that are directly entered into a model. However, I have read (but can't find my original source) that some skeptics assert that climate sensitivity is an innate property that is built into a model – not a number that is calculated from experiments with GCMs. While attempting to find my earlier source, I ran across a relevant 2005 Nature paper by Stainforth (http://ww.lse.ac.uk/collections/cats/papersPDFs/66_EvaluatingUncertainty_Nature_2005.pdf) where about 1000 versions of the HADCM3 model were run with different sets of physically relevant parameters. They found "model versions as realistic as other state-of-the-art climate models, but with climate sensitivities ranging from less than 2 K to more than 11 K." In other words, the climate sensitivity of the HADCM3 model (3.4 degK/2xCO2) turns out to be determined by an arbitrary choice of unconstrained parameters. Since the choice of parameters appears to be arbitrary, there may be some truth to the claim that climate sensitivity is an input into models (via cloud parameterization).

These results shouldn’t be surprising, because the difference between a climate sensitivity of 2 and infinity is a modest change in total feedback – 1.7 vs 3.3 W/m^2/degK if my calcs are correct.
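A quick way to check this sort of arithmetic is the usual linear feedback relation, sensitivity S = F_2x / (λ0 − f), where λ0 is the no-feedback (Planck) response and f the total feedback. The constants below are assumed round numbers (F_2x ≈ 3.7 W/m², λ0 ≈ 3.3 W/m²/K), so the results come out close to, but not exactly, the commenter's 1.7 vs 3.3:

```python
# Linear feedback relation: S = F_2x / (lambda_0 - f)  =>  f = lambda_0 - F_2x / S.
# Constants are assumed round numbers, not the commenter's exact inputs.
F_2X = 3.7       # forcing from doubled CO2, W/m^2
LAMBDA_0 = 3.3   # no-feedback (Planck) response, W/m^2 per K

def total_feedback(sensitivity_k):
    """Feedback f (W/m^2/K) implied by a given 2xCO2 sensitivity in K."""
    return LAMBDA_0 - F_2X / sensitivity_k

for s in (2.0, 4.5, 11.0):
    print(f"S = {s:5.1f} K  ->  f = {total_feedback(s):.2f} W/m^2/K")
# As S grows without bound, f approaches LAMBDA_0: the whole range from
# low sensitivity to a runaway spans only a modest change in f.
```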

Question: Does this mean that discerning climate scientists believe that climate sensitivities derived from the IPCC’s models are essentially meaningless? If one really believed that high-sensitivity, low-sensitivity and standard versions of HADCM3 really were equally good at representing today’s climate, this would seem to be an appropriate conclusion. (One could choose among these models using past climate, but climate sensitivity has already been limited to 1.5-4.5 using this approach without GCM’s.)

Lindzen claims that most models use too large a value for thermal diffusivity in the oceans. It would be interesting to know if thermal diffusivity is one of the parameters Stainforth et al varied through the full possible range.

Later Stainforth publications suggest that he is focusing on cloud parameters, and thermal diffusivity probably hasn't been varied – if it is an input parameter that is normally modified. Attempts to select the best models from ensembles weren't particularly successful.

18. Frank:

Question: Does this mean that discerning climate scientists believe that climate sensitivities derived from the IPCC’s models are essentially meaningless?

Hard question to answer. You can see what climate scientists think in papers they write and, for a few, in their textbooks.

From the survey by Dennis Bray and Hans von Storch there is quite a spectrum of opinions.

19. Thank you for the reply. I read the survey – which was interesting, but out-of-date and not directly relevant to my difficult question. If you don't think it is unreasonable to discount the predictions models make about feedbacks and climate sensitivity because those predictions depend greatly on unconstrained parameters, you might consider revising the section on "Common Misconceptions".

The follow up papers by Stainforth mentioned above show that all of his versions of HADCM3 are not equally good as reproducing current climate (especially outgoing radiation), but the ensemble doesn’t narrow the range of any parameter based on deteriorating performance.

Question: Has any useful work been done on the reliability of the parameter models use for the rate at which heat penetrates the oceans (thermal diffusivity)? The issue is relevant to: interpreting the changes seen after Mt. Pinatubo, the idea that 0.5 degK of further warming is “in the pipeline” (even if we stabilized GHG levels today), and probably climate sensitivity itself.

20. [...] For a bit of background generally on models, take a look at the Introduction in Models On – and Off – the Catwalk. [...]

21. Dear SoD

Re your intro to GCMs: in fact many if not most GCMs rely on a method that is almost exactly Finite Elements (FE), called Spectral Methods (SMs). In techno-babble, the primary difference is that FE uses local basis functions, while SMs use global basis functions. The SMs are used to solve the models "in the plane" (i.e., horizontally).

If the GCM includes a third (upward) dimension, then that is usually managed with Finite Difference (FD).

The reason for splitting the solution into SM/FD format is a little technical, but basically there is a lucky coincidence for GCM’s that allows SM’s to be much more computationally efficient (c.f. doing the whole thing FD, or FE).

BTW, I only saw this page today, and that you had made use of AR4 Fig 8.1 as an illustration of model verification. In fact, as you know by now, I had commented on the “cheating” that is used in exactly that image, previously on another part of your blog.

I can repeat some of those illustrations of the pathologies in those models here (not sure if repetition is desired), or if you like, I have posted a “no volcano cheating” version of IPCC AR4 Fig 8.1 to here (http://www.thebajors.com/climategames.htm) you are welcome to copy it if you wish.

BTW, it is a relatively easy matter to show people, say, the heat equation, how the FD version is created, and even how that can be whacked into a spreadsheet (yes, your very own FD PDE solver in a spreadsheet, no coding required).
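[A minimal sketch of that finite-difference idea, in Python rather than a spreadsheet – the constants are arbitrary illustrations, and each pass of the loop plays the role of one spreadsheet row:]

```python
import numpy as np

# Explicit finite-difference solution of the 1D heat equation
# dT/dt = alpha * d2T/dx2 -- the same update rule the comment suggests
# putting in a spreadsheet, one row per time step.
alpha = 1.0e-4            # diffusivity (arbitrary units)
nx, dx, dt = 21, 0.05, 1.0
r = alpha * dt / dx**2    # must be <= 0.5 for this explicit scheme to be stable
assert r <= 0.5

T = np.zeros(nx)
T[0], T[-1] = 100.0, 0.0  # fixed-temperature boundary conditions

for _ in range(20000):    # march forward in time
    T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])

# Steady state with fixed ends is a straight line between the boundaries.
print(T[nx // 2])         # ~50.0, halfway between the two boundary values
```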

Cheers

PS. Not trying to be picayune, but your statement about the "solutions satisfying the boundary conditions" should read more like "solutions that satisfy the model equations plus their BCs and ICs". The entirety of (PDE or ODE) models is the equations together with their ICs/BCs.

22. […] In this article we will start to consider what GCMs can do in falsifying these theories. For some basics on GCMs, take a look at Models On – and Off – the Catwalk. […]