
The atmosphere cools to space by radiation. Well, without getting into all the details, the surface also cools to space by radiation, but not much of the radiation emitted by the surface escapes directly to space (note 1). Most surface radiation is absorbed by the atmosphere. And of course the surface mostly cools by convection into the troposphere (lower atmosphere).

If there were no radiatively-active gases (aka “GHG”s) in the atmosphere then the atmosphere couldn’t cool to space at all.

Technically, the emissivity of the atmosphere would be zero. Emission is determined by the local temperature of the atmosphere and its emissivity. Wavelength by wavelength, emissivity is equal to absorptivity – another technical term, which says what proportion of incident radiation is absorbed by the atmosphere. If the atmosphere can’t emit, it can’t absorb (note 2).

So as you increase the GHGs in the atmosphere you increase its ability to cool to space. A lot of people reach this point during their climate science journey and finally realize how they have been duped by climate science all along! It’s irrefutable – more GHGs mean more cooling to space, more GHGs mean less global warming!

Ok, it’s true. Now the game’s up, I’ll pack up Science of Doom into a crate and start writing about something else. Maybe cognitive dissonance..

Bye everyone!


Halfway through boxing everything up I realized there was a little complication to the simplicity of that paragraph. The atmosphere with more GHGs has a higher emissivity, but also a higher absorptivity.

Let’s draw a little diagram. Here are two “layers” (see note 3) of the atmosphere in two different cases. On the left 400 ppmv CO2, on the right 500 ppmv CO2 (and relative humidity of water vapor was set at 50%, surface temperature at 288K):

Cooling-to-space-2a

Figure 1

It’s clear that the two layers are both emitting more radiation with more CO2. More cooling to space.

For interest, the “total emissivity” of the top layer is 0.190 in the first case and 0.197 in the second case. The layer below has 0.389 and 0.395.

Let’s take a look at all of the numbers and see what is going on. This diagram is a little busier:

Cooling-to-space-3a

Figure 2

The key point is that the OLR (outgoing longwave radiation) is lower in the case with more CO2. Yet each layer is emitting more radiation. How can this be?

Take a look at the radiation entering the top layer on the left = 265.1, and add to that the emitted radiation = 23.0 – the total is 288.1. Now subtract the radiation leaving through the top boundary = 257.0 and we get the radiation absorbed in the layer. This is 31.1 W/m².

Compare that with the same calculation with more CO2 – the absorption is 32.2 W/m².
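
For readers who like to check the arithmetic, here is a minimal sketch of that budget for the left-hand (400 ppmv) layer, using only the numbers quoted above (the 500 ppmv case works the same way and gives the 32.2 W/m² just mentioned):

```python
# Energy budget for the top layer, 400 ppmv case – values (W/m²) quoted from Figure 2
entering_from_below = 265.1   # upward radiation entering through the bottom boundary
emitted_by_layer    = 23.0    # radiation emitted by the layer itself
leaving_through_top = 257.0   # radiation leaving through the top boundary

absorbed = entering_from_below + emitted_by_layer - leaving_through_top
print(absorbed)               # 31.1 W/m² absorbed in the layer
```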

This is the case all the way up through the atmosphere – each layer emits more because its emissivity has increased, but it also absorbs more because its absorptivity has increased by the same amount.

So more cooling to space, but unfortunately more absorption of the radiation below – two competing terms.

So why don’t they cancel out?

Emission of radiation is a result of local temperature and emissivity.

Absorption of radiation is the result of the incident radiation and absorptivity. Incident upwards radiation started lower in the atmosphere where it is hotter. So absorption changes always outweigh emission changes (note 4).

Conceptual Problems?

If it’s still not making sense then think about what happens as you reduce the GHGs in the atmosphere. The atmosphere emits less but absorbs even less of the radiation from below. So the outgoing longwave radiation increases. More surface radiation is making it to the top of atmosphere without being absorbed. So there is less cooling to space from the atmosphere, but more cooling to space from the climate system as a whole (surface plus atmosphere).

If you add lagging to a pipe, the temperature of the pipe increases (assuming of course it is “internally” heated with hot water). And yet, the pipe cools to the surrounding room via the lagging! Does that mean more lagging, more cooling? No, it’s just the transfer mechanism for getting the heat out.

That was just an analogy. Analogies don’t prove anything. If well chosen, they can be useful in illustrating problems. End of analogy disclaimer.

If you want to understand more about how radiation travels through the atmosphere and how GHG changes affect this journey, take a look at the series Visualizing Atmospheric Radiation.

 

Notes

Note 1: For more on the details see

Note 2: A very basic point – absolutely essential for understanding anything at all about climate science – is that the absorptivity of the atmosphere can be (and is) totally different from its emissivity when you are considering different wavelengths. The atmosphere is quite transparent to solar radiation, but quite opaque to terrestrial radiation – because they are at different wavelengths. 99% of solar radiation is at wavelengths less than 4 μm, and 99% of terrestrial radiation is at wavelengths greater than 4 μm. That’s because the sun’s surface is around 6000K while the earth’s surface is around 290K. So the atmosphere has low absorptivity of solar radiation (<4 μm) but high emissivity of terrestrial radiation.
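
For readers who want to check the 4 μm split between solar and terrestrial radiation, here is a minimal sketch that integrates the Planck function numerically (the 6000 K and 290 K values stand in for the sun and the earth’s surface; the wavelength grid is my own choice):

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23      # Planck constant, speed of light, Boltzmann constant (SI)

def planck(lam, T):
    """Blackbody spectral radiance as a function of wavelength (m) and temperature (K)."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

lam = np.logspace(-7.5, -3, 50000)            # 0.03 μm to 1000 μm
for T in (6000.0, 290.0):
    with np.errstate(over="ignore"):          # overflow at short wavelengths just gives zero radiance
        spectrum = planck(lam, T)
    frac_below_4um = np.trapz(spectrum[lam < 4e-6], lam[lam < 4e-6]) / np.trapz(spectrum, lam)
    print(T, frac_below_4um)                  # ≈ 0.99 at 6000 K, ≈ 0.001 at 290 K
```

The two fractions are consistent with the rough 99% split quoted above.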

Note 3: Any numerical calculation has to create some kind of grid. This is a very coarse grid, with 10 layers of roughly equal pressure in the atmosphere from the surface to 200 mbar. The grid assumes there is just one temperature for each layer. Of course the temperature is decreasing as you go up. We could divide the atmosphere into 30 layers instead. We would get more accurate results. We would find the same effect.

Note 4: The equations for radiative transfer are found in Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. The equations prove this effect.

In A Challenge for Bryan I put up a simple heat transfer problem and asked for the equations. Bryan elected not to provide these equations. So I provide the answer, but also attempt some enlightenment for people who don’t think the answer can be correct.

As DeWitt Payne noted, a post with a similar problem posted on Wattsupwiththat managed to gather some (unintentionally) hilarious comments.

Here’s the problem again:

Case 1

Spherical body, A, of radius ra, with an emissivity, εa =1. The sphere is in the vacuum of space.

It is internally heated by a mystery power source (let’s say nuclear, but it doesn’t matter), with power input = P.

The sphere radiates into deep space, let’s say the temperature of deep space = 0K to make the maths simpler.

1. What is the equation for the equilibrium surface temperature of the sphere, Ta?

Case 2

The conditions of case 1, but now body A is surrounded by a slightly larger spherical shell, B, which of course is itself now surrounded by deep space at 0K.

B has a radius rb, with an emissivity, εb =1. This shell is highly conductive and very thin.

2a. What is the equation for the new equilibrium surface temperature, Ta’?

2b. What is the equation for the equilibrium temperature, Tb, of shell B?

 

Notes:

The reason for the “slightly larger shell” is to avoid “complex” view factor issues. Of course, I’m happy to relax the requirement for “slightly larger” and let Bryan provide the more general answer.

The reason for the “highly conductive” and “thin” outer shell, B, is to avoid any temperature difference between the inside and the outside surfaces of the shell. That is, we can assume the outside surface is at the same temperature as the inside surface – both at temperature, Tb.

This kind of problem is a staple of introductory heat transfer. This is a “find the equilibrium” problem.

How do we solve these kinds of problems? It’s pretty easy once you understand the tools.

The first tool is the first law of thermodynamics. Steady state means temperatures have stabilized and so energy in = energy out. We draw a “boundary” around each body and apply the “boundary condition” of the first law.

The second tool is the set of equations that govern the movement of energy. These are the equations for conduction, convection and radiation. In this case we just have radiation to consider.

For people who see the solution, shake their heads and say “this can’t be” – stay on to the end and I will try to shed some light on possible conceptual problems. Of course, if it’s wrong, you should easily be able to provide the correct equations – or even if you can’t write equations you should be able to explain the flaw in the formulation of the equation.

In the original article I put some numbers down – “For anyone who wants to visualize some numbers: ra=1m, P=1000W, rb=1.01m“. I will use these to calculate an answer from the equations. I realize many readers aren’t comfortable with equations and so the answers will help illuminate the meaning of the equations.

I go through the equations in tedious detail, again for people who would like to follow the maths but don’t find maths easy.

Case 1

Energy in, Ein = Energy out, Eout  :  in Watts (Joules per second).

Ein = P

Eout = emission of thermal radiation per unit area x area

The first part is given by the Stefan-Boltzmann equation (σTa⁴, where σ = 5.67×10⁻⁸), and the second part by the equation for the surface area of a sphere (4πra²)

Eout = 4πra² x σTa⁴ …..[eqn 1]

Therefore, P = 4πra²σTa⁴ ….[eqn 2]

We have to rearrange the equation to see how Ta changes with the other factors:

Ta = [P / (4πra²σ)]^(1/4) ….[eqn 3]

If you aren’t comfortable with maths this might seem a little daunting. Let’s put the numbers in:

Ta = 194K (-80ºC)

Now we haven’t said anything about how long it takes to reach this temperature. We don’t have enough information for that. That’s the nice thing about steady state calculations, they are easier than dynamic calculations. We will look at that at the end.

Probably everyone is happy with this equation. Energy is conserved. No surprises and nothing controversial.
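
A quick numerical check of eqn 3, using the illustrative numbers given earlier (ra = 1 m, P = 1000 W):

```python
import math

sigma = 5.67e-8       # Stefan-Boltzmann constant, W/m²K⁴
P = 1000.0            # internal power, W
r_a = 1.0             # radius of sphere A, m

T_a = (P / (4 * math.pi * r_a**2 * sigma)) ** 0.25   # eqn 3
print(round(T_a, 1))  # ≈ 193.6 K, i.e. about 194 K
```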

Now we will apply the exact same approach to the second case.

Case 2

First we consider “body A”. Given that it is enclosed by another “body” – the shell B – we have to consider any energy being transferred by radiation from B to A. If it turns out to be zero, of course it won’t affect the temperature of body A.

Ein(a) = P + Eb-a ….[eqn 4], where Eb-a is a value we don’t yet know. It is the radiation from B absorbed by A.

Eout(a) = 4πra² x σTa⁴ ….[eqn 5] – this is the same as in case 1. Emission of radiation from a body only depends on its temperature (and emissivity and area but these aren’t changing between the two cases)

– we will look at shell B and come back to the last term in eqn 4.

Now the shell outer surface:

  • radiates out to space
  • receives no radiation, because we set space at absolute zero

Shell inner surface:

  • radiates in to A (in fact almost all of the radiation emitted from the inner surface is absorbed by A and for now we will treat it as all) – this was the term Eb-a
  • absorbs all of the radiation emitted by A, this is Eout(a)

And we made the shell thin and highly conductive so there is no temperature difference between the two surfaces. Let’s collect the heat transfer terms for shell B under steady state:

Ein(b) = Eout(a) + 0  …..[eqn 6] – energy in is all from the sphere A, and nothing from outside

          = 4πra² x σTa⁴ ….[eqn 6a] – we just took the value from eqn 5

Eout(b) = 4πrb² x σTb⁴ + 4πrb² x σTb⁴ …..[eqn 7] – energy out is the emitted radiation from the inner surface + emitted radiation from the outer surface

          = 2 x 4πrb² x σTb⁴ ….[eqn 7a]

And we know that for shell B, Ein = Eout, so we equate 6a and 7a:

4πra² x σTa⁴ = 2 x 4πrb² x σTb⁴ ….[eqn 8]

and now we can cancel a lot of the common terms:

ra² x Ta⁴ = 2 x rb² x Tb⁴ ….[eqn 8a]

and re-arrange to get Ta in terms of Tb:

Ta⁴ = 2rb²/ra² x Tb⁴ ….[eqn 8b]

Ta = [2rb²/ra²]^(1/4) x Tb ….[eqn 8b]

or we can write it the other way round:

Tb = [ra²/2rb²]^(1/4) x Ta ….[eqn 8c]

Using the numbers given, Ta = 1.2 Tb. So the sphere is 20% warmer than the shell (actually 2 to the power 1/4).

We need to use Ein=Eout for the sphere A to be able to get the full solution. We wrote down: Ein(a) = P + Eb-a ….[eqn 4]. Now we know “Eb-a” – this is one of the terms in eqn 7.

So:

Ein(a) = P + 4πrb² x σTb⁴ ….[eqn 9]

and Ein(a) = Eout(a), so:

P + 4πrb² x σTb⁴ = 4πra² x σTa⁴ ….[eqn 9a]

we can substitute the equation for Tb:

P + (4πra²/2) x σTa⁴ = 4πra² x σTa⁴ ….[eqn 9b]

the 2nd term on the left can be moved across and combined with the right hand side:

P = 2πra² x σTa⁴ ….[eqn 9c]

And so, voila:

T’a = [P / (2πra²σ)]^(1/4) ….[eqn 10] – I added a dash to Ta so we can compare it with the original value before the shell arrived.

T’a = 2^(1/4) Ta ….[eqn 11] – that is, sphere A is about 20% warmer in case 2 compared with case 1.

Using the numbers, T’a = 230 K (-43ºC). And Tb = 193 K (-81ºC)
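
And the same kind of check for case 2, evaluating eqns 10 and 8c with ra = 1 m, rb = 1.01 m, P = 1000 W:

```python
import math

sigma = 5.67e-8                   # Stefan-Boltzmann constant, W/m²K⁴
P, r_a, r_b = 1000.0, 1.0, 1.01   # power (W), sphere radius (m), shell radius (m)

T_a_dash = (P / (2 * math.pi * r_a**2 * sigma)) ** 0.25   # eqn 10
T_b = (r_a**2 / (2 * r_b**2)) ** 0.25 * T_a_dash          # eqn 8c
print(round(T_a_dash), round(T_b))                        # ≈ 230 K and ≈ 193 K
```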

Explaining the Results

In case 2, the inner sphere, A, has its temperature increase by 36K even though the same energy production takes place inside. Obviously, this can’t be right because we have created energy??.. let’s come back to that shortly.

Notice something very important – Tb in case 2 is almost identical to Ta in case 1. The difference is actually only due to the slight difference in surface area. Why?

The system has an energy production, P, in both cases.

  • In case 1, the sphere A is the boundary transferring energy to space and so its equilibrium temperature must be determined by P
  • In case 2, the shell B is the boundary transferring energy to space and so its equilibrium temperature must be determined by P

Now let’s confirm the mystery unphysical totally fake invented energy.

Let’s compare the flux emitted from A in case 1 and case 2. I’ll call it R.

  • R(case 1) = 80 W/m²
  • R(case 2) = 159 W/m²

This is obviously rubbish. The same energy source inside the sphere and we doubled the sphere’s energy production!!! Get this idiot to take down this post, he has no idea what he is writing..

Yet if we check the energy balance we find that 80 W/m² is being “created” by our power source, and the “extra mystery” energy of 79 W/m² is coming from our outer shell. In any given second no energy is created.

The Mystery Invented Energy – Revealed

When we snapped the outer shell over the sphere we made it harder for heat to get out of the system. Energy in = energy out, in steady state. When we are not in steady state: energy in – energy out = energy retained. Energy retained is internal energy which is manifested as temperature.

We made it hard for heat to get out, which accumulated energy, which increased temperature.. until finally the inner sphere A was hot enough for all of the internally generated energy, P, to get out of the system.

Let’s add some information about the system: the heat capacity of the sphere = 1000 J/K; the heat capacity of the shell = 100 J/K. It doesn’t much matter what they are, it’s just to calculate the transients. We snap the shell – originally at 0K – around the sphere at time t=100 seconds and see what happens.

The top graph shows temperature, the bottom graph shows change in energy of the two objects and how much energy is leaving the system:

Bryan-sphere

At 100 seconds we see that instead of our steady-state 1000 W leaving the system, 0 W leaves the system. This is the important part of the mystery energy puzzle.

We put a 0K shell around the sphere. This absorbs all the energy from the sphere. At time t=100s the shell is still at 0K so it emits 0W/m². It heats up pretty quickly, but remember that emission of radiation is not linear with temperature so you don’t see a linear relationship between the temperature of shell B and the energy leaving to space. For example at 100K, the outward emission is 6 W/m², at 150K it is 29 W/m² and at its final temperature of 193K, it is 79 W/m² (=1000 W in total).

As the shell heats up it emits more and more radiation inwards, heating up the sphere A.

The mystery energy has been revealed. The addition of a radiation barrier stopped energy leaving, which stored heat. The way equilibrium is finally restored is due to the temperature increase of the sphere.
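
For anyone who wants to reproduce something like the transient plot, here is a minimal sketch – a simple forward-Euler integration with the illustrative heat capacities quoted above. The time step, run length and starting point (sphere already at its case 1 equilibrium, shell snapped on at 0K at t=0) are my own choices:

```python
import math

sigma = 5.67e-8
P, r_a, r_b = 1000.0, 1.0, 1.01        # W, m, m
C_a, C_b = 1000.0, 100.0               # heat capacities of sphere and shell, J/K
A_a, A_b = 4*math.pi*r_a**2, 4*math.pi*r_b**2

T_a, T_b = 193.6, 0.0                  # sphere at case 1 equilibrium, shell at 0 K
dt = 0.1                               # time step, s

for _ in range(int(20000 / dt)):
    E_a = sigma * T_a**4 * A_a         # radiation emitted by the sphere (all absorbed by the shell)
    E_b = sigma * T_b**4 * A_b         # radiation emitted by each face of the shell
    T_a += (P + E_b - E_a) / C_a * dt  # sphere: internal power + back-radiation - emission
    T_b += (E_a - 2 * E_b) / C_b * dt  # shell: absorbs E_a, emits from both faces

print(round(T_a), round(T_b))          # tends towards ≈ 230 K and ≈ 193 K
```

Recording T_a and T_b at each step should reproduce the qualitative behavior shown in the figure.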

Of course, for some strange reason an army of people thinks this is totally false. Well, produce your equations.. (this never happens)

All we have done here is used conservation of energy and the Stefan Boltzmann law of emission of thermal radiation.

Bryan needs no introduction on this blog, but if we were to introduce him it would be as the fearless champion of Gerlich and Tscheuschner.

Bryan has been trying to teach me some basics on heat transfer from the Ladybird Book of Thermodynamics. In hilarious fashion we both already agree on that particular point.

So now here is a problem for Bryan to solve.

Of course, in Game of Thrones fashion, Bryan can nominate his own champion to solve the problem.

Case A

Spherical body, A, of radius ra, with an emissivity, εa =1. The sphere is in the vacuum of space.

It is internally heated by a mystery power source (let’s say nuclear, but it doesn’t matter), with power input = P.

The sphere radiates into deep space, let’s say the temperature of deep space = 0K to make the maths simpler.

1. What is the equation for the equilibrium surface temperature of the sphere, Ta?

Case B

The condition of case A, but now body A is surrounded by a slightly larger spherical shell, B, which of course is itself now surrounded by deep space at 0K.

B has a radius rb, with an emissivity, εb =1. This shell is highly conductive and very thin.

2a. What is the equation for the new equilibrium surface temperature, Ta’?

2b. What is the equation for the equilibrium temperature, Tb, of shell B?

 

Notes:

The reason for the “slightly larger shell” is to avoid “complex” view factor issues. Of course, I’m happy to relax the requirement for “slightly larger” and let Bryan provide the more general answer.

The reason for the “highly conductive” and “thin” outer shell, B, is to avoid any temperature difference between the inside and the outside surfaces of the shell. That is, we can assume the outside surface is at the same temperature as the inside surface – both at temperature, Tb.

For anyone who wants to visualize some numbers: ra=1m, P=1000W, rb=1.01m

This problem takes a couple of minutes to solve on a piece of paper. I suspect we will wait a decade for Bryan’s answer. But I love to be proved wrong!

In Part One we had a look at some introductory ideas. In this article we will look at one of the ground-breaking papers in chaos theory – Deterministic nonperiodic flow, Edward Lorenz (1963). It has been cited more than 13,500 times.

There might be some introductory books on non-linear dynamics and chaos that don’t include a discussion of this paper – or at least a mention – but they will be in a small minority.

Lorenz was thinking about convection in the atmosphere, or any fluid heated from below, and reduced the problem to just three simple equations. However, the equations were still non-linear and because of this they exhibit chaotic behavior.

Cencini et al describe Lorenz’s problem:

Consider a fluid, initially at rest, constrained by two infinite horizontal plates maintained at constant temperature and at a fixed distance from each other. Gravity acts on the system perpendicular to the plates. If the upper plate is maintained hotter than the lower one, the fluid remains at rest and in a state of conduction, i.e., a linear temperature gradient establishes between the two plates.

If the temperatures are inverted, gravity induced buoyancy forces tend to rise toward the top the hotter, and thus lighter fluid, that is at the bottom. This tendency is contrasted by viscous and dissipative forces of the fluid so that the conduction state may persist.

However, as the temperature differential exceeds a certain amount, the conduction state is replaced by a steady convection state: the fluid motion consists of steady counter-rotating vortices (rolls) which transport upwards the hot/light fluid in contact with the bottom plate and downwards the cold heavy fluid in contact with the upper one.

The steady convection state remains stable up to another critical temperature difference above which it becomes unsteady, very irregular and hardly predictable.

Willem Malkus and Lou Howard of MIT came up with an equivalent system – the simplest version is shown in this video:

Figure 1

Steven Strogatz (1994) – an excellent introduction to dynamical and chaotic systems – explains and derives the equivalence between the classic Lorenz equations and this tilted waterwheel.

L63 (as I’ll call these equations) has three variables apart from time: intensity of convection (x), temperature difference between ascending and descending currents (y), deviation of temperature from a linear profile (z).

Here are some calculated results for L63  for the “classic” parameter values and three very slightly different initial conditions (blue, red, green in each plot) over 5,000 seconds, showing the start and end 50 seconds – click to expand:

Lorenz63-5ksecs-x-y-vs-time-499px

Figure 2 – click to expand – initial conditions x,y,z = 0, 1, 0;  0, 1.001, 0;  0, 1.002, 0

We can see that quite early on the three runs diverge, and 5000 seconds later the system still exhibits similar “non-periodic” characteristics.
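
For readers who want to reproduce runs like these, here is a minimal sketch in Python (the figures here were generated in Matlab; scipy’s solve_ivp plays the role of ode45, and the equations and classic parameters are those listed in note 1):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, r=28.0, b=8.0/3.0):
    x, y, z = s
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

t_eval = np.arange(0.0, 5000.0, 0.01)          # 5,000 "seconds" of output at 0.01 s steps
runs = []
for y0 in (1.0, 1.001, 1.002):                 # the three nearly identical initial conditions
    sol = solve_ivp(lorenz, (0.0, 5000.0), [0.0, y0, 0.0],
                    t_eval=t_eval, rtol=1e-9, atol=1e-9)
    runs.append(sol.y)                         # rows are x(t), y(t), z(t)
```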

For interest let’s zoom in on just over 10 seconds of ‘x’ near the start and end:

Lorenz63-5ksecs-x-vs-time-zoom-499px

Figure 3

Going back to an important point from the first post, some chaotic systems will have predictable statistics even if the actual state at any future time is impossible to determine (due to uncertainty over the initial conditions).

So we’ll take a look at the statistics via a running average – click to expand:

Lorenz63-5ksecs-x-y-vs-time-average-499px

Figure 4 – click to expand

Two things stand out – first of all the running average over more than 100 “oscillations” still shows a large amount of variability. So at any one time, if we were to calculate the average from our current and historical experience we could easily end up calculating a value that was far from the “long term average”. Second – the “short term” average, if we can call it that, shows large variation at any given time between our slightly divergent initial conditions.

So we might believe – and be correct – that the long term statistics of slightly different initial conditions are identical, yet be fooled in practice.

Of course, surely it sorts itself out over a longer time scale?

I ran the same simulation (with just the first two starting conditions) for 25,000 seconds and then used a filter window of 1,000 seconds – click to expand:

Lorenz63-25ksecs-x-time-1000s-average-499px

 Figure 5 – click to expand

The total variability is less, but we have a similar problem – it’s just lower in magnitude. Again we see that the statistics of two slightly different initial conditions – if we were to view them by the running average at any one time –  are likely to be different even over this much longer time frame.

From this 25,000 second simulation:

  • take 10,000 random samples each of 25 second length and plot a histogram of the means of each sample (the sample means)
  • same again for 100 seconds
  • same again for 500 seconds
  • same again for 3,000 seconds

Repeat for the data from the other initial condition.
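
Here is a sketch of how that resampling could be done, for one of the initial conditions (it repeats the integration sketch shown earlier, but over 25,000 seconds; the random seed and bin count are my own choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, r=28.0, b=8.0/3.0):
    x, y, z = s
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

dt, t_end = 0.01, 25000.0
t_eval = np.arange(0.0, t_end, dt)
sol = solve_ivp(lorenz, (0.0, t_end), [0.0, 1.0, 0.0], t_eval=t_eval, rtol=1e-9)
x = sol.y[0]                                       # the x(t) series

rng = np.random.default_rng(0)
csum = np.concatenate(([0.0], np.cumsum(x)))       # cumulative sum for fast window means
for window_s in (25, 100, 500, 3000):
    n = int(window_s / dt)                         # points per window
    starts = rng.integers(0, len(x) - n, size=10000)
    means = (csum[starts + n] - csum[starts]) / n  # the 10,000 sample means
    hist, edges = np.histogram(means, bins=50)     # histogram of the sample means
```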

Here is the result:

Lorenz-25000s-histogram-of-means-2-conditions

Figure 6

To make it easier to see, here is the difference between the two sets of histograms, normalized by the maximum value in each set:

Lorenz-25000s-delta-histogram

Figure 7

This is a different way of viewing what we saw in figures 4 & 5.

The spread of sample means shrinks as we increase the time period but the difference between the two data sets doesn’t seem to disappear (note 2).

Attractors and Phase Space

The above plots show how variables change with time. There’s another way to view the evolution of system dynamics and that is by “phase space”. It’s a name for a different kind of plot.

So instead of plotting x vs time, y vs time and z vs time – let’s plot x vs y vs z – click to expand:

Lorenz63-first-50s-x-y-z-499px

Figure 8 – Click to expand – the colors blue, red & green represent the same initial conditions as in figure 2

Without some dynamic animation we can’t now tell how fast the system evolves. But we learn something else that turns out to be quite amazing. The system always ends up in the same region of phase space. Perhaps that doesn’t seem amazing yet..

Figure 8 was with three initial conditions that are almost identical. Let’s look at three initial conditions that are very different: x,y,z = 0, 1, 0;   5, 5, 5;   20, 8, 1:

Lorenz63-first-50s-x-y-z-3-different-conditions-499px

Figure 9 – Click to expand

Here’s an example (similar to figure 8) from Strogatz – a set of 10,000 closely separated initial conditions and how they separate at 3, 6, 9 and 15 seconds. The two key points:

  1. the fast separation of initial conditions
  2. the long term position of any of the initial conditions is still on the “attractor”

From Strogatz 1994

Figure 10

A dynamic visualization on Youtube with 500,000 initial conditions:

Figure 11

There’s a lot of theory around all of this as you might expect. But in brief, in a “dissipative system” the “phase volume” contracts exponentially to zero. Yet for the Lorenz system somehow it doesn’t quite manage that. Instead, there are an infinite number of 2-d surfaces. Or something. For the sake of a not overly complex discussion, a wide range of initial conditions ends up on something very close to a 2-d surface.

This is known as a strange attractor. And the Lorenz strange attractor looks like a butterfly.

Conclusion

Lorenz 1963 reduced convective flow (e.g., heating an atmosphere from the bottom) to a simple set of equations. Obviously these equations are a massively over-simplified version of anything like the real atmosphere. Yet, even with this very simple set of equations we find chaotic behavior.

Chaotic behavior in this example means:

  • very small differences get amplified extremely quickly so that no matter how much you increase your knowledge of your starting conditions it doesn’t help much (note 3)
  • starting conditions within certain boundaries will always end up within “attractor” boundaries, even though there might be non-periodic oscillations around this attractor
  • the long term (infinite) statistics can be deterministic but over any “smaller” time period the statistics can be highly variable

References

Deterministic nonperiodic flow, EN Lorenz, Journal of the Atmospheric Sciences (1963)

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Nonlinear Dynamics and Chaos, Steven H. Strogatz, Perseus Books (1994)

Notes

Note 1: The Lorenz equations:

dx/dt = σ (y-x)

dy/dt = rx – y – xz

dz/dt = xy – bz

where

x = intensity of convection

y = temperature difference between ascending and descending currents

z = deviation of temperature from a linear profile

σ = Prandtl number, ratio of momentum diffusivity to thermal diffusivity

r = Rayleigh number

b = “another parameter” – a geometric factor related to the aspect ratio of the convection rolls

And the “classic parameters” are σ=10, b = 8/3, r = 28

Note 2: Lorenz 1963 has over 13,000 citations so I haven’t been able to find out if this system of equations is transitive or intransitive. Running Matlab on a home Mac reaches some limitations and I maxed out at 25,000 second simulations mapped onto a 0.01 second time step.

However, I’m not trying to prove anything specifically about the Lorenz 1963 equations, more illustrating some important characteristics of chaotic systems

Note 3: Small differences in initial conditions grow exponentially, until we reach the limits of the attractor. So it’s easy to show the “benefit” of more accurate data on initial conditions.

If we increase our precision on initial conditions by a factor of 1,000,000, the prediction time increases by a “massive” 2½ times.
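
A minimal sketch of the reasoning: if an initial error δ grows roughly as δ·exp(λt), then the time for it to reach some tolerance Δ is t ≈ (1/λ)·ln(Δ/δ). Improving the initial precision by a factor of 1,000,000 only adds (1/λ)·ln(10⁶) ≈ 14/λ to that horizon – a fixed increment, not a proportional gain – which for plausible values of λ and Δ/δ comes out at something like the factor of 2½ quoted above.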

There are many classes of systems but in the climate blogosphere world two ideas about climate seem to be repeated the most.

In camp A:

We can’t forecast the weather two weeks ahead so what chance have we got of forecasting climate 100 years from now.

And in camp B:

Weather is an initial value problem, whereas climate is a boundary value problem. On the timescale of decades, every planetary object has a mean temperature mainly given by the power of its star according to Stefan-Boltzmann’s law combined with the greenhouse effect. If the sources and sinks of CO2 were chaotic and could quickly release and sequester large fractions of gas perhaps the climate could be chaotic. Weather is chaotic, climate is not.

Of course, like any complex debate, simplified statements don’t really help. So this article kicks off with some introductory basics.

Many inhabitants of the climate blogosphere already know the answer to this subject, and with much conviction. A reminder for new readers that on this blog opinions are not so interesting, although occasionally entertaining. So instead, try to explain what evidence there is for your opinion. And, as suggested in About this Blog:

And sometimes others put forward points of view or “facts” that are obviously wrong and easily refuted.  Pretend for a moment that they aren’t part of an evil empire of disinformation and think how best to explain the error in an inoffensive way.

Pendulums

The equation for a simple pendulum is “non-linear”, although there is a simplified version of the equation, often used in introductions, which is linear. However, the number of variables involved is only two:

  • angle
  • speed

and this isn’t enough to create a “chaotic” system.

If we have a double pendulum, one pendulum attached at the bottom of another pendulum, we do get a chaotic system. There are some nice visual simulations around, which St. Google might help interested readers find.

If we have a forced damped pendulum like this one:

Pendulum-forced

Figure 1 – the blue arrows indicate that the point O is being driven up and down by an external force

-we also get a chaotic system.

What am I talking about? What is linear & non-linear? What is a “chaotic system”?

Digression on Non-Linearity for Non-Technical People

Common experience teaches us about linearity. If I pick up an apple in the supermarket it weighs about 0.15 kg or 150 grams (also known in some countries as “about 5 ounces”). If I take 10 apples the collection weighs 1.5 kg. That’s pretty simple stuff. Most of our real world experience follows this linearity and so we expect it.

On the other hand, if I were near a very cold black surface held at 170K (-103ºC) and measured the radiation emitted it would be 47 W/m². If we then double the temperature of this surface to 340K (67ºC) what would I measure? 94 W/m²? Seems reasonable – double the absolute temperature and get double the radiation.. But it’s not correct.

The right answer is 758 W/m², which is 16x the amount. Surprising, but most actual physics, engineering and chemistry is like this. Double a quantity and you don’t get double the result.
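
A one-line check of those numbers, using the Stefan-Boltzmann law (emission = σT⁴):

```python
sigma = 5.67e-8                          # Stefan-Boltzmann constant, W/m²K⁴
print(sigma * 170**4, sigma * 340**4)    # ≈ 47 W/m² and ≈ 758 W/m² – a factor of 2⁴ = 16
```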

It gets more confusing when we consider the interaction of other variables.

Let’s take riding a bike [updated thanks to Pekka]. Once you get above a certain speed most of the resistance comes from the wind so we will focus on that. Typically the wind resistance increases as the square of the speed. So if you double your speed you get four times the wind resistance. Work done = force x distance moved, so with no head wind power input has to go up as the cube of speed (note 4). This means you have to put in 8x the effort to get 2x the speed.

On Sunday you go for a ride and the wind speed is zero. You get to 25 km/hr (16 miles/hr) by putting a bit of effort in – let’s say you are producing 150W of power (I have no idea what the right amount is). You want your new speedo to register 50 km/hr – so you have to produce 1,200W.

On Monday you go for a ride and the wind speed is 20 km/hr into your face. Probably should have taken the day off.. Now with 150W you get to only 14 km/hr, it takes almost 500W to get to your basic 25 km/hr, and to get to 50 km/hr it takes almost 2,400W. No chance of getting to that speed!

On Tuesday you go for a ride and the wind speed is the same so you go in the opposite direction and take the train home. Now with only 6W you get to go 25 km/hr, to get to 50km/hr you only need to pump out 430W.

In mathematical terms it’s quite simple: F = k(v-w)², Force = (a constant, k) x (road speed – wind speed) squared, where the wind speed w is positive for a tail wind and negative for a head wind. Power, P = Fv = kv(v-w)². But notice that the effect of the “other variable”, the wind speed, has really complicated things.

To double your speed on the first day you had to produce eight times the power. To double your speed the second day you had to produce almost five times the power. To double your speed the third day you had to produce just over 70 times the power. All with the same physics.
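
For anyone who wants to check those numbers, here is a minimal sketch. The constant k is fixed from the Sunday figures (150 W at 25 km/hr in still air), and w is the wind velocity in the direction of travel, so a head wind is negative:

```python
def power(v, w, k):
    """Power (W) needed at road speed v (km/hr) with wind w (km/hr, tail wind positive)."""
    return k * v * (v - w)**2

k = 150.0 / (25 * 25**2)                       # fixed so that 25 km/hr in still air costs 150 W

print(power(50, 0, k))                         # Sunday, no wind: 1200 W
print(power(25, -20, k), power(50, -20, k))    # Monday, 20 km/hr head wind: ≈ 486 W and ≈ 2352 W
print(power(25, 20, k), power(50, 20, k))      # Tuesday, tail wind: 6 W and 432 W
```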

The real problem with nonlinearity isn’t the problem of keeping track of these kind of numbers. You get used to the fact that real science – real world relationships – has these kind of factors and you come to expect them. And you have an equation that makes calculating them easy. And you have computers to do the work.

No, the real problem with non-linearity (the real world) is that many of these equations link together and solving them is very difficult and often only possible using “numerical methods”.

It is also the reason why something like climate feedback is very difficult to measure. Imagine measuring the change in power required to double speed on the Monday. It’s almost 5x, so you might think the relationship is something like the square of speed. On Tuesday it’s about 70 times, so you would come up with a completely different relationship. In this simple case we know that wind speed is a factor, we can measure it, and so we can “factor it out” when we do the calculation. But in a more complicated system, if you don’t know the “confounding variables”, or the relationships, what are you measuring? We will return to this question later.

When you start out doing maths, physics, engineering.. you do “linear equations”. These teach you how to use the tools of the trade. You solve equations. You rearrange relationships using equations and mathematical tricks, and these rearranged equations give you insight into how things work. It’s amazing. But then you move to “nonlinear” equations, aka the real world, which turns out to be mostly insoluble. So nonlinear isn’t something special, it’s normal. Linear is special. You don’t usually get it.

..End of digression

Back to Pendulums

Let’s take a closer look at a forced damped pendulum. Damped, in physics terms, just means there is something opposing the movement. We have friction from the air and so over time the pendulum slows down and stops. That’s pretty simple. And not chaotic. And not interesting.

So we need something to keep it moving. We drive the pivot point at the top up and down and now we have a forced damped pendulum. The equation that results (note 1) has the massive number of three variables – position, speed and now time to keep track of the driving up and down of the pivot point. Three variables seems to be the minimum to create a chaotic system (note 2).

As we increase the ratio of the forcing amplitude to the length of the pendulum (β in note 1) we can move through three distinct types of response:

  • simple response
  • a “chaotic start” followed by a deterministic oscillation
  • a chaotic system

This is typical of chaotic systems – certain parameter values or combinations of parameters can move the system between quite different states.

Here is a plot (note 3) of position vs time for the chaotic system, β=0.7, with two initial conditions, only different from each other by 0.1%:


Forced damped harmonic pendulum, b=0.7: Start angular speed 0.1; 0.1001

Figure 2

It’s a little misleading to view the angle like this because it is in radians and so needs to be mapped between 0-2π (but then we get a discontinuity on a graph that doesn’t match the real world). We can map the graph onto a cylinder plot but it’s a mess of reds and blues.

Another way of looking at the data is via the statistics – so here is a histogram of the position (θ), mapped to 0-2π, and angular speed (dθ/dt) for the two starting conditions over the first 10,000 seconds:


Histograms for 10,000 seconds

Figure 3

We can see they are similar but not identical (note the different scales on the y-axis).

That might be due to the shortness of the run, so here are the results over 100,000 seconds:

Pendulum-0.7-100k seconds-2 conditions-hist

Histogram for 100,000 seconds

Figure 4

As we increase the timespan of the simulation the statistics of two slightly different initial conditions become more alike.

So if we want to know the state of a chaotic system at some point in the future, very small changes in the initial conditions will amplify over time, making the result unknowable – or no different from picking the state from a random time in the future. But if we look at the statistics of the results we might find that they are very predictable. This is typical of many (but not all) chaotic systems.

Orbits of the Planets

The orbits of the planets in the solar system are chaotic. In fact, even 3-body systems moving under gravitational attraction have chaotic behavior. So how did we land a man on the moon? This raises the interesting questions of timescales and amount of variation. Planetary movement – for our purposes – is extremely predictable over a few million years. But over 10s of millions of years we might have trouble predicting exactly the shape of the earth’s orbit – eccentricity, time of closest approach to the sun, obliquity.

However, it seems that even over a much longer time period the planets will still continue in their orbits – they won’t crash into the sun or escape the solar system. So here we see another important aspect of some chaotic systems – the “chaotic region” can be quite restricted. So chaos doesn’t mean unbounded.

According to Cencini, Cecconi & Vulpiani (2010):

Therefore, in principle, the Solar system can be chaotic, but not necessarily this implies events such as collisions or escaping planets..

However, there is evidence that the Solar system is “astronomically” stable, in the sense that the 8 largest planets seem to remain bound to the Sun in low eccentricity and low inclination orbits for time of the order of a billion years. In this respect, chaos mostly manifest in the irregular behavior of the eccentricity and inclination of the less massive planets, Mercury and Mars. Such variations are not large enough to provoke catastrophic events before extremely large time. For instance, recent numerical investigations show that for catastrophic events, such as “collisions” between Mercury and Venus or Mercury failure into the Sun, we should wait at least a billion years.

And bad luck, Pluto.

Deterministic, non-Chaotic, Systems with Uncertainty

Just to round out the picture a little, even if a system is not chaotic and is deterministic we might lack sufficient knowledge to be able to make useful predictions. If you take a look at figure 3 in Ensemble Forecasting you can see that with some uncertainty of the initial velocity and a key parameter the resulting velocity of an extremely simple system has quite a large uncertainty associated with it.

This case is qualitatively different of course. By obtaining more accurate values of the starting conditions and the key parameters we can reduce our uncertainty. Small disturbances don’t grow over time to the point where our calculation of a future condition might as well just be selected from a random time in the future.

Transitive, Intransitive and “Almost Intransitive” Systems

Many chaotic systems have deterministic statistics. So we don’t know the future state beyond a certain time. But we do know that a particular position, or other “state” of the system, will lie within a given range for x% of the time, taken over a “long enough” timescale. These are transitive systems.

Other chaotic systems can be intransitive. That is, for a very slight change in initial conditions we can have a different set of long term statistics. So the system has no “statistical” predictability. Lorenz 1968 gives a good example.

Lorenz introduces the concept of almost intransitive systems. This is where, strictly speaking, the statistics over infinite time are independent of the initial conditions, but the statistics over “long time periods” are dependent on the initial conditions. And so he also looks at the interesting case (Lorenz 1990) of moving between states of the system (seasons), where we can think of the precise starting conditions each time we move into a new season moving us into a different set of long term statistics. I find it hard to explain this clearly in one paragraph, but Lorenz’s papers are very readable.

Conclusion?

This is just a brief look at some of the basic ideas.

Other Articles in the Series

Part Two – Lorenz 1963

References

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Climatic Determinism, Edward Lorenz (1968) – free paper

Can chaos and intransitivity lead to interannual variability, Edward Lorenz, Tellus (1990) – free paper

Notes

Note 1 – The equation is easiest to “manage” after the original parameters are transformed so that tω->t. That is, the period of external driving, T0=2π under the transformed time base.

Then:

Pendulum-forced-equation

where θ = angle, γ’ = γ/ω, α = g/Lω², β = h0/L;

these parameters based on γ = viscous drag coefficient, ω = angular speed of driving, g = acceleration due to gravity = 9.8m/s², L = length of pendulum, h0 = amplitude of driving of pivot point

Note 2 – This is true for continuous systems. Discrete systems can be chaotic with fewer variables

Note 3 – The results were calculated numerically using Matlab’s ODE (ordinary differential equation) solver, ode45.
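
For readers without Matlab, here is a rough equivalent sketch in Python. The exact equation is in the image in note 1 (not reproduced here); the form below is a common dimensionless version of a vertically driven, damped pendulum consistent with the parameters listed there, and the values chosen for γ’ and α are placeholders of my own:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed form (see caveat above): θ'' + γ'·θ' + (α + β·cos t)·sin θ = 0
def pendulum(t, s, gamma_p=0.1, alpha=1.0, beta=0.7):
    theta, omega = s
    return [omega, -gamma_p * omega - (alpha + beta * np.cos(t)) * np.sin(theta)]

t_eval = np.arange(0.0, 10000.0, 0.1)
for w0 in (0.1, 0.1001):                       # two starting angular speeds differing by 0.1%
    sol = solve_ivp(pendulum, (0.0, 10000.0), [0.0, w0], t_eval=t_eval, rtol=1e-8)
    theta_wrapped = np.mod(sol.y[0], 2*np.pi)  # map the angle onto 0–2π for the histograms
```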

Note 4 – Force = k(v-w)² where k is a constant, v = velocity, w = wind speed. Work done = Force x distance moved so Power, P = Force x velocity.

Therefore:

P = kv(v-w)²

If we know k, v & w we can find P. If we have P, k & w and want to find v it is a cubic equation that needs solving.

In The “Greenhouse” Effect Explained in Simple Terms I list, and briefly explain, the main items that create the “greenhouse” effect. I also explain why more CO2 (and other GHGs) will, all other things remaining equal, increase the surface temperature. I recommend that article as the place to go for the straightforward explanation of the “greenhouse” effect. It also highlights that the radiative balance higher up in the troposphere is the most important component of the “greenhouse” effect.

However, someone recently commented on my first Kramm & Dlugi article and said I was “plainly wrong”. Kramm & Dlugi were in complete agreement with Gerlich and Tscheuschner because they both claim the “purported greenhouse effect simply doesn’t exist in the real world”.

If it’s just about flying a flag or wearing a football jersey then I couldn’t agree more. However, science does rely on tedious detail and “facts” rather than football jerseys. As I pointed out in New Theory Proves AGW Wrong! two contradictory theories don’t add up to two theories making the same case..

In the case of the first Kramm & Dlugi article I highlighted one point only. It wasn’t their main point. It wasn’t their minor point. They weren’t even making a point of it at all.

Many people believe the “greenhouse” effect violates the second law of thermodynamics; these people are herein called “the illuminati”.

Kramm & Dlugi’s equation demonstrates that the illuminati are wrong. I thought this was worth pointing out.

The “illuminati” don’t understand entropy, can’t provide an equation for entropy, or even demonstrate the flaw in the simplest example of why the greenhouse effect is not in violation of the second law of thermodynamics. Therefore, it is necessary to highlight the (published) disagreement between celebrated champions of the illuminati – even if their demonstration of the disagreement was unintentional.

Let’s take a look.

Here is one of the most popular G&T graphics in the blogosphere:


From Gerlich & Tscheuschner

Figure 1

It’s difficult to know how to criticize an imaginary diagram. We could, for example, point out that it is imaginary. But that would be picky.

We could say that no one draws this diagram in atmospheric physics. That should be sufficient. But as so many of the illuminati have learnt their application of the second law of thermodynamics to the atmosphere from this fictitious diagram I feel the need to press forward a little.

Here is an extract from a widely-used undergraduate textbook on heat transfer, with a little annotation (red & blue):


From “Fundamentals of Heat and Mass Transfer” by Incropera & DeWitt (2007)

Figure 2

This is the actual textbook, before the Gerlich manoeuvre as I would like to describe it. We can see in the diagram and in the text that radiation travels both ways and there is a net transfer which is from the hotter to the colder. The term “net” is not really capable of being confused. It means one minus the other, “x-y”. Not “x”. (For extracts from six heat transfer textbooks and their equations read Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics).

Now let’s apply the Gerlich manoeuvre (compare fig. 2):

Fundamentals-of-heat-and-mass-transfer-post-G&T

Not from “Fundamentals of Heat and Mass Transfer”, or from any textbook ever

Figure 3

So hopefully that’s clear. Proof by parody. This is “now” a perpetual motion machine and so heat transfer textbooks are wrong. All of them. Somehow.

Just for comparison, we can review the globally annually averaged values of energy transfer in the atmosphere, including radiation, from Kiehl & Trenberth (I use the 1997 version because it is so familiar even though values were updated more recently):


From Kiehl & Trenberth (1997)

Figure 4

It should be clear that the radiation from the hotter surface is higher than the radiation from the colder atmosphere. If anyone wants this explained, please ask.

I could apply the Gerlich manoeuvre to this diagram but they’ve already done that in their paper (as shown above in figure 1).

So lastly, we return to Kramm & Dlugi, and their “not even a tiny point”, which nevertheless makes a useful point. They don’t provide a diagram, they provide an equation for energy balance at the surface – and I highlight each term in the equation to assist the less mathematically inclined:

Kramm-Dlugi-2011-eqn-highlight

 

Figure 5

The equation says, the sum of all fluxes – at one point on the surface = 0. This is an application of the famous first law of thermodynamics, that is, energy cannot be created or destroyed.

The red term – absorbed atmospheric radiation – is the radiation from the colder atmosphere absorbed by the hotter surface. This is also known as “DLR” or “downward longwave radiation”, and as “back-radiation”.
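
Kramm & Dlugi’s exact notation isn’t reproduced here, but a generic surface energy balance of the same form (all fluxes in W/m², a sketch rather than their own equation) is:

(1 − α)·S↓ + RL↓ − εσTs⁴ − H − λE − G = 0

where S↓ is the incoming solar radiation and α the surface albedo, RL↓ is the absorbed atmospheric radiation (the red term), εσTs⁴ the emitted surface radiation, H the sensible heat flux, λE the latent heat flux and G the heat flux into the ground.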

Now, let’s assume that the atmospheric radiation increases in intensity over a small period. What happens?

The only way this equation can continue to be true is for one or more of the last 4 terms to increase.

  • The emitted surface radiation – can only increase if the surface temperature increases
  • The latent heat transfer – can only increase if there is an increase in wind speed or in the humidity differential between the surface and the atmosphere just above
  • The sensible heat transfer – can only increase if there is an increase in wind speed or in the temperature differential between the surface and the atmosphere just above
  • The heat transfer into the ground – can only increase if the surface temperature increases or the temperature below ground spontaneously cools

So, when atmospheric radiation increases the surface temperature must increase (or amazingly the humidity differential spontaneously increases to balance, but without a surface temperature change). According to G&T and the illuminati this surface temperature increase is impossible. According to Kramm & Dlugi, this is inevitable.

I would love it for Gerlich or Tscheuschner to show up and confirm (or deny?):

  • yes the atmosphere does emit thermal radiation
  • yes the surface of the earth does absorb atmospheric thermal radiation
  • yes this energy does not disappear (1st law of thermodynamics)
  • yes this energy must increase the temperature of the earth’s surface above what it would be if this radiation did not exist (1st law of thermodynamics)

Or even, which one of the above is wrong. That would be outstanding.

Of course, I know they won’t do that – even though I’m certain they believe all of the above points. (Likewise, Kramm & Dlugi won’t answer the question I have posed of them).

Well, we all know why..

Hopefully, the illuminati can contact Kramm & Dlugi and explain to them where they went wrong. I have my doubts that any of the illuminati have grasped the first law of thermodynamics or the equation for temperature change and heat capacity, but who could say.

In Ensemble Forecasting we had a look at the principles behind “ensembles of initial conditions” and “ensembles of parameters” in forecasting weather. Climate models are a little different from weather forecasting models but use the same physics and the same framework.

A lot of people, including me, have questions about “tuning” climate models. While looking for what the latest IPCC report (AR5) had to say about ensembles of climate models I found a reference to Tuning the climate of a global model by Mauritsen et al (2012). Unless you work in the field of climate modeling you don’t know the magic behind the scenes. This free paper (note 1) gives some important insights and is very readable:

The need to tune models became apparent in the early days of coupled climate modeling, when the top of the atmosphere (TOA) radiative imbalance was so large that models would quickly drift away from the observed state. Initially, a practice to input or extract heat and freshwater from the model, by applying flux-corrections, was invented to address this problem. As models gradually improved to a point when flux-corrections were no longer necessary, this practice is now less accepted in the climate modeling community.

Instead, the radiation balance is controlled primarily by tuning cloud-related parameters at most climate modeling centers while others adjust the ocean surface albedo or scale the natural aerosol climatology to achieve radiation balance. Tuning cloud parameters partly masks the deficiencies in the simulated climate, as there is considerable uncertainty in the representation of cloud processes. But just like adding flux-corrections, adjusting cloud parameters involves a process of error compensation, as it is well appreciated that climate models poorly represent clouds and convective processes. Tuning aims at balancing the Earth’s energy budget by adjusting a deficient representation of clouds, without necessarily aiming at improving the latter.

A basic requirement of a climate model is reproducing the temperature change from pre-industrial times (mid 1800s) until today. So the focus is on temperature change, or in common terminology, anomalies.

It was interesting to see that if we plot the “actual modeled temperatures” from 1850 to present the picture doesn’t look so good (the grey curves are models from the coupled model inter-comparison projects: CMIP3 and CMIP5):


From Mauritsen et al 2012

Figure 1

The authors state:

There is considerable coherence between the model realizations and the observations; models are generally able to reproduce the observed 20th century warming of about 0.7 K..

Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. Although the inter-model span is only one percent relative to absolute zero, that argument fails to be reassuring. Relative to the 20th century warming the span is a factor four larger, while it is about the same as our best estimate of the climate response to a doubling of CO2, and about half the difference between the last glacial maximum and present.

They point out that adjusting parameters might just be offsetting one error against another..

In addition to targeting a TOA radiation balance and a global mean temperature, model tuning might strive to address additional objectives, such as a good representation of the atmospheric circulation, tropical variability or sea-ice seasonality. But in all these cases it is usually to be expected that improved performance arises not because uncertain or non-observable parameters match their intrinsic value – although this would clearly be desirable – rather that compensation among model errors is occurring. This raises the question as to whether tuning a model influences model-behavior, and places the burden on the model developers to articulate their tuning goals, as including quantities in model evaluation that were targeted by tuning is of little value. Evaluating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models intrinsic qualities.

[Emphasis added]. And they give a bit more insight into the tuning process:

A few model properties can be tuned with a reasonable chain of understanding from model parameter to the impact on model representation, among them the global mean temperature. It is comprehendible that increasing the models low-level cloudiness, by for instance reducing the precipitation efficiency, will cause more reflection of the incoming sunlight, and thereby ultimately reduce the model’s surface temperature.

Likewise, we can slow down the Northern Hemisphere mid-latitude tropospheric jets by increasing orographic drag, and we can control the amount of sea ice by tinkering with the uncertain geometric factors of ice growth and melt. In a typical sequence, first we would try to correct Northern Hemisphere tropospheric wind and surface pressure biases by adjusting parameters related to the parameterized orographic gravity wave drag. Then, we tune the global mean temperature as described in Sections 2.1 and 2.3, and, after some time when the coupled model climate has come close to equilibrium, we will tune the Arctic sea ice volume (Section 2.4).

In many cases, however, we do not know how to tune a certain aspect of a model that we care about representing with fidelity, for example tropical variability, the Atlantic meridional overturning circulation strength, or sea surface temperature (SST) biases in specific regions. In these cases we would rather monitor these aspects and make decisions on the basis of a weak understanding of the relation between model formulation and model behavior.

Here we see how CMIP3 & 5 models “drift” – that is, over a long period of simulation time how the surface temperature varies with TOA flux imbalance (and also we see the cold bias of the models):


From Mauritsen et al 2012

Figure 2

If a model equilibrates at a positive radiation imbalance it indicates that it leaks energy, which appears to be the case in the majority of models, and if the equilibrium balance is negative it means that the model has artificial energy sources. We speculate that the fact that the bulk of models exhibit positive TOA radiation imbalances, and at the same time are cold-biased, is due to them having been tuned without account for energy leakage.

[Emphasis added].

From that graph they discuss the implied sensitivity to radiative forcing of each model (the slope of each model and how it compares with the blue and red “sensitivity” curves).

We get to see some of the parameters that are played around with (a-h in the figure):


From Mauritsen et al 2012

Figure 3

And how changing some of these parameters affects (over a short run) “headline” parameters like TOA imbalance and cloud cover:


From Mauritsen et al 2012

Figure 4 – Click to Enlarge

There’s also quite a bit in the paper about tuning the Arctic sea ice that will be of interest for Arctic sea ice enthusiasts.

In some of the final steps we get a great insight into how the whole machine goes through its final tune up:

..After these changes were introduced, the first parameter change was a reduction in two non-dimensional parameters controlling the strength of orographic wave drag from 0.7 to 0.5.

This greatly reduced the low zonal mean wind- and sea-level pressure biases in the Northern Hemisphere in atmosphere-only simulations, and further had a positive impact on the global to Arctic temperature gradient and made the distribution of Arctic sea-ice far more realistic when run in coupled mode.

In a second step the conversion rate of cloud water to rain in convective clouds was doubled from 1×10⁻⁴ s⁻¹ to 2×10⁻⁴ s⁻¹ in order to raise the OLR to be closer to the CERES satellite estimates.

At this point it was clear that the new coupled model was too warm compared to our target pre- industrial temperature. Different measures using the convection entrainment rates, convection overshooting fraction and the cloud homogeneity factors were tested to reduce the global mean temperature.

In the end, it was decided to use primarily an increased homogeneity factor for liquid clouds from 0.70 to 0.77 combined with a slight reduction of the convective overshooting fraction from 0.22 to 0.21, thereby making low-level clouds more reflective to reduce the surface temperature bias. Now the global mean temperature was sufficiently close to our target value and drift was very weak. At this point we decided to increase the Arctic sea ice volume from 18×10¹² m³ to 22×10¹² m³ by raising the cfreeze parameter from 1/2 to 2/3. ECHAM5/MPIOM had this parameter set to 4/5. These three final parameter settings were done while running the model in coupled mode.

Some of the paper’s results (not shown here) are some “parallel worlds” with different parameters. In essence, while working through the model development phase they took a lot of notes of what they did, what they changed, and at the end they went back and created some alternatives from some of their earlier choices. The parameter choices along with a set of resulting climate properties are shown in their table 10.

Some summary statements:

Parameter tuning is the last step in the climate model development cycle, and invariably involves making sequences of choices that influence the behavior of the model. Some of the behavioral changes are desirable, and even targeted, but others may be a side effect of the tuning. The choices we make naturally depend on our preconceptions, preferences and objectives. We choose to tune our model because the alternatives – to either drift away from the known climate state, or to introduce flux-corrections – are less attractive. Within the foreseeable future climate model tuning will continue to be necessary as the prospects of constraining the relevant unresolved processes with sufficient precision are not good.

Climate model tuning has developed well beyond just controlling global mean temperature drift. Today, we tune several aspects of the models, including the extratropical wind- and pressure fields, sea-ice volume and to some extent cloud-field properties. By doing so we clearly run the risk of building the models’ performance upon compensating errors, and the practice of tuning is partly masking these structural errors. As one continues to evaluate the models, sooner or later these compensating errors will become apparent, but the errors may prove tedious to rectify without jeopardizing other aspects of the model that have been adjusted to them.

Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure. Most other observational datasets sooner or later meet the same destiny, at least beyond the first time they are applied for model evaluation. That is not to say that climate models can be readily adapted to fit any dataset, but once aware of the data we will compare with model output and invariably make decisions in the model development on the basis of the results. Rather, our confidence in the results provided by climate models is gained through the development of a fundamental physical understanding of the basic processes that create climate change. More than a century ago it was first realized that increasing the atmospheric CO2 concentration leads to surface warming, and today the underlying physics and feedback mechanisms are reasonably understood (while quantitative uncertainty in climate sensitivity is still large). Coupled climate models are just one of the tools applied in gaining this understanding..

[Emphasis added].

..In this paper we have attempted to illustrate the tuning process, as it is being done currently at our institute. Our hope is to thereby help de-mystify the practice, and to demonstrate what can and cannot be achieved. The impacts of the alternative tunings presented were smaller than we thought they would be in advance of this study, which in many ways is reassuring. We must emphasize that our paper presents only a small glimpse at the actual development and evaluation involved in preparing a comprehensive coupled climate model – a process that continues to evolve as new datasets emerge, model parameterizations improve, additional computational resources become available, as our interests, perceptions and objectives shift, and as we learn more about our model and the climate system itself.

 Note 1: The link to the paper gives the html version. From there you can click the “Get pdf” link and it seems to come up ok – no paywall. If not try the link to the draft paper (but the formatting makes it not so readable)
