
Archive for the ‘Climate Models’ Category

In Latent heat and Parameterization I showed a formula for calculating latent heat transfer from the surface into the atmosphere, as well as the “real” formula. The parameterized version has horizontal wind speed x humidity difference (between the surface and some reference height in the atmosphere, typically 10m) x “a coefficient”.

One commenter asked:

Why do we expect that vertical transport of water vapor to vary linearly with horizontal wind speed? Is this standard turbulent mixing?

The simple answer is “almost yes”. But as someone famously said, make it simple, but not too simple.

Charting a course between too simple and too hard is a challenge with this subject. By contrast, radiative physics is a cakewalk. I’ll begin with some preamble and eventually get to the destination.

There is a set of equations describing the motion of fluids – the Navier-Stokes equations – which conserve momentum in the three directions (x, y, z) and also conserve mass. Then there are further equations to conserve humidity and heat. These equations describe the flow exactly, but solving them in practice is another matter. The Navier-Stokes equations in a rotating frame can be seen in The Coriolis Effect and Geostrophic Motion under “Some Maths”.

Simple linear equations with simple boundary conditions can be re-arranged to give a nice formula for the answer. Then you can plot this against that and everyone can see how the relationships change with different material properties or boundary conditions. In real life the equations are not linear and the boundary conditions are not simple. So there is no “analytical solution” – where we want to know, say, the velocity of the fluid in the east-west direction as a function of time and get a nice equation for the answer. Instead we have to use numerical methods.

Let’s take a simple problem – if you want to know heat flow through an odd-shaped metal plate that is heated in one corner and cooled by steady air flow on the rest of its surface you can use these numerical methods and usually get a very accurate answer.

Turbulence is a lot more difficult due to the range of scales involved. Here’s a nice image of turbulence:

Figure 1

There is a cascade of energy from the largest scales down to the point where viscosity “eats up” the kinetic energy. In the atmosphere this is the sub-1mm scale. So if you want to accurately numerically model atmospheric motion across a 100km scale you need a grid of roughly 100,000,000 x 100,000,000 x 10,000,000 points, solved at sub-second time steps for a few days. Well, that’s a lot of calculation. I’m not sure where turbulence modeling via “direct numerical simulation” has got to, but I’m pretty sure it is still too hard and in a decade it will still be a long way off. The computing power isn’t there.

Anyway, for atmospheric modeling you don’t really want to know the velocity in the x,y,z direction (usually annotated as u,v,w) at trillions of points every second. Who is going to dig through that data? What you want is a statistical description of the key features.

So if we take the Navier-Stokes equation and average, what do we get? We get a problem.

For the mathematically inclined the following is obvious, but of course many readers aren’t, so here’s a simple example:

Let’s take 3 numbers: 1, 10, 100:   the average = (1+10+100)/3 = 37.

Now let’s look at the square of those numbers: 1, 100, 10000:  the average of the square of those numbers = (1+100+10000)/3 = 3367.

But if we take the average of our original numbers and square it, we get 37² = 1369. It’s strange but the average squared is not the same as the average of the squared numbers. That’s non-linearity for you.
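For anyone who wants to check the arithmetic, here is a minimal Python sketch of the same calculation (just the three numbers from the example above):

values = [1, 10, 100]

mean_value = sum(values) / len(values)                       # 37.0
mean_of_squares = sum(v**2 for v in values) / len(values)    # 3367.0
square_of_mean = mean_value**2                               # 1369.0

# the last two agree only for linear operations
print(mean_value, mean_of_squares, square_of_mean)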

In the Navier-Stokes equations we have values like east velocity x upwards velocity, written as uw. The average of uw, written as \overline{uw}, is not equal to the average of u x the average of w, written as \overline{u}.\overline{w} – for the same reason we just looked at.

When we create the Reynolds averaged Navier-Stokes (RANS) equations we get lots of new terms like \overline{uw}. That is, we started with the original equations which gave us a complete solution – the same number of equations as unknowns. But when we average we end up with more unknowns than equations.

It’s like saying x + y = 1, what is x and y? No one can say. Perhaps 1 & 0. Perhaps 1000 & -999.

Digression on RANS for Slightly Interested People

The Reynolds approach is to take a value like u,v,w (velocity in 3 directions) and decompose into a mean and a “rapidly varying” turbulent component.

So u = \overline{u} + u', where \overline{u} = mean value;  u’ = the varying component. So \overline{u'} = 0. Likewise for the other directions.

And \overline{uw} = \overline{u} . \overline{w} + \overline{u'w'}

So in the original equation where we have a term like u . \frac{\partial u}{\partial x}, it turns into  (\overline{u} + u') . \frac{\partial (\overline{u} + u')}{\partial x}, which, when averaged, becomes:

\overline{u} . \frac{\partial \overline{u}}{\partial x} +\overline{u' . \frac{\partial u'}{\partial x}}

So 2 unknowns instead of 1. The first term is the averaged flow, the second term is the turbulent flow. (Well, it’s an advection term for the change in velocity following the flow)

When we look at the conservation of energy equation we end up with terms for the movement of heat upwards due to average flow (almost zero) and terms for the movement of heat upwards due to turbulent flow (often significant). That is, a term like \overline{\theta'w'} which is “the mean of potential temperature variations x upwards eddy velocity”.

Or, in plainer English, how heat gets moved up by turbulence.

..End of Digression

Closure and the Invention of New Ideas

“Closure” is a maths term. To “close the equations” when we have more unknowns than equations means we have to invent a new idea. Some geniuses like Reynolds, Prandtl and Kolmogoroff did come up with some smart new ideas.

Often the smart ideas are around “dimensionless terms” or “scaling terms”. The first time you encounter these ideas they seem odd or just plain crazy. But like everything, over time strange ideas start to seem normal.

The Reynolds number is probably the simplest to get used to. The Reynolds number seeks to relate fluid flows to other similar fluid flows. You can have fluid flow through a massive pipe that is identical in the way turbulence forms to that in a tiny pipe – so long as the viscosity and density change accordingly.

The Reynolds number, Re = density x length scale x mean velocity of the fluid / viscosity

And regardless of the actual physical size of the system and the actual velocity, turbulence forms for flow over a flat plate when the Reynolds number is about 500,000. By the way, for the atmosphere and ocean the flow is turbulent most of the time.
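Here is a minimal sketch of the calculation (the numbers are illustrative assumptions, not measurements): air at 5 m/s over a 10 m length scale gives a Reynolds number far above the transition value.

rho = 1.2        # air density, kg/m^3
U = 5.0          # mean velocity, m/s
L = 10.0         # length scale, m
mu = 1.8e-5      # dynamic viscosity of air, kg/(m s)

Re = rho * U * L / mu
print(f"Re = {Re:.2e}")   # ~3e6, well above the ~5e5 transition value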

Kolmogoroff came up with an idea in 1941 about the turbulent energy cascade using dimensional analysis and came to the conclusion that the energy of eddies increases with their size to the power 2/3 (in the “inertial subrange”). This is usually plotted as a spectrum against wavenumber (or frequency), where it becomes a -5/3 power law. Here’s a relatively recent experimental verification of this power law:

From Durbin & Reif 2010

Figure 2

In a less genius-like manner, people measure stuff and use these measured values to “close the equations” for “similar” circumstances. Unfortunately, the measurements are only valid in a small range around the experimental conditions, and with turbulence it is hard to predict where that range ends.

A nice simple example, to which I hope to return because it is critical in modeling climate, is vertical eddy diffusivity in the ocean. By way of introduction to this, let’s look at heat transfer by conduction.

If only all heat transfer was as simple as conduction. That’s why it’s always first on the list in heat transfer courses..

If we have a plate of thickness d, and we hold one side at temperature T1 and the other side at temperature T2, the heat conduction per unit area is:

H_z = \frac{k(T_2-T_1)}{d}

where k is a material property called conductivity. We can measure this property and it’s always the same. It might vary with temperature but otherwise if you take a plate of the same material and have widely different temperature differences, widely different thicknesses – the heat conduction always follows the same equation.

Now using these ideas, we can take the actual equation for vertical heat flux via turbulence:

H_z =\rho c_p\overline{w'\theta'}

where w = vertical velocity, θ = potential temperature

And relate that to the heat conduction equation and come up with (aka ‘invent’):

H_z = -\rho c_p K . \frac{\partial \theta}{\partial z}

(the minus sign just says that the turbulent heat flux is down the gradient of potential temperature)

Now we have an equation we can actually use because we can measure how potential temperature changes with depth. The equation has a new “constant”, K. But this one is not really a constant, it’s not really a material property – it’s a property of the turbulent fluid in question. Many people have measured the “implied eddy diffusivity” and come up with a range of values which tells us how heat gets transferred down into the depths of the ocean.

Well, maybe it does. Maybe it doesn’t tell us very much that is useful. Let’s come back to that topic and that “constant” another day.
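Before moving on, here is a minimal sketch of the magnitudes involved, using the eddy-diffusivity form above with an assumed K and an assumed thermocline temperature gradient (illustrative values only, not measurements):

rho = 1025.0       # seawater density, kg/m^3
cp = 4000.0        # specific heat capacity of seawater, J/(kg K)
K = 1e-5           # assumed vertical eddy diffusivity, m^2/s
dtheta_dz = 0.02   # potential temperature gradient, K/m (warmer water above)

H_z = -rho * cp * K * dtheta_dz
print(f"H_z = {H_z:.2f} W/m^2")   # negative = downward heat flux, roughly -0.8 W/m^2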

The Main Dish – Vertical Heat Transfer via Horizontal Wind

Back to the original question. If you imagine a sheet of paper as big as your desk then that pretty much gives you an idea of the height of the troposphere (lower atmosphere where convection is prominent).

It’s as thin as a sheet of desk-sized paper in comparison to the dimensions of the earth. So any large-scale motion is horizontal, not vertical. Mean vertical velocities – which don’t include turbulence via strong localized convection – are very low. Mean horizontal velocities near the surface of the earth are on the order of 5-10 m/s; mean vertical velocities are on the order of cm/s.

Let’s look at flow over the surface under “neutral conditions”. This means that there is little buoyancy production due to strong surface heating. In this case the energy for turbulence close to the surface comes from the kinetic energy of the mean wind flow – which is horizontal.

There is a surface drag which gets transmitted up through the boundary layer until there is “free flow” at some height. By using dimensional analysis, we can figure out what this velocity profile looks like in the absence of strong convection. It’s logarithmic:

Surface-wind-profile

Figure 3 – for typical ocean surface

Lots of measurements confirm this logarithmic profile.

We can then calculate the surface drag – how momentum is transferred from the atmosphere to the ocean – from this logarithmic profile, and we come up with a simple expression:

\tau_0 = \rho C_D U_r^2

Where Ur is the velocity at some reference height (usually 10m), and CD is a constant calculated from the ratio of the reference height to the roughness height and the von Karman constant.

Using similar arguments we can come up with heat transfer from the surface. The principles are very similar. What we are actually modeling in the surface drag case is the turbulent vertical flux of horizontal momentum \rho \overline{u'w'} with a simple formula that just has mean horizontal velocity. We have “closed the equations” by some dimensional analysis.

Adding the Richardson number for non-neutral conditions we end up with a temperature difference along with a reference velocity to model the turbulent vertical flux of sensible heat \rho c_p . \overline{w'\theta'}. Similar arguments give latent heat flux L\rho . \overline{w'q'} in a simple form.

Now with a bit more maths..

At the surface the horizontal velocity must be zero. The vertical flux of horizontal momentum creates a drag on the boundary layer wind. The vertical gradient of the mean wind, U, can only depend on height z, density ρ and surface drag.

So the “characteristic wind speed” for dimensional analysis is called the friction velocity, u*, and u* = \sqrt\frac{\tau_0}{\rho}

This strange number has the units of velocity: m/s  – ask if you want this explained.

So dimensional analysis suggests that \frac{z}{u*} . \frac{\partial U}{\partial z} should be a constant – “scaled wind shear”. The inverse of that constant is known as the Von Karman constant, k = 0.4.

So a simple re-arrangement and integration gives:

U(z) = \frac{u*}{k} . ln(\frac{z}{z_0})

where z_0 is a constant from the integration – the roughness height, a physical property of the surface, which is the height at which the mean wind reaches zero.
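Putting these pieces together, here is a minimal sketch relating the log profile, the drag coefficient and the surface stress. The roughness height and friction velocity are assumed values, typical of a fairly smooth ocean surface:

import numpy as np

k = 0.4          # von Karman constant
z0 = 2e-4        # roughness height, m (assumed)
u_star = 0.3     # friction velocity, m/s (assumed)
rho = 1.2        # air density, kg/m^3
z_ref = 10.0     # reference height, m

U_ref = (u_star / k) * np.log(z_ref / z0)    # mean wind at the reference height
C_D = (k / np.log(z_ref / z0))**2            # bulk drag coefficient
tau_0 = rho * C_D * U_ref**2                 # surface stress; equals rho * u_star**2

print(f"U(10m) = {U_ref:.1f} m/s, C_D = {C_D:.2e}, tau_0 = {tau_0:.3f} N/m^2")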

The “real form” of the friction velocity is:

u*^2 = \frac{\tau_0}{\rho} = (\overline{u'w'}^2 + \overline{v'w'}^2)^\frac{1}{2},  where these eddy values are at the surface

We can pick a horizontal direction along the line of the mean wind (rotate coordinates) and come up with:

u*^2 = -\overline{u'w'}

If we consider a simple constant gradient argument:

\tau = - \rho . \overline{u'w'} = \rho K \frac{\partial \overline{u}}{\partial z}

where the first expression is the “real” equation and the second is the “invented” equation, or “our attempt to close the equation” from dimensional analysis.

Of course, this is showing how momentum is transferred, but the approach is pretty similar, just slightly more involved, for sensible and latent heat.

Conclusion

Turbulence is a hard problem. The atmosphere and ocean are turbulent so calculating anything is difficult. Until a new paradigm in computing comes along, the real equations can’t be numerically solved from the small scales – where viscous dissipation damps out the kinetic energy of the turbulence – up to the large scale of the whole earth, or even of a synoptic-scale event. However, numerical analysis has been used a lot to test out ideas that are hard to test in laboratory experiments, and it can give a lot of insight into parts of the problem.

In the meantime, experiments, dimensional analysis and intuition have provided a lot of very useful tools for modeling real climate problems.


In Ensemble Forecasting I wrote a short section on parameterization using the example of latent heat transfer and said:

Are we sure that over Connecticut the parameter CDE = 0.004, or should it be 0.0035? In fact, parameters like this are usually calculated from the average of a number of experiments. They conceal as much as they reveal. The correct value probably depends on other parameters. In so far as it represents a real physical property it will vary depending on the time of day, seasons and other factors. It might even be, “on average”, wrong. Because “on average” over the set of experiments was an imperfect sample. And “on average” over all climate conditions is a different sample.

Interestingly, a new paper has just shown up in JGR (“accepted for publication” and on their website in the pre-publishing format): Seasonal changes in physical processes controlling evaporation over an inland water, Qianyu Zhang & Heping Liu.

They carried out detailed measurements over a large reservoir (134 km² and 4-8m deep) in Mississippi for the winter and summer months of 2008. What were they trying to do?

Understanding physical processes that control turbulent fluxes of energy, heat, water vapor, and trace gases over inland water surfaces is critical in quantifying their influences on local, regional, and global climate. Since direct measurements of turbulent fluxes of sensible heat (H) and latent heat (LE) over inland waters with eddy covariance systems are still rare, process-based understanding of water-atmosphere interactions remains very limited..

..Many numerical weather prediction and climate models use the bulk transfer relations to estimate H and LE over water surfaces. Given substantial biases in modeling results against observations, process-based analysis and model validations are essential in improving parameterizations of water-atmosphere exchange processes..

Before we get into their paper, here is a relevant quote on parameterization from a different discipline. This is from Turbulent dispersion in the ocean, Garrett (2006):

Including the effects of processes that are unresolved in models is one of the central problems in oceanography.

In particular, for temperature, salinity, or some other scalar, one seeks to parameterize the eddy flux in terms of quantities that are resolved by the models. This has been much discussed, with determinations of the correct parameterization relying on a combination of deductions from the large-scale models, observations of the eddy fluxes or associated quantities, and the development of an understanding of the processes responsible for the fluxes.

The key remark to make is that it is only through process studies that we can reach an understanding leading to formulae that are valid in changing conditions, rather than just having numerical values which may only be valid in present conditions.

[Emphasis added]

Background

Latent heat transfer is the primary mechanism globally for transferring the solar radiation that is absorbed at the surface up into the atmosphere. Sensible heat is a lot smaller by comparison with latent heat. Both are “convection” in a broad term – the movement of heat by the bulk movement of air. But one is carrying the “extra heat” of evaporated water. When the evaporated water condenses (usually higher up in the atmosphere) it releases this stored heat.

Let’s take a look at the standard parameterization in use (adopting their notation) for latent heat:

LE = ρ_a L C_E U (q_w − q_a)

LE = latent heat transfer, ρ_a = air density, L = latent heat of vaporization (2.5×10⁶ J kg⁻¹), C_E = bulk transfer coefficient for moisture, U = wind speed, q_w & q_a are the respective specific humidities at the water-atmosphere interface and in the over-water atmosphere
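As a minimal sketch of how this bulk formula is used (the transfer coefficient, wind speed and humidities below are assumed illustrative values, not the paper’s measurements):

rho_a = 1.2       # air density, kg/m^3
L = 2.5e6         # latent heat of vaporization, J/kg
C_E = 1.2e-3      # bulk transfer coefficient for moisture (assumed)
U = 5.0           # wind speed at the reference height, m/s
q_w = 0.020       # saturation specific humidity at the water surface, kg/kg
q_a = 0.012       # specific humidity of the over-water air, kg/kg

LE = rho_a * L * C_E * U * (q_w - q_a)
print(f"LE = {LE:.0f} W/m^2")    # ~144 W/m^2 with these assumed values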

The values ρ_a and L are fundamental values. The formula says that the key parameters are:

  • wind speed (horizontal)
  • the difference between the humidity at the water surface (this is the saturated value which varies strongly with temperature) and the humidity in the air above

We would expect the differential of humidity to be important – if the air above is saturated then latent heat transfer will be zero, because there is no way to move any more water vapor into the air above. At the other extreme, if the air above is completely dry then we have maximized the potential for moving water vapor into the atmosphere.

The product of wind speed and humidity difference indicates how much mixing is going on due to air flow. There is a lot of theory and experiment behind the ideas, going back into the 1950s or further, but in the end it is an over-simplification.

That’s what all parameterizations are – over-simplifications.

The real formula is much simpler:

LE = ρ_a L<w’q’>, where the brackets denote averages, <w’q’> = the turbulent moisture flux

w is the upwards velocity, q is moisture (specific humidity), and the ’ denotes the turbulent (eddy) component

Note to commenters, if you write < or > in the comment it gets dropped because WordPress treats it like a html tag. You need to write &lt; or &gt;

The key part of this equation just says “how much moisture is being carried upwards by turbulent flow”. That’s the real value so why don’t we measure that instead?
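This is essentially what the eddy covariance system described below computes. Here is a sketch of the mechanics of the calculation on synthetic 10 Hz data – the numbers are made up (with a prescribed correlation between w and q), purely to show how the covariance turns into a flux:

import numpy as np

rng = np.random.default_rng(0)
n = 10 * 60 * 30                        # 30 minutes of 10 Hz samples
w_fluct = rng.normal(0.0, 0.3, n)       # vertical velocity fluctuations, m/s
# moisture fluctuations partly correlated with w (upward eddies are moister)
q_fluct = 2e-4 * w_fluct + rng.normal(0.0, 1e-4, n)    # kg/kg

w = 0.0 + w_fluct                       # mean vertical wind ~ 0
q = 0.012 + q_fluct                     # mean specific humidity, kg/kg

rho_a, L = 1.2, 2.5e6
wq_eddy = np.mean((w - w.mean()) * (q - q.mean()))     # the covariance <w'q'>
LE = rho_a * L * wq_eddy
print(f"<w'q'> = {wq_eddy:.2e} m/s, LE = {LE:.0f} W/m^2")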

Here’s a graph of horizontal wind over a short time period from Stull (1988):

From Stull 1988

Figure 1

At any given location the wind varies across every timescale. Pick another location and the results are different. This is the problem of turbulence.

And to get accurate measurements for the paper we are looking at now, they had quite a setup:

Zhang 2014-instruments

Figure 2

Here’s the description of the instrumentation:

An eddy covariance system at a height of 4 m above the water surface consisted of a three-dimensional sonic anemometer (model CSAT3, Campbell Scientific, Inc.) and an open path CO2/H2O infrared gas analyzer (IRGA; Model LI-7500, LI-COR, Inc.).

A datalogger (model CR5000, Campbell Scientific, Inc.) recorded three-dimensional wind velocity components and sonic virtual temperature from the sonic anemometer and densities of carbon dioxide and water vapor from the IRGA at a frequency of 10 Hz.

Other microclimate variables were also measured, including Rn at 1.2 m (model Q-7.1, Radiation and Energy Balance Systems, Campbell Scientific, Inc.), air temperature (Ta) and relative humidity (RH) (model HMP45C, Vaisala, Inc.) at approximately 1.9, 3.0, 4.0, and 5.5 m, wind speeds (U) and wind direction (WD) (model 03001, RM Young, Inc.) at 5.5 m.

An infrared temperature sensor (model IRR-P, Apogee, Inc.) was deployed to measure water skin temperature (Tw).

Vapor pressure (ew) in the water-air interface was equivalent to saturation vapor pressure at Tw [Buck, 1981].

The same datalogger recorded signals from all the above microclimate sensors at 30-min intervals. Six deep cycling marine batteries charged by two solar panels (model SP65, 65 Watt Solar Panel, Campbell Scientific, Inc.) powered all instruments. A monthly visit to the tower was scheduled to provide maintenance and download the 10-Hz time-series data.

I don’t know the price tag but I don’t think the equipment is cheap. So this kind of setup can be used for research, but we can’t put one every 1 km across a country or an ocean and collect continuous data.

That’s why we need parameterizations if we want to get some climatological data. Of course, these need verifying, and that is what this paper (and many others like it) is about.

Results

When we look back at the parameterized equation for latent heat it’s clear that latent heat should vary linearly with the product of wind speed and humidity differential. The top graph is sensible heat, which we won’t focus on; the bottom graph is latent heat. Δe is the humidity difference, expressed as a vapor pressure (kPa) rather than as g/kg. We see that the correlation between LE and wind speed x humidity differential is very different in summer and winter:

From Zhang & Liu 2014

Figure 3

The scatterplots showing the same information:

From Zhang & Liu 2014

Figure 4

The authors looked at the diurnal cycle – averaging the result for the time of day over the period of the results, separated into winter and summer.

Our results also suggest that the influences of U on LE may not be captured simply by the product of U and Δe [humidity differential] on short timescales, especially in summer. This situation became more serious when the ASL (atmospheric surface layer, see note 1) became more unstable, as reflected by our summer cases (i.e., more unstable) versus the winter cases.

They selected one period to review in detail. First the winter results:

From Zhang & Liu 2014

Figure 5

On March 18, Δe was small (i.e., 0 ~ 0.2 kPa) and it experienced little diurnal variations, leading to limited water vapor supply (Fig. 5a).

The ASL (see note 1) during this period was slightly stable (Fig. 5b), which suppressed turbulent exchange of LE. As a result, LE approached zero and even became negative, though strong wind speeds of approximately around 10 m s⁻¹ were present, indicating a strong mechanical turbulent mixing in the ASL.

On March 19, with an increased Δe up to approximately 1.0 kPa, LE closely followed Δe and increased from zero to more than 200 W m⁻². Meanwhile, the ASL experienced a transition from stable to unstable conditions (Fig. 5b), coinciding with an increase in LE.

On March 20, however, the continuous increase of Δe did not lead to an increase in LE. Instead, LE decreased gradually from 200 W m⁻² to about zero, which was closely associated with the steady decrease in U from 10 m s⁻¹ to nearly zero and with the decreased instability.

These results suggest that LE was strongly limited by Δe, instead of U when Δe was low; and LE was jointly regulated by variations in Δe and U once a moderate Δe level was reached and maintained, indicating a nonlinear response of LE to U and Δe induced by ASL stability. The ASL stability largely contributed to variations in LE in winter.

Then the summer results:

From Zhang & Liu 2014

Figure 6

In summer (i.e., July 23 – 25 in Fig. 6), Δe was large with a magnitude of 1.5 ~ 3.0 kPa, providing adequate water vapor supply for evaporation, and had strong diurnal variations (Fig. 6a).

U exhibited diurnal variations from about 0 to 8 m s⁻¹. LE was regulated by both Δe and U, as reflected by the fact that LE variations on the July 24 afternoon did not follow solely either the variations of U or the variations of Δe. When the diurnal variations of Δe and U were small in July 25, LE was also regulated by both U and Δe or largely by U when the change in U was apparent.

Note that during this period, the ASL was strongly unstable in the morning and weakly unstable in the afternoon and evening (Fig. 6b), negatively corresponding to diurnal variations in LE. This result indicates that the ASL stability had minor impacts on diurnal variations in LE during this period.

Another way to see the data is by plotting the results to see how valid the parameterized equation appears. Here we should have a straight line between LE/U and Δe as the caption explains:

From Zhang & Liu 2014

Figure 7

One method to determine the bulk transfer coefficients is to use the mass transfer relations (Eqs. 1, 2) by quantifying the slopes of the linear regression of LE against UΔe. Our results suggest that using this approach to determine the bulk transfer coefficient may cause large bias, given the fact that one UΔe value may correspond to largely different LE values.

They conclude:

Our results suggest that these highly nonlinear responses of LE to environmental variables may not be represented in the bulk transfer relations in an appropriate manner, which requires further studies and discussion.

Conclusion

Parameterizations are inevitable. Understanding their limitations is very difficult. A series of studies might indicate that there is a “linear” relationship with some scatter, but that might just be disguising or ignoring a variable that never appears in the parameterization.

As Garrett commented “..having numerical values which may only be valid in present conditions”. That is, if the mean state of another climate variable shifts the parameterization will be invalid, or less accurate.

Alternatively, given the non-linear nature of climate processes, changes don’t “average out”. So even if the mean state of another climate variable does not shift, its variation with time or with another variable may introduce a change in the real process that results in an overall shift in climate.

There are other problems with calculating latent heat transfer. Even accepting the parameterization as the best version of “the truth”, there are large observational gaps in the parameters we need to measure (wind speed and humidity above the ocean), even at the resolution of current climate models. This is one reason why there is a need for reanalysis products.

I found it interesting to see how complicated latent heat variations were over a water surface.

References

Seasonal changes in physical processes controlling evaporation over an inland water, Qianyu Zhang & Heping Liu, JGR (2014)

Turbulent dispersion in the ocean, Chris Garrett, Progress in Oceanography (2006)

Notes

Note 1:  The ASL (atmospheric surface layer) stability is described by the Obukhov stability parameter:

ζ = z/L_0

where z is the height above ground level and L_0 is the Obukhov length.

L_0 = −θ_v u*³ / [k g (w'θ_v')_s]

where θ_v is the virtual potential temperature (K), u* is the friction velocity from the eddy covariance system (m s⁻¹), k is the von Karman constant (0.4), g is the acceleration due to gravity (9.8 m s⁻²), w is the vertical velocity (m s⁻¹), and (w'θ_v')_s is the surface flux of virtual potential temperature from the eddy covariance system
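A minimal sketch of this calculation, with assumed illustrative values for the surface fluxes (not the paper’s data):

theta_v = 300.0       # virtual potential temperature, K
u_star = 0.25         # friction velocity, m/s
k = 0.4               # von Karman constant
g = 9.8               # m/s^2
w_theta_v_s = 0.05    # surface flux of virtual potential temperature, K m/s

L0 = -theta_v * u_star**3 / (k * g * w_theta_v_s)   # Obukhov length, m
z = 4.0                                             # measurement height, m
zeta = z / L0                                       # stability parameter

print(f"L0 = {L0:.1f} m, zeta = {zeta:.2f}")        # zeta < 0 here: an unstable ASL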


In Part One we had a look at some introductory ideas. In this article we will look at one of the ground-breaking papers in chaos theory – Deterministic nonperiodic flow, Edward Lorenz (1963). It has been cited more than 13,500 times.

There might be some introductory books on non-linear dynamics and chaos that don’t include a discussion of this paper – or at least a mention – but they will be in a small minority.

Lorenz was thinking about convection in the atmosphere, or any fluid heated from below, and reduced the problem to just three simple equations. However, the equations were still non-linear and because of this they exhibit chaotic behavior.

Cencini et al describe Lorenz’s problem:

Consider a fluid, initially at rest, constrained by two infinite horizontal plates maintained at constant temperature and at a fixed distance from each other. Gravity acts on the system perpendicular to the plates. If the upper plate is maintained hotter than the lower one, the fluid remains at rest and in a state of conduction, i.e., a linear temperature gradient establishes between the two plates.

If the temperatures are inverted, gravity induced buoyancy forces tend to rise toward the top the hotter, and thus lighter fluid, that is at the bottom. This tendency is contrasted by viscous and dissipative forces of the fluid so that the conduction state may persist.

However, as the temperature differential exceeds a certain amount, the conduction state is replaced by a steady convection state: the fluid motion consists of steady counter-rotating vortices (rolls) which transport upwards the hot/light fluid in contact with the bottom plate and downwards the cold heavy fluid in contact with the upper one.

The steady convection state remains stable up to another critical temperature difference above which it becomes unsteady, very irregular and hardly predictable.

Willem Malkus and Lou Howard of MIT came up with an equivalent system – the simplest version is shown in this video:

Figure 1

Steven Strogatz (1994) – an excellent introduction to dynamic and chaotic systems – explains and derives the equivalence between the classic Lorenz equations and this tilted waterwheel.

L63 (as I’ll call these equations) has three variables apart from time: intensity of convection (x), temperature difference between ascending and descending currents (y), deviation of temperature from a linear profile (z).

Here are some calculated results for L63  for the “classic” parameter values and three very slightly different initial conditions (blue, red, green in each plot) over 5,000 seconds, showing the start and end 50 seconds – click to expand:

Lorenz63-5ksecs-x-y-vs-time-499px

Figure 2 – click to expand – initial conditions x,y,z = 0, 1, 0;  0, 1.001, 0;  0, 1.002, 0

We can see that quite early on the trajectories diverge, and 5,000 seconds later the system still exhibits similar “non-periodic” characteristics.

For interest let’s zoom in on just over 10 seconds of ‘x’ near the start and end:

Lorenz63-5ksecs-x-vs-time-zoom-499px

Figure 3

Going back to an important point from the first post, some chaotic systems will have predictable statistics even if the actual state at any future time is impossible to determine (due to uncertainty over the initial conditions).

So we’ll take a look at the statistics via a running average – click to expand:

Lorenz63-5ksecs-x-y-vs-time-average-499px

Figure 4 – click to expand

Two things stand out – first of all the running average over more than 100 “oscillations” still shows a large amount of variability. So at any one time, if we were to calculate the average from our current and historical experience we could easily end up calculating a value that was far from the “long term average”. Second – the “short term” average, if we can call it that, shows large variation at any given time between our slightly divergent initial conditions.

So we might believe – and be correct – that the long term statistics of slightly different initial conditions are identical, yet be fooled in practice.

Of course, surely it sorts itself out over a longer time scale?

I ran the same simulation (with just the first two starting conditions) for 25,000 seconds and then used a filter window of 1,000 seconds – click to expand:

Lorenz63-25ksecs-x-time-1000s-average-499px

 Figure 5 – click to expand

The total variability is less, but we have a similar problem – it’s just lower in magnitude. Again we see that the statistics of two slightly different initial conditions – if we were to view them by the running average at any one time –  are likely to be different even over this much longer time frame.

From this 25,000 second simulation:

  • take 10,000 random samples each of 25 second length and plot a histogram of the means of each sample (the sample means)
  • same again for 100 seconds
  • same again for 500 seconds
  • same again for 3,000 seconds

Repeat for the data from the other initial condition.
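For reference, here is a minimal sketch of that sampling procedure, assuming the simulation output is already available as an array x with the 0.01 second time step mentioned in note 2:

import numpy as np

def sample_means(x, dt, window_seconds, n_samples=10_000, seed=0):
    # draw n_samples random windows from the series and return their means
    rng = np.random.default_rng(seed)
    window = int(window_seconds / dt)
    starts = rng.integers(0, len(x) - window, n_samples)
    return np.array([x[s:s + window].mean() for s in starts])

# e.g. histograms of the sample means for the four window lengths:
# for w in (25, 100, 500, 3000):
#     means = sample_means(x, dt=0.01, window_seconds=w)
#     ... plot a histogram of `means` ...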

Here is the result:

Lorenz-25000s-histogram-of-means-2-conditions

Figure 6

To make it easier to see, here is the difference between the two sets of histograms, normalized by the maximum value in each set:

Lorenz-25000s-delta-histogram

Figure 7

This is a different way of viewing what we saw in figures 4 & 5.

The spread of sample means shrinks as we increase the time period but the difference between the two data sets doesn’t seem to disappear (note 2).

Attractors and Phase Space

The above plots show how variables change with time. There’s another way to view the evolution of system dynamics and that is by “phase space”. It’s a name for a different kind of plot.

So instead of plotting x vs time, y vs time and z vs time – let’s plot x vs y vs z – click to expand:

Lorenz63-first-50s-x-y-z-499px

Figure 8 – Click to expand – the colors blue, red & green represent the same initial conditions as in figure 2

Without some dynamic animation we can’t now tell how fast the system evolves. But we learn something else that turns out to be quite amazing. The system always ends up tracing out the same region of phase space. Perhaps that doesn’t seem amazing yet..

Figure 8 was with three initial conditions that are almost identical. Let’s look at three initial conditions that are very different: x,y,z = 0, 1, 0;   5, 5, 5;   20, 8, 1:

Lorenz63-first-50s-x-y-z-3-different-conditions-499px

Figure 9 - Click to expand

Here’s an example (similar to figure 8) from Strogatz – a set of 10,000 closely separated initial conditions and how they separate at 3, 6, 9 and 15 seconds. The two key points:

  1. the fast separation of initial conditions
  2. the long term position of any of the initial conditions is still on the “attractor”

From Strogatz 1994

Figure 10

A dynamic visualization on Youtube with 500,000 initial conditions:

Figure 11

There’s a lot of theory around all of this, as you might expect. But in brief, in a “dissipative system” the “phase volume” contracts exponentially to zero. Yet for the Lorenz system somehow it doesn’t quite manage that. Instead, there is an infinite number of nested 2-d surfaces. Or something. For the sake of a not overly complex discussion, a wide range of initial conditions ends up on something very close to a 2-d surface.

This is known as a strange attractor. And the Lorenz strange attractor looks like a butterfly.

Conclusion

Lorenz 1963 reduced convective flow (e.g., heating an atmosphere from the bottom) to a simple set of equations. Obviously these equations are a massively over-simplified version of anything like the real atmosphere. Yet, even with this very simple set of equations we find chaotic behavior.

Chaotic behavior in this example means:

  • very small differences get amplified extremely quickly so that no matter how much you increase your knowledge of your starting conditions it doesn’t help much (note 3)
  • starting conditions within certain boundaries will always end up within “attractor” boundaries, even though there might be non-periodic oscillations around this attractor
  • the long term (infinite) statistics can be deterministic but over any “smaller” time period the statistics can be highly variable

References

Deterministic nonperiodic flow, EN Lorenz, Journal of the Atmospheric Sciences (1963)

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Nonlinear Dynamics and Chaos, Steven H. Strogatz, Perseus Books (1994)

Notes

Note 1: The Lorenz equations:

dx/dt = σ (y-x)

dy/dt = rx – y – xz

dz/dt = xy – bz

where

x = intensity of convection

y = temperature difference between ascending and descending currents

z = deviation of temperature from a linear profile

σ = Prandtl number, ratio of momentum diffusivity to thermal diffusivity

r = Rayleigh number

b = “another parameter”

And the “classic parameters” are σ=10, b = 8/3, r = 28
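For anyone who wants to reproduce plots like figure 2, here is a minimal sketch of integrating these equations with the classic parameters and one of the initial conditions used above. (The original results in this post were computed in Matlab; this Python version is an equivalent, not the author’s code.)

import numpy as np
from scipy.integrate import solve_ivp

sigma, r, b = 10.0, 28.0, 8.0 / 3.0     # the classic parameters

def lorenz(t, state):
    x, y, z = state
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

t_end = 50.0                             # the first 50 seconds shown in figure 2
t_eval = np.arange(0.0, t_end, 0.01)
sol = solve_ivp(lorenz, (0.0, t_end), [0.0, 1.0, 0.0],
                t_eval=t_eval, rtol=1e-8, atol=1e-10)

x, y, z = sol.y                          # time series for plotting or statistics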

Note 2: Lorenz 1963 has over 13,000 citations so I haven’t been able to find out if this system of equations is transitive or intransitive. Running Matlab on a home Mac reaches some limitations and I maxed out at 25,000 second simulations mapped onto a 0.01 second time step.

However, I’m not trying to prove anything specifically about the Lorenz 1963 equations, more illustrating some important characteristics of chaotic systems

Note 3: Small differences in initial conditions grow exponentially, until we reach the limits of the attractor. So it’s easy to show the “benefit” of more accurate data on initial conditions.

If we increase our precision on initial conditions by 1,000,000 times the increase in prediction time is a massive 2½ times longer.
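To see where a number like that comes from, assume (a simplification, not taken from Lorenz’s paper) that a small initial error δ_0 grows exponentially with some Lyapunov exponent λ until it reaches an acceptable tolerance Δ. Then the prediction horizon is roughly:

t_p \approx \frac{1}{\lambda} \ln\left(\frac{\Delta}{\delta_0}\right)

Improving the precision of the initial conditions by a factor of 1,000,000 only adds \frac{1}{\lambda} \ln(10^6) \approx \frac{14}{\lambda} to t_p. Whether that works out to “2½ times longer” depends on the values assumed for λ, δ_0 and Δ, but the logarithmic dependence is the point – a million-fold improvement in the data buys only a modest extension of the forecast.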


There are many classes of systems but in the climate blogosphere world two ideas about climate seem to be repeated the most.

In camp A:

We can’t forecast the weather two weeks ahead so what chance have we got of forecasting climate 100 years from now.

And in camp B:

Weather is an initial value problem, whereas climate is a boundary value problem. On the timescale of decades, every planetary object has a mean temperature mainly given by the power of its star according to Stefan-Boltzmann’s law combined with the greenhouse effect. If the sources and sinks of CO2 were chaotic and could quickly release and sequester large fractions of gas perhaps the climate could be chaotic. Weather is chaotic, climate is not.

Of course, like any complex debate, simplified statements don’t really help. So this article kicks off with some introductory basics.

Many inhabitants of the climate blogosphere already know the answer to this subject and with much conviction. A reminder for new readers that on this blog opinions are not so interesting, although occasionally entertaining. So instead, try to explain what evidence is there for your opinion. And, as suggested in About this Blog:

And sometimes others put forward points of view or “facts” that are obviously wrong and easily refuted.  Pretend for a moment that they aren’t part of an evil empire of disinformation and think how best to explain the error in an inoffensive way.

Pendulums

The equation for a simple pendulum is “non-linear”, although there is a simplified version of the equation, often used in introductions, which is linear. However, the number of variables involved is only two:

  • angle
  • speed

and this isn’t enough to create a “chaotic” system.

If we have a double pendulum, one pendulum attached at the bottom of another pendulum, we do get a chaotic system. There are some nice visual simulations around, which St. Google might help interested readers find.

If we have a forced damped pendulum like this one:

Pendulum-forced

Figure 1 – the blue arrows indicate that the point O is being driven up and down by an external force

– we also get a chaotic system.

What am I talking about? What is linear & non-linear? What is a “chaotic system”?

Digression on Non-Linearity for Non-Technical People

Common experience teaches us about linearity. If I pick up an apple in the supermarket it weighs about 0.15 kg or 150 grams (also known in some countries as “about 5 ounces”). If I take 10 apples the collection weighs 1.5 kg. That’s pretty simple stuff. Most of our real world experience follows this linearity and so we expect it.

On the other hand, if I was near a very cold black surface held at 170K (-103ºC) and measured the radiation emitted it would be 47 W/m². Then we double the temperature of this surface to 340K (67ºC) what would I measure? 94 W/m²? Seems reasonable – double the absolute temperature and get double the radiation.. But it’s not correct.

The right answer is 758 W/m², which is 16x the amount. Surprising, but most actual physics, engineering and chemistry is like this. Double a quantity and you don’t get double the result.
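A quick check of those numbers, using the Stefan-Boltzmann law for a blackbody (a sketch, with the constant hard-coded):

sigma = 5.67e-8                 # Stefan-Boltzmann constant, W/(m^2 K^4)
for T in (170.0, 340.0):
    print(T, sigma * T**4)      # ~47 W/m^2 and ~758 W/m^2 – a factor of 2^4 = 16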

It gets more confusing when we consider the interaction of other variables.

Let’s take riding a bike [updated thanks to Pekka]. Once you get above a certain speed most of the resistance comes from the wind so we will focus on that. Typically the wind resistance increases as the square of the speed. So if you double your speed you get four times the wind resistance. Work done = force x distance moved, so with no head wind power input has to go up as the cube of speed (note 4). This means you have to put in 8x the effort to get 2x the speed.

On Sunday you go for a ride and the wind speed is zero. You get to 25 km/hr (16 miles/hr) by putting a bit of effort in – let’s say you are producing 150W of power (I have no idea what the right amount is). You want your new speedo to register 50 km/hr – so you have to produce 1,200W.

On Monday you go for a ride and the wind speed is 20 km/hr into your face. Probably should have taken the day off.. Now with 150W you get to only 14 km/hr, it takes almost 500W to get to your basic 25 km/hr, and to get to 50 km/hr it takes almost 2,400W. No chance of getting to that speed!

On Tuesday you go for a ride and the wind speed is the same so you go in the opposite direction and take the train home. Now with only 6W you get to go 25 km/hr, to get to 50km/hr you only need to pump out 430W.

In mathematical terms it’s quite simple: F = k(v-w)², Force = (a constant, k) x (road speed – wind speed) squared, with w positive for a tailwind and negative for a headwind. Power, P = Fv = kv(v-w)². But notice that the effect of the “other variable”, the wind speed, has really complicated things.

To double your speed on the first day you had to produce eight times the power. To double your speed the second day you had to produce almost five times the power. To double your speed the third day you had to produce just over 70 times the power. All with the same physics.
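Here is a sketch of those three days, with k calibrated from the assumption in the text of 150 W at 25 km/hr in still air (speeds are kept in km/hr, so k simply absorbs the units):

v0, P0 = 25.0, 150.0
k = P0 / (v0 * (v0 - 0.0)**2)            # ~0.0096 in these units

def power(v, w):
    # power needed at road speed v with wind speed w (+ = tailwind, - = headwind)
    return k * v * (v - w)**2

for day, w in (("Sunday (no wind)", 0.0),
               ("Monday (20 km/hr headwind)", -20.0),
               ("Tuesday (20 km/hr tailwind)", +20.0)):
    print(day, f"- 25 km/hr: {power(25, w):.0f} W, 50 km/hr: {power(50, w):.0f} W")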

The real problem with nonlinearity isn’t the problem of keeping track of these kind of numbers. You get used to the fact that real science – real world relationships – has these kind of factors and you come to expect them. And you have an equation that makes calculating them easy. And you have computers to do the work.

No, the real problem with non-linearity (the real world) is that many of these equations link together and solving them is very difficult and often only possible using “numerical methods”.

It is also the reason why something like climate feedback is very difficult to measure. Imagine measuring the change in power required to double speed on the Monday. It’s almost 5x, so you might think the relationship is something like the square of speed. On Tuesday it’s about 70 times, so you would come up with a completely different relationship. In this simple case we know that wind speed is a factor, we can measure it, and so we can “factor it out” when we do the calculation. But in a more complicated system, if you don’t know the “confounding variables”, or the relationships, what are you measuring? We will return to this question later.

When you start out doing maths, physics, engineering.. you do “linear equations”. These teach you how to use the tools of the trade. You solve equations. You rearrange relationships using equations and mathematical tricks, and these rearranged equations give you insight into how things work. It’s amazing. But then you move to “nonlinear” equations, aka the real world, which turns out to be mostly insoluble. So nonlinear isn’t something special, it’s normal. Linear is special. You don’t usually get it.

..End of digression

Back to Pendulums

Let’s take a closer look at a forced damped pendulum. Damped, in physics terms, just means there is something opposing the movement. We have friction from the air and so over time the pendulum slows down and stops. That’s pretty simple. And not chaotic. And not interesting.

So we need something to keep it moving. We drive the pivot point at the top up and down and now we have a forced damped pendulum. The equation that results (note 1) has the massive number of three variables – position, speed and now time to keep track of the driving up and down of the pivot point. Three variables seems to be the minimum to create a chaotic system (note 2).

As we increase the ratio of the forcing amplitude to the length of the pendulum (β in note 1) we can move through three distinct types of response:

  • simple response
  • a “chaotic start” followed by a deterministic oscillation
  • a chaotic system

This is typical of chaotic systems – certain parameter values or combinations of parameters can move the system between quite different states.

Here is a plot (note 3) of position vs time for the chaotic system, β=0.7, with two initial conditions, only different from each other by 0.1%:

Forced damped harmonic pendulum, β=0.7: start angular speed 0.1; 0.1001

Figure 2

It’s a little misleading to view the angle like this because it is in radians and so needs to be mapped between 0-2π (but then we get a discontinuity on a graph that doesn’t match the real world). We can map the graph onto a cylinder plot but it’s a mess of reds and blues.

Another way of looking at the data is via the statistics – so here is a histogram of the position (θ), mapped to 0-2π, and angular speed (dθ/dt) for the two starting conditions over the first 10,000 seconds:

Histograms for 10,000 seconds

Figure 3

We can see they are similar but not identical (note the different scales on the y-axis).

That might be due to the shortness of the run, so here are the results over 100,000 seconds:

Histograms for 100,000 seconds

Figure 4

As we increase the timespan of the simulation the statistics of two slightly different initial conditions become more alike.

So if we want to know the state of a chaotic system at some point in the future, very small changes in the initial conditions will amplify over time, making the result unknowable – or no different from picking the state from a random time in the future. But if we look at the statistics of the results we might find that they are very predictable. This is typical of many (but not all) chaotic systems.

Orbits of the Planets

The orbits of the planets in the solar system are chaotic. In fact, even 3-body systems moving under gravitational attraction have chaotic behavior. So how did we land a man on the moon? This raises the interesting questions of timescales and amount of variation. Planetary movement – for our purposes – is extremely predictable over a few million years. But over 10s of millions of years we might have trouble predicting exactly the shape of the earth’s orbit – eccentricity, time of closest approach to the sun, obliquity.

However, it seems that even over a much longer time period the planets will still continue in their orbits – they won’t crash into the sun or escape the solar system. So here we see another important aspect of some chaotic systems – the “chaotic region” can be quite restricted. So chaos doesn’t mean unbounded.

According to Cencini, Cecconi & Vulpiani (2010):

Therefore, in principle, the Solar system can be chaotic, but not necessarily this implies events such as collisions or escaping planets..

However, there is evidence that the Solar system is “astronomically” stable, in the sense that the 8 largest planets seem to remain bound to the Sun in low eccentricity and low inclination orbits for time of the order of a billion years. In this respect, chaos mostly manifest in the irregular behavior of the eccentricity and inclination of the less massive planets, Mercury and Mars. Such variations are not large enough to provoke catastrophic events before extremely large time. For instance, recent numerical investigations show that for catastrophic events, such as “collisions” between Mercury and Venus or Mercury failure into the Sun, we should wait at least a billion years.

And bad luck, Pluto.

Deterministic, non-Chaotic, Systems with Uncertainty

Just to round out the picture a little, even if a system is not chaotic and is deterministic we might lack sufficient knowledge to be able to make useful predictions. If you take a look at figure 3 in Ensemble Forecasting you can see that with some uncertainty of the initial velocity and a key parameter the resulting velocity of an extremely simple system has quite a large uncertainty associated with it.

This case is qualitatively different of course. By obtaining more accurate values of the starting conditions and the key parameters we can reduce our uncertainty. Small disturbances don’t grow over time to the point where our calculation of a future condition might as well just be selected from a random time in the future.

Transitive, Intransitive and “Almost Intransitive” Systems

Many chaotic systems have deterministic statistics. So we don’t know the future state beyond a certain time. But we do know that a particular position, or other “state” of the system, will lie within a given range for x% of the time, taken over a “long enough” timescale. These are transitive systems.

Other chaotic systems can be intransitive. That is, for a very slight change in initial conditions we can have a different set of long term statistics. So the system has no “statistical” predictability. Lorenz 1968 gives a good example.

Lorenz introduces the concept of almost intransitive systems. This is where, strictly speaking, the statistics over infinite time are independent of the initial conditions, but the statistics over “long time periods” are dependent on the initial conditions. And so he also looks at the interesting case (Lorenz 1990) of moving between states of the system (seasons), where we can think of the precise starting conditions each time we move into a new season moving us into a different set of long term statistics. I find it hard to explain this clearly in one paragraph, but Lorenz’s papers are very readable.

Conclusion?

This is just a brief look at some of the basic ideas.

Other Articles in the Series

Part Two – Lorenz 1963

References

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Climatic Determinism, Edward Lorenz (1968) – free paper

Can chaos and intransitivity lead to interannual variability?, Edward Lorenz, Tellus (1990) – free paper

Notes

Note 1 – The equation is easiest to “manage” after the original parameters are transformed so that tω → t. That is, the period of the external driving becomes T_0 = 2π in the transformed time base.

Then:

Pendulum-forced-equation

where θ = angle, γ’ = γ/ω, α = g/(Lω²), β = h_0/L;

these parameters are based on γ = viscous drag coefficient, ω = angular frequency of the driving, g = acceleration due to gravity = 9.8 m/s², L = length of pendulum, h_0 = amplitude of the driving of the pivot point
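For readers who want to experiment, here is a sketch in Python. The equation integrated below, θ'' + γ’θ’ + (α + β cos t) sin θ = 0, is a standard form for a pendulum with a vertically driven pivot that is consistent with the parameters listed above – it is an assumption on my part, not copied from the (image-only) equation, and α and γ’ are given illustrative values:

import numpy as np
from scipy.integrate import solve_ivp

gamma_p, alpha, beta = 0.1, 1.0, 0.7      # beta = h0/L = 0.7, the chaotic case

def pendulum(t, state):
    theta, omega = state
    return [omega, -gamma_p * omega - (alpha + beta * np.cos(t)) * np.sin(theta)]

t_end = 10_000.0
t_eval = np.arange(0.0, t_end, 0.1)
runs = [solve_ivp(pendulum, (0.0, t_end), [0.0, w0], t_eval=t_eval)
        for w0 in (0.1, 0.1001)]          # the two starting angular speeds

# e.g. compare histograms of theta mod 2*pi and omega for the two runs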

Note 2 – This is true for continuous systems. Discrete systems can be chaotic with fewer variables

Note 3 – The results were calculated numerically using Matlab’s ODE (ordinary differential equation) solver, ode45.

Note 4 – Force = k(v-w)² where k is a constant, v = velocity, w = wind speed. Work done = Force x distance moved so Power, P = Force x velocity.

Therefore:

P = kv(v-w)²

If we know k, v & w we can find P. If we have P, k & w and want to find v it is a cubic equation that needs solving.


In Ensemble Forecasting we had a look at the principles behind “ensembles of initial conditions” and “ensembles of parameters” in forecasting weather. Climate models are a little different from weather forecasting models but use the same physics and the same framework.

A lot of people, including me, have questions about “tuning” climate models. While looking for what the latest IPCC report (AR5) had to say about ensembles of climate models I found a reference to Tuning the climate of a global model by Mauritsen et al (2012). Unless you work in the field of climate modeling you don’t know the magic behind the scenes. This free paper (note 1) gives some important insights and is very readable:

The need to tune models became apparent in the early days of coupled climate modeling, when the top of the atmosphere (TOA) radiative imbalance was so large that models would quickly drift away from the observed state. Initially, a practice to input or extract heat and freshwater from the model, by applying flux-corrections, was invented to address this problem. As models gradually improved to a point when flux-corrections were no longer necessary, this practice is now less accepted in the climate modeling community.

Instead, the radiation balance is controlled primarily by tuning cloud-related parameters at most climate modeling centers while others adjust the ocean surface albedo or scale the natural aerosol climatology to achieve radiation balance. Tuning cloud parameters partly masks the deficiencies in the simulated climate, as there is considerable uncertainty in the representation of cloud processes. But just like adding flux-corrections, adjusting cloud parameters involves a process of error compensation, as it is well appreciated that climate models poorly represent clouds and convective processes. Tuning aims at balancing the Earth’s energy budget by adjusting a deficient representation of clouds, without necessarily aiming at improving the latter.

A basic requirement of a climate model is reproducing the temperature change from pre-industrial times (mid 1800s) until today. So the focus is on temperature change, or in common terminology, anomalies.

It was interesting to see that if we plot the “actual modeled temperatures” from 1850 to present the picture doesn’t look so good (the grey curves are models from the coupled model inter-comparison projects: CMIP3 and CMIP5):

From Mauritsen et al 2012

Figure 1

The authors state:

There is considerable coherence between the model realizations and the observations; models are generally able to reproduce the observed 20th century warming of about 0.7 K..

Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. Although the inter-model span is only one percent relative to absolute zero, that argument fails to be reassuring. Relative to the 20th century warming the span is a factor four larger, while it is about the same as our best estimate of the climate response to a doubling of CO2, and about half the difference between the last glacial maximum and present.

They point out that adjusting parameters might just be offsetting one error against another..

In addition to targeting a TOA radiation balance and a global mean temperature, model tuning might strive to address additional objectives, such as a good representation of the atmospheric circulation, tropical variability or sea-ice seasonality. But in all these cases it is usually to be expected that improved performance arises not because uncertain or non-observable parameters match their intrinsic value – although this would clearly be desirable – rather that compensation among model errors is occurring. This raises the question as to whether tuning a model influences model-behavior, and places the burden on the model developers to articulate their tuning goals, as including quantities in model evaluation that were targeted by tuning is of little value. Evaluating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models intrinsic qualities.

[Emphasis added]. And they give a bit more insight into the tuning process:

A few model properties can be tuned with a reasonable chain of understanding from model parameter to the impact on model representation, among them the global mean temperature. It is comprehendible that increasing the models low-level cloudiness, by for instance reducing the precipitation efficiency, will cause more reflection of the incoming sunlight, and thereby ultimately reduce the model’s surface temperature.

Likewise, we can slow down the Northern Hemisphere mid-latitude tropospheric jets by increasing orographic drag, and we can control the amount of sea ice by tinkering with the uncertain geometric factors of ice growth and melt. In a typical sequence, first we would try to correct Northern Hemisphere tropospheric wind and surface pressure biases by adjusting parameters related to the parameterized orographic gravity wave drag. Then, we tune the global mean temperature as described in Sections 2.1 and 2.3, and, after some time when the coupled model climate has come close to equilibrium, we will tune the Arctic sea ice volume (Section 2.4).

In many cases, however, we do not know how to tune a certain aspect of a model that we care about representing with fidelity, for example tropical variability, the Atlantic meridional overturning circulation strength, or sea surface temperature (SST) biases in specific regions. In these cases we would rather monitor these aspects and make decisions on the basis of a weak understanding of the relation between model formulation and model behavior.

Here we see how CMIP3 & 5 models “drift” – that is, over a long period of simulation time how the surface temperature varies with TOA flux imbalance (and also we see the cold bias of the models):

From Mauritsen et al 2012

Figure 2

If a model equilibrates at a positive radiation imbalance it indicates that it leaks energy, which appears to be the case in the majority of models, and if the equilibrium balance is negative it means that the model has artificial energy sources. We speculate that the fact that the bulk of models exhibit positive TOA radiation imbalances, and at the same time are cold-biased, is due to them having been tuned without account for energy leakage.

[Emphasis added].

From that graph they discuss the implied sensitivity to radiative forcing of each model (the slope of each model and how it compares with the blue and red “sensitivity” curves).
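As an aside, here is one way a "slope" can be turned into a sensitivity estimate. In a Gregory-style regression, the TOA imbalance N is regressed against the temperature change ΔT; the slope gives the feedback parameter and the point where the fitted line reaches N = 0 gives the equilibrium warming. The snippet below is a minimal sketch of that idea with invented numbers – it is not the calculation in Mauritsen et al:

```python
# Minimal sketch of a Gregory-style regression: fit TOA imbalance N against
# surface temperature change dT; the slope is (minus) the feedback parameter,
# the intercept is the effective forcing, and N = 0 gives the equilibrium warming.
# All numbers below are invented for illustration.
import numpy as np

dT = np.array([0.5, 1.0, 1.6, 2.1, 2.5, 2.8])   # K, global-mean warming
N = np.array([3.0, 2.4, 1.7, 1.1, 0.7, 0.4])    # W/m^2, TOA imbalance

slope, intercept = np.polyfit(dT, N, 1)          # N ~ intercept + slope*dT
feedback = -slope                                # lambda, in W/m^2 per K
forcing = intercept                              # effective forcing F, in W/m^2
equilibrium_warming = forcing / feedback         # dT where N crosses zero

print(f"lambda = {feedback:.2f} W/m^2/K, F = {forcing:.2f} W/m^2, "
      f"equilibrium warming = {equilibrium_warming:.1f} K")
```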

We get to see some of the parameters that are played around with (a-h in the figure):

From Mauritsen et al 2012

Figure 3

And how changing some of these parameters affects (over a short run) “headline” parameters like TOA imbalance and cloud cover:

From Mauritsen et al 2012

Figure 4

There’s also quite a bit in the paper about tuning the Arctic sea ice that will be of interest for Arctic sea ice enthusiasts.

In some of the final steps we get a great insight into how the whole machine goes through its final tune up:

..After these changes were introduced, the first parameter change was a reduction in two non-dimensional parameters controlling the strength of orographic wave drag from 0.7 to 0.5.

This greatly reduced the low zonal mean wind- and sea-level pressure biases in the Northern Hemisphere in atmosphere-only simulations, and further had a positive impact on the global to Arctic temperature gradient and made the distribution of Arctic sea-ice far more realistic when run in coupled mode.

In a second step the conversion rate of cloud water to rain in convective clouds was doubled from 1×10⁻⁴ s⁻¹ to 2×10⁻⁴ s⁻¹ in order to raise the OLR to be closer to the CERES satellite estimates.

At this point it was clear that the new coupled model was too warm compared to our target pre-industrial temperature. Different measures using the convection entrainment rates, convection overshooting fraction and the cloud homogeneity factors were tested to reduce the global mean temperature.

In the end, it was decided to use primarily an increased homogeneity factor for liquid clouds from 0.70 to 0.77 combined with a slight reduction of the convective overshooting fraction from 0.22 to 0.21, thereby making low-level clouds more reflective to reduce the surface temperature bias. Now the global mean temperature was sufficiently close to our target value and drift was very weak. At this point we decided to increase the Arctic sea ice volume from 18×1012 m3 to 22×1012 m3 by raising the cfreeze parameter from 1/2 to 2/3. ECHAM5/MPIOM had this parameter set to 4/5. These three final parameter settings were done while running the model in coupled mode.

Some of the paper's results (not shown here) are "parallel worlds" built with different parameter choices. In essence, while working through the model development phase the authors kept notes of what they did and what they changed, and at the end they went back and created some alternative versions of the model from their earlier choices. The parameter choices, along with a set of resulting climate properties, are shown in their Table 10.

Some summary statements:

Parameter tuning is the last step in the climate model development cycle, and invariably involves making sequences of choices that influence the behavior of the model. Some of the behavioral changes are desirable, and even targeted, but others may be a side effect of the tuning. The choices we make naturally depend on our preconceptions, preferences and objectives. We choose to tune our model because the alternatives – to either drift away from the known climate state, or to introduce flux-corrections – are less attractive. Within the foreseeable future climate model tuning will continue to be necessary as the prospects of constraining the relevant unresolved processes with sufficient precision are not good.

Climate model tuning has developed well beyond just controlling global mean temperature drift. Today, we tune several aspects of the models, including the extratropical wind- and pressure fields, sea-ice volume and to some extent cloud-field properties. By doing so we clearly run the risk of building the models’ performance upon compensating errors, and the practice of tuning is partly masking these structural errors. As one continues to evaluate the models, sooner or later these compensating errors will become apparent, but the errors may prove tedious to rectify without jeopardizing other aspects of the model that have been adjusted to them.

Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure. Most other observational datasets sooner or later meet the same destiny, at least beyond the first time they are applied for model evaluation. That is not to say that climate models can be readily adapted to fit any dataset, but once aware of the data we will compare with model output and invariably make decisions in the model development on the basis of the results. Rather, our confidence in the results provided by climate models is gained through the development of a fundamental physical understanding of the basic processes that create climate change. More than a century ago it was first realized that increasing the atmospheric CO2 concentration leads to surface warming, and today the underlying physics and feedback mechanisms are reasonably understood (while quantitative uncertainty in climate sensitivity is still large). Coupled climate models are just one of the tools applied in gaining this understanding..

[Emphasis added].

..In this paper we have attempted to illustrate the tuning process, as it is being done currently at our institute. Our hope is to thereby help de-mystify the practice, and to demonstrate what can and cannot be achieved. The impacts of the alternative tunings presented were smaller than we thought they would be in advance of this study, which in many ways is reassuring. We must emphasize that our paper presents only a small glimpse at the actual development and evaluation involved in preparing a comprehensive coupled climate model – a process that continues to evolve as new datasets emerge, model parameterizations improve, additional computational resources become available, as our interests, perceptions and objectives shift, and as we learn more about our model and the climate system itself.

Note 1: The link to the paper gives the HTML version. From there you can click the "Get pdf" link and it seems to come up fine – no paywall. If not, try the link to the draft paper (but the formatting makes it less readable).

Read Full Post »

Ensemble Forecasting

I’ve had questions about the use of ensembles of climate models for a while. I was helped by working through a bunch of papers which explain the origin of ensemble forecasting. I still have my questions but maybe this post will help to create some perspective.

The Stochastic Sixties

Lorenz encapsulated the problem in the mid-1960s like this:

The proposed procedure chooses a finite ensemble of initial states, rather than the single observed initial state. Each state within the ensemble resembles the observed state closely enough so that the difference might be ascribed to errors or inadequacies in observation. A system of dynamic equations previously deemed to be suitable for forecasting is then applied to each member of the ensemble, leading to an ensemble of states at any future time. From an ensemble of future states, the probability of occurrence of any event, or such statistics as the ensemble mean and ensemble standard deviation of any quantity, may be evaluated.

Between the near future, when all states within an ensemble will look about alike, and the very distant future, when two states within an ensemble will show no more resemblance than two atmospheric states chosen at random, it is hoped that there will be an extended range when most of the states in an ensemble, while not constituting good pin-point forecasts, will possess certain important features in common. It is for this extended range that the procedure may prove useful.

[Emphasis added].

Epstein picked up some of these ideas in two papers in 1969. Here’s an extract from The Role of Initial Uncertainties in Prediction.

While it has long been recognized that the initial atmospheric conditions upon which meteorological forecasts are based are subject to considerable error, little if any explicit use of this fact has been made.

Operational forecasting consists of applying deterministic hydrodynamic equations to a single “best” initial condition and producing a single forecast value for each parameter..

..One of the questions which has been entirely ignored by the forecasters.. is whether or not one gets the "best" forecast by applying the deterministic equations to the "best" values of the initial conditions and relevant parameters..

..one cannot know a uniquely valid starting point for each forecast. There is instead an ensemble of possible starting points, but the identification of the one and only one of these which represents the “true” atmosphere is not possible.

In essence, the realization that small errors can grow in a non-linear system like weather or climate leads us to ask how best to forecast the future. In this paper Epstein takes a look at a few interesting simple problems to illustrate the ensemble approach.

Let’s take a look at one very simple example – the slowing of a body due to friction.

Rate of change of velocity (dv/dt) is proportional to the velocity, v. The “proportional” term is k, which increases with more friction.

dv/dt = -kv, therefore v = v0·exp(-kt), where v0 = initial velocity

With a starting velocity of 10 m/s and k = 10⁻⁴ (in units of 1/s), how does velocity change with time?

Ensemble-velocity-mean

Figure 1 – note the logarithm of time on the time axis, time runs from 10 secs – 100,000 secs

Probably no surprises there.

Now let’s consider in the real world that we don’t know the starting velocity precisely, and also we don’t know the coefficient of friction precisely. Instead, we might have some idea about the possible values, which could be expressed as a statistical spread. Epstein looks at the case for v0 with a normal distribution and k with a gamma distribution (for specific reasons not that important).

 Mean of v0:   <v0> = 10 m/s

Standard deviation of v0:   σv = 1m/s

Mean of k:    <k> = 10⁻⁴ /s

Standard deviation of k:   σk = 3×10⁻⁵ /s

The particular example he gave has equations that can be easily manipulated, allowing him to derive an analytical result. In 1969 that was necessary. Now we have computers and some lucky people have Matlab. My approach uses Matlab.

What I did was create a set of 1,000 random normally distributed values for v0, with the mean and standard deviation above. Likewise, a set of gamma distributed values for k.

Then we take each pair in turn and produce the velocity vs time curve. Then we look at the stats of the 1,000 curves.
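The original was done in Matlab; here is a minimal Python/NumPy sketch of the same Monte Carlo approach (my reconstruction, not the original script):

```python
# Sketch of the ensemble experiment: sample v0 from a normal distribution and k
# from a gamma distribution with the stated means and standard deviations,
# evolve v(t) = v0*exp(-k*t) for each member, then look at ensemble statistics.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

v0 = rng.normal(10.0, 1.0, n)               # mean 10 m/s, std dev 1 m/s

k_mean, k_std = 1e-4, 3e-5                  # per second
shape = (k_mean / k_std) ** 2               # gamma shape/scale from mean & std dev
scale = k_std ** 2 / k_mean
k = rng.gamma(shape, scale, n)

t = np.logspace(1, 5, 400)                  # 10 s to 100,000 s
v = v0[:, None] * np.exp(-np.outer(k, t))   # one row per ensemble member

ens_mean = v.mean(axis=0)                   # mean of the 1,000 curves
ens_std = v.std(axis=0)                     # rises, peaks, then decays (cf. Figure 2)
best_guess = 10.0 * np.exp(-k_mean * t)     # single run using the mean parameters

print("std dev peaks at ~%.0f s" % t[np.argmax(ens_std)])
```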

Interestingly the standard deviation increases before fading away to zero. It’s easy to see why the standard deviation will end up at zero – because the final velocity is zero. So we could easily predict that. But it’s unlikely we would have predicted that the standard deviation of velocity would start to increase after 3,000 seconds and then peak at around 9,000 seconds.

Here is the graph of standard deviation of velocity vs time:

Ensemble-std-dev-v

Figure 2

Now let’s look at the spread of results. The blue curves in the top graph (below) are each individual ensemble member and the green is the mean of the ensemble results. The red curve is the calculation of velocity against time using the mean of v0 and k:

Ensemble-velocity-spread-vs-mean

Figure 3

The bottom graph zooms in on one portion (note the time axis is now linear), with the thin green lines showing 1 standard deviation either side of the mean.

What is interesting is the significant difference between the mean of the ensemble members and the single value calculated using the mean parameters. This is quite usual with “non-linear” equations (aka the real world).

So, if you aren’t sure about your parameters or your initial conditions, taking the “best value” and running the simulation can well give you a completely different result from sampling the parameters and initial conditions and taking the mean of this “ensemble” of results..

Epstein concludes in his paper:

In general, the ensemble mean value of a variable will follow a different course than that of any single member of the ensemble. For this reason it is clearly not an optimum procedure to forecast the atmosphere by applying deterministic hydrodynamic equations to any single initial condition, no matter how well it fits the available, but nevertheless finite and fallible, set of observations.

Epstein's other 1969 paper, Stochastic Dynamic Prediction, is more involved. He uses Lorenz's "minimum set of atmospheric equations" and compares the results after 3 days of simulation from the "best value" starting point vs an ensemble approach. The best-value approach has significant problems compared with the ensemble approach:

Note that this does not mean the deterministic forecast is wrong, only that it is a poor forecast. It is possible that the deterministic solution would be verified in a given situation but the stochastic solutions would have better average verification scores.

Parameterizations

One of the important points in the earlier work on numerical weather forecasting was the understanding that parameterizations also have uncertainty associated with them.

For readers who haven’t seen them, here’s an example of a parameterization, for latent heat flux, LH:

LH = L × ρ × CDE × Ur × (qs − qa)

which says Latent Heat flux = latent heat of vaporization x density x “aerodynamic transfer coefficient” x wind speed at the reference level x ( humidity at the surface – humidity in the air at the reference level)

The “aerodynamic transfer coefficient” is somewhere around 0.001 over ocean to 0.004 over moderately rough land.
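To get a feel for the numbers, here is a small sketch that plugs some plausible values into the parameterized formula and varies the coefficient. The values are mine, purely for illustration:

```python
# Bulk latent-heat parameterization with illustrative values (orders of magnitude only).
L_v = 2.5e6       # J/kg, latent heat of vaporization
rho = 1.2         # kg/m^3, air density near the surface
U_r = 7.0         # m/s, wind speed at the reference height
q_s = 0.020       # kg/kg, saturation specific humidity at a ~25C sea surface
q_a = 0.015       # kg/kg, specific humidity at the reference height

for C_DE in (0.0010, 0.0015, 0.0040):           # ocean ... moderately rough land
    LH = L_v * rho * C_DE * U_r * (q_s - q_a)   # W/m^2
    print(f"C_DE = {C_DE:.4f}  ->  LH = {LH:.0f} W/m^2")
```

Because the formula is linear in the coefficient, a 10–20% uncertainty in CDE translates directly into a 10–20% uncertainty in the flux.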

The real formula for latent heat transfer is much simpler:

LH = density x latent heat of vaporization x the covariance of vertical eddy velocity with moisture fluctuations – in symbols, LH = ρL⟨w′q′⟩, where ⟨ ⟩ denotes an average and the primes denote fluctuations

These are values that vary even across very small areas and across many timescales. Across one "grid cell" of a numerical model we can't use the "real formula", because we only get to put in one value for vertical eddy velocity and one value for moisture – and in any case we have no way of knowing the values "sub-grid", i.e., at the scale we would need to know them to do an accurate calculation.

That’s why we need parameterizations. By the way, I don’t know whether this is a current formula in use in NWP, but it’s typical of what we find in standard textbooks.

So right away it should be clear why we need to apply the same ensemble approach to the parameters describing these sub-grid processes as well as to the initial conditions. Are we sure that over Connecticut the parameter CDE = 0.004, or should it be 0.0035? In fact, parameters like this are usually calculated from the average of a number of experiments. They conceal as much as they reveal. The correct value probably depends on other parameters. In so far as it represents a real physical property, it will vary depending on the time of day, season and other factors. It might even be, "on average", wrong – because "on average" over the set of experiments was an imperfect sample, and "on average" over all climate conditions is a different sample.

The Numerical Naughties

The insights gained in the stochastic sixties weren’t so practically useful until some major computing power came along in the 90s and especially the 2000s.

Here is Palmer et al (2005):

Ensemble prediction provides a means of forecasting uncertainty in weather and climate prediction. The scientific basis for ensemble forecasting is that in a non-linear system, which the climate system surely is, the finite-time growth of initial error is dependent on the initial state. For example, Figure 2 shows the flow-dependent growth of initial uncertainty associated with three different initial states of the Lorenz (1963) model. Hence, in Figure 2a uncertainties grow significantly more slowly than average (where local Lyapunov exponents are negative), and in Figure 2c uncertainties grow significantly more rapidly than one would expect on average. Conversely, estimates of forecast uncertainty based on some sort of climatological-average error would be unduly pessimistic in Figure 2a and unduly optimistic in Figure 2c.

Ensemble-Palmer-2005-Lorenz-fig2

From Palmer et al 2005

 

Figure 4

The authors then provide an interesting example to demonstrate the practical use of ensemble forecasts. In the top left are the "deterministic predictions" using the "best estimate" of initial conditions. The rest of the charts, 1–50, are the ensemble forecast members, each calculated from slightly different initial conditions. We can see that there was a low yet significant chance of a severe storm:

Ensemble-Palmer-2005-fig3-499px

From Palmer et al 2005

Figure 5

In fact a severe storm did occur so the probabilistic forecast was very useful, in that it provided information not available with the deterministic forecast.

This is a nice illustration of some benefits. It doesn’t tell us how well NWPs perform in general.

One measure is the forecast spread of certain variables as the forecast time increases. Generally single model ensembles don’t do so well – they under-predict the spread of results at later time periods.

Here's an example of the performance of a multi-model ensemble vs single-model ensembles at predicting whether an event will occur or not (intuitively, the axes seem the wrong way round). The single-model versions are over-confident – so when the forecast probability is 1.0 (certain), the observed frequency is about 0.7; when the forecast probability is 0.8, the observed frequency is about 0.6; and so on:

From Palmer et al 2005

Figure 6

We can see that, at least in this case, the multi-model did a pretty good job. However, similar work on forecasting precipitation events showed much less success.
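Curves like those in Figure 6 are known as reliability diagrams. For readers curious how one is constructed, here is a minimal sketch using synthetic data (not the Palmer et al calculation): bin the forecast probabilities and compare each bin with the frequency at which the event actually occurred.

```python
# Minimal reliability-diagram sketch with synthetic data (not Palmer et al's calculation).
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_members = 5000, 50

p_true = rng.uniform(0, 1, n_cases)              # each case's true event probability
occurred = rng.uniform(0, 1, n_cases) < p_true   # did the event actually happen?

# An over-confident ensemble: members cluster around a noisy estimate of p_true
p_est = np.clip(p_true + rng.normal(0, 0.2, n_cases), 0, 1)
members = rng.uniform(0, 1, (n_cases, n_members)) < p_est[:, None]
forecast_prob = members.mean(axis=1)             # fraction of members predicting the event

# Bin the forecast probabilities and compare with the observed frequency in each bin
bin_idx = np.clip((forecast_prob * 10).astype(int), 0, 9)
for b in range(10):
    sel = bin_idx == b
    if sel.any():
        print(f"forecast {b/10:.1f}-{(b + 1)/10:.1f}: observed frequency {occurred[sel].mean():.2f}")
```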

In their paper, Palmer and his colleagues contrast multi-model vs multi-parameterization within one model. I am not clear what the difference is – whether it is a technicality or something fundamentally different in approach. The example above is multi-model. They do give some examples of multi-parameterizations (with a similar explanation to what I gave in the section above). Their paper is well worth a read, as is the paper by Lewis (see links below).

Discussion

The concept of taking a “set of possible initial conditions” for weather forecasting makes a lot of sense. The concept of taking a “set of possible parameterizations” also makes sense although it might be less obvious at first sight.

In the first case we know that we don’t know the precise starting point because observations have errors and we lack a perfect observation system. In the second case we understand that a parameterization is some empirical formula which is clearly not “the truth”, but some approximation that is the best we have for the forecasting job at hand, and the “grid size” we are working to. So in both cases creating an ensemble around “the truth” has some clear theoretical basis.

Now what is also important for this theoretical basis is that we can test the results – at least with weather prediction (NWP). That’s because of the short time periods under consideration.

A statement from Palmer (1999) will resonate in the hearts of many readers:

A desirable if not necessary characteristic of any physical model is an ability to make falsifiable predictions

When we come to ensembles of climate models the theoretical case for multi-model ensembles is less clear (to me). There’s a discussion in IPCC AR5 that I have read. I will follow up the references and perhaps produce another article.

References

The Role of Initial Uncertainties in Prediction, Edward Epstein, Journal of Applied Meteorology (1969) – free paper

Stochastic Dynamic Prediction, Edward Epstein, Tellus (1969) – free paper

On the possible reasons for long-period fluctuations of the general circulation, EN Lorenz, Proc. WMO-IUGG Symposium on Research and Development Aspects of Long-Range Forecasting, Boulder, CO, World Meteorological Organization (1965) – cited from Lewis (2005)

Roots of Ensemble Forecasting, John Lewis, Monthly Weather Review (2005) – free paper

Predicting uncertainty in forecasts of weather and climate, T.N. Palmer (1999), also published as ECMWF Technical Memorandum No. 294 – free paper

Representing Model Uncertainty in Weather and Climate Prediction, TN Palmer, GJ Shutts, R Hagedorn, FJ Doblas-Reyes, T Jung & M Leutbecher, Annual Review Earth Planetary Sciences (2005) – free paper

Read Full Post »

In Part Seven – GCM I  through Part Ten – GCM IV we looked at GCM simulations of ice ages.

These were mostly attempts at “glacial inception”, that is, starting an ice age. But we also saw a simulation of the last 120 kyrs which attempted to model a complete ice age cycle including the last termination. As we saw, there were lots of limitations..

One condition for glacial inception, “perennial snow cover at high latitudes”, could be produced with a high-resolution coupled atmosphere-ocean GCM (AOGCM), but that model did suffer from the problem of having a cold bias at high latitudes.

The (reasonably accurate) simulation of a whole cycle including inception and termination came by virtue of having the internal feedbacks (ice sheet size & height and CO2 concentration) prescribed.

Just to be clear to new readers, these comments shouldn’t indicate that I’ve uncovered some secret that climate scientists are trying to hide, these points are all out in the open and usually highlighted by the authors of the papers.

In Part Nine – GCM III, one commenter highlighted a 2013 paper by Ayako Abe-Ouchi and co-workers, where the journal in question, Nature, had quite a marketing pitch on the paper. I made brief comment on it in a later article in response to another question, including that I had emailed the lead author asking a question about the modeling work (how was a 120 kyr cycle actually simulated?).

Most recently, in Eighteen – "Probably Nonlinearity" of Unknown Origin, another commenter highlighted it, which rekindled my enthusiasm, and I went back and read the paper again. It turns out that my understanding of the paper had been wrong. It wasn't really a GCM paper at all. It was an ice sheet paper.

There is a whole field of papers on ice sheet models deserving attention.

GCM review

Let’s review GCMs first of all to help us understand where ice sheet models fit in the hierarchy of climate simulations.

GCMs consist of a number of different modules coupled together. The first GCMs were mostly "atmospheric GCMs" = AGCMs, and either they had a simplified ocean – a "swamp ocean" or a mixed layer of fixed depth – or they had prescribed ocean boundary conditions set from an ocean model or from an ocean reconstruction.

Less commonly, unless you worked just with oceans, there were ocean GCMs with prescribed atmospheric boundary conditions (prescribed heat and momentum flux from the atmosphere).

Then coupled atmosphere-ocean GCMs came along = AOGCMs. It was a while before these two parts matched up to the point where there was no “flux drift”, that is, no disappearing heat flux from one part of the model.

Why is it so difficult to get these two models working together? One important reason comes down to the time-scales involved, which result from the difference in heat capacity and momentum of the two parts of the climate system. The heat capacity and momentum of the ocean are much, much higher than those of the atmosphere.

And when we add ice sheet models – ISMs – we have yet another time scale to consider:

  • the atmosphere changes in days, weeks and months
  • the ocean changes in years, decades and centuries
  • the ice sheets change in centuries, millennia and tens of millennia

This creates a problem for climate scientists who want to apply the fundamental equations of heat, mass & momentum conservation along with parameterizations for "stuff not well understood" and "stuff quite-well-understood but whose parameters are sub-grid". To run a high resolution AOGCM for a 1,000-year simulation might consume a year of supercomputer time, and the ice sheet has barely moved during that period.

Ice Sheet Models

Scientists who study ice sheets have a whole bunch of different questions. They want to understand how the ice sheets developed.

What makes them grow, shrink, move, slide, melt.. What parameters are important? What parameters are well understood? What research questions are most deserving of attention? And:

Does our understanding of ice sheet dynamics allow us to model the last glacial cycle?

To answer that question we need a model for ice sheet dynamics, and to that model we need to apply boundary conditions from other, "less interesting", models like GCMs. As a result, there are a few approaches to setting the boundary conditions so we can do our interesting work of modeling ice sheets.

Before we look at that, let’s look at the dynamics of ice sheets themselves.

Ice Sheet Dynamics

First, in the theme of the last paper, Eighteen – “Probably Nonlinearity” of Unknown Origin, here is Marshall & Clark 2002:

The origin of the dominant 100-kyr ice-volume cycle in the absence of substantive radiation forcing remains one of the most vexing questions in climate dynamics

We can add that to the 34 papers reviewed in that previous article. This paper by Marshall & Clark is definitely a good quick read for people who want to understand ice sheets a little more.

Ice doesn’t conduct a lot of heat – it is a very good insulator. So the important things with ice sheets happen at the top and the bottom.

At the top, ice melts, and the water refreezes, runs off or evaporates. In combination, the loss is called ablation. Then we have precipitation that adds to the ice sheet. So the net effect determines what happens at the top of the ice sheet.

At the bottom, when the ice sheet is very thin, heat can be conducted through from the atmosphere to the base and make it melt – if the atmosphere is warm enough. As the ice sheet gets thicker, very little heat is conducted through. However, there are two important sources of heat at the base which can result in "basal sliding". One source is geothermal energy. This is around 0.1 W/m², which is very small unless we are dealing with an insulating material (like ice) and lots of time (like ice sheets). The other source is the shear stress in the ice sheet, which can create a lot of heat via the mechanics of deformation.
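To get a feel for why 0.1 W/m² matters over ice-sheet timescales, here is a rough back-of-the-envelope calculation (my own numbers, assuming the base is already at the melting point and all the geothermal heat goes into melting):

```python
# Back-of-envelope: basal melt from geothermal heat, assuming the base is at the
# melting point and all the heat goes into melting ice.
Q = 0.1                     # W/m^2, geothermal heat flux
seconds_per_year = 3.15e7
L_f = 3.34e5                # J/kg, latent heat of fusion of ice
rho_ice = 917.0             # kg/m^3, density of ice

melt_rate = Q * seconds_per_year / (L_f * rho_ice)   # metres of ice per year
print(f"~{melt_rate * 1000:.0f} mm of basal ice melted per year")
print(f"~{melt_rate * 10000:.0f} m over 10,000 years")
```

Around a centimetre a year sounds tiny, but sustained over millennia beneath an insulating ice sheet it adds up – and if the base starts below the melting point, the same heat flux instead goes into slowly warming it.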

Once the ice sheet is able to start sliding, the dynamics create a completely different result compared to an ice sheet “cold-pinned” to the rock underneath.

Some comments from Marshall and Clark:

Ice sheet deglaciation involves an amount of energy larger than that provided directly from high-latitude radiation forcing associated with orbital variations. Internal glaciologic, isostatic, and climatic feedbacks are thus essential to explain the deglaciation.

..Moreover, our results suggest that thermal enabling of basal flow does not occur in response to surface warming, which may explain why the timing of the Termination II occurred earlier than predicted by orbital forcing [Gallup et al., 2002].

Results suggest that basal temperature evolution plays an important role in setting the stage for glacial termination. To confirm this hypothesis, model studies need improved basal process physics to incorporate the glaciological mechanisms associated with ice sheet instability (surging, streaming flow).

..Our simulations suggest that a substantial fraction (60% to 80%) of the ice sheet was frozen to the bed for the first 75 kyr of the glacial cycle, thus strongly limiting basal flow. Subsequent doubling of the area of warm-based ice in response to ice sheet thickening and expansion and to the reduction in downward advection of cold ice may have enabled broad increases in geologically- and hydrologically-mediated fast ice flow during the last deglaciation.

Increased dynamical activity of the ice sheet would lead to net thinning of the ice sheet interior and the transport of large amounts of ice into regions of intense ablation both south of the ice sheet and at the marine margins (via calving). This has the potential to provide a strong positive feedback on deglaciation.

The timescale of basal temperature evolution is of the same order as the 100-kyr glacial cycle, suggesting that the establishment of warm-based ice over a large enough area of the ice sheet bed may have influenced the timing of deglaciation. Our results thus reinforce the notion that at a mature point in their life cycle, 100-kyr ice sheets become independent of orbital forcing and affect their own demise through internal feedbacks.

[Emphasis added]

In this article we will focus on a 2007 paper by Ayako Abe-Ouchi, T Segawa & Fuyuki Saito. This paper uses essentially the same modeling approach later used in Abe-Ouchi's 2013 Nature paper.

The Ice Model

The ice sheet model has a time step of 2 years, with a 1° latitude grid from 30°N to the North Pole, 1° longitude and 20 vertical levels.

Equations for the ice sheet include sliding velocity, ice sheet deformation, the heat transfer through the lithosphere, the bedrock elevation and the accumulation rate on the ice sheet.

Note, there is a reference that some of the model is based on work described in Sensitivity of Greenland ice sheet simulation to the numerical procedure employed for ice sheet dynamics, F Saito & A Abe-Ouchi, Ann. Glaciol., (2005) – but I don’t have access to this journal. (If anyone does, please email the paper to me at scienceofdoom – you know what goes here – gmail.com).

How did they calculate the accumulation on the ice sheet? There is an equation:

Acc = Aref × (1+dP)^Ts

Ts is the surface temperature, dP is a measure of aridity and Aref is a reference value for accumulation. This is a highly parameterized method of calculating how quickly the ice sheet is thickening or thinning. The authors reference Marshall et al 2002 for this equation, and that paper is very instructive in how poorly understood ice sheet dynamics actually are.

Here is one part of the relevant section in Marshall et al 2002:

..For completeness here, note that we have also experimented with spatial precipitation patterns that are based on present-day distributions.

Under this treatment, local precipitation rates diminish exponentially with local atmospheric cooling, reflecting the increased aridity that can be expected under glacial conditions (Tarasov and Peltier, 1999).

Paleo-precipitation under this parameterization has the form:

P(λ,θ,t) = Pobs(λ,θ) × (1+dp)^ΔT(λ,θ,t) × exp[βp·max(hs(λ,θ,t) − ht, 0)]       (18)

The parameter dP in this equation represents the percentage of drying per 1°C; Tarasov and Peltier (1999) choose a value of 3% per °C; dp = 0.03.

[Emphasis added, color added to highlight the relevant part of the equation]

So dp is a parameter that attempts to account for increasing aridity in colder glacial conditions, and in their 2002 paper Marshall et al describe it as 1 of 4 “free parameters” that are investigated to see what effect they have on ice sheet development around the LGM.

Abe-Ouchi and co-authors took a slightly different approach that certainly seems like an improvement over Marshall et al 2002:

Abe-Ouchi-eqn11

So their value of aridity is just a linear function of ice sheet area – from zero to a fixed value, rather than a fixed value no matter the ice sheet size.
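To make the flavour of this concrete, here is a small sketch of the accumulation calculation as described above. This is my paraphrase, with my own sign conventions and invented numbers – not the model code:

```python
# Sketch of the accumulation parameterization described above (my paraphrase,
# with my own sign conventions; the numbers are invented).

def aridity(ice_area_fraction, dP_max=0.03):
    """Abe-Ouchi et al variant: aridity ramps linearly with ice sheet area,
    from 0 (no ice) to dP_max (ice sheet at its reference maximum size)."""
    return dP_max * min(max(ice_area_fraction, 0.0), 1.0)

def accumulation(A_ref, dT_surface, dP):
    """Acc = Aref * (1 + dP)^dT: roughly dP fractional drying per degree of
    cooling, with dT_surface negative when colder than the reference climate."""
    return A_ref * (1.0 + dP) ** dT_surface

A_ref = 0.3                                 # m ice-equivalent per year (invented)
dP = aridity(ice_area_fraction=0.8)         # ice sheet at 80% of its maximum area
print(accumulation(A_ref, dT_surface=-10.0, dP=dP))   # ~0.24 m/yr
```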

How is Ts calculated? That comes, in a way, from the atmospheric GCM, but probably not in a way that readers might expect. So let’s have a look at the GCM then come back to this calculation of Ts.

Atmospheric GCM Simulations

There were three groups of atmospheric GCM simulations, with parameters selected to try and tease out which factors have the most impact.

Group One: high resolution GCM – 1.1º latitude and longitude and 20 atmospheric vertical levels, with fixed sea surface temperatures. So there is no ocean model; the ocean temperatures are prescribed. Within this group, four experiments:

  • A control experiment – modern day values
  • LGM (last glacial maximum) conditions for CO2 (note 1) and orbital parameters with
    • no ice
    • LGM ice extent but zero thickness
    • LGM ice extent and LGM thickness

So the idea is to compare results with and without the actual ice sheet to see how much impact the orbital and CO2 values have vs the effect of the ice sheet itself – and then, for the ice sheet, to see whether the albedo or the elevation has the most impact. Why the elevation? Well, if an ice sheet is 1 km thick then the surface temperature will be something like 6ºC colder. (Exactly how much colder is an interesting question, because we don't know what the lapse rate actually was). There will also be an effect on atmospheric circulation – you've stuck a "mountain range" in the path of the wind, so this changes the circulation.

Each of the four simulations was run for 11 or 13 years and the last 10 years’ results used:

From Abe-Ouchi et al 2007

Figure 1

It’s clear from this simulation that the full result (left graphic) is mostly caused by the ice sheet (right graphic) rather than CO2, orbital parameters and the SSTs (middle graphic). And the next figure in the paper shows the breakdown between the albedo effect and the height of the ice sheet:

From Abe-Ouchi et al 2007

Figure 2 – same color legend as figure 1

Now, a lapse rate of 5 K/km was used. What would happen if a lapse rate of 9 K/km were used instead? There were no simulations done with different lapse rates.

..Other lapse rates could be used which vary depending on the altitude or location, while a lapse rate larger than 7 K/km or smaller than 4 K/km is inconsistent with the overall feature. This is consistent with the finding of Krinner and Genthon (1999), who suggest a lapse rate of 5.5 K/km, but is in contrast with other studies which have conventionally used lapse rates of 8 K/km or 6.5 K/km to drive the ice sheet models..

Group Two – medium resolution GCM, 2.8º latitude and longitude and 11 atmospheric vertical levels, with a "slab ocean" – this means the ocean is treated as one temperature through the depth of some fixed layer, like 50 m. So it allows the ocean to act as a heat sink/source responding to the climate, but with no heat transfer through to a deeper ocean.

There were five simulations in this group, one control (modern day everything) and four with CO2 & orbital parameters at the LGM:

  • no ice sheet
  • LGM ice extent, but flat
  • 12 kyrs ago ice extent, but flat
  • 12 kyrs ago ice extent and height

So this group takes a slightly more detailed look at ice sheet impact. Not surprisingly the simulation results give intermediate values for the ice sheet extent at 12 kyrs ago.

Group Three – medium resolution GCM as in Group Two, with ice sheets either at present day or LGM extent, and nine simulations covering different orbital values and different CO2 values: present day, 280 ppm or 200 ppm.

There was also some discussion of the impact of different climate models. I found this fascinating because the difference between CCSM and the other models appears to be as great as the difference in figure 2 (above) which identifies the albedo effect as more significant than the lapse rate effect:

From Abe-Ouchi et al 2007

Figure 3

And this naturally has me wondering about how much significance to put on the GCM simulation results shown in the paper. The authors also comment:

Based on these GCM results we conclude there remains considerable uncertainty over the actual size of the albedo effect.

Given there is also uncertainty over the lapse rate that actually occurred, it seems there is considerable uncertainty over everything.

Now let’s return to the ice sheet model, because so far we haven’t seen any output from the ice sheet model.

GCM Inputs into the Ice Sheet Model

The equation which calculates the change in accumulation on the ice sheet used a fairly arbitrary parameter dp, with (1+dp) raised to the power of Ts.

The ice sheet model has a 2-year time step. The GCM results don't provide Ts across the surface grid every 2 years; they are snapshots for certain conditions. The ice sheet model uses this calculation for Ts:

Ts = Tref + ΔTice + ΔTco2 + ΔTinsol + ΔTnonlinear

Tref is the reference temperature, which is the present day climatology. The other ΔT (change in temperature) values are basically linear interpolations between two values from the GCM simulations. Here is the ΔTCO2 term:

Abe-Ouchi-2007-eqn6

 

So think of it like this – we have found Ts at one higher value of CO2 and one lower value of CO2 from some snapshot GCM simulations. We plot a graph with CO2 on the x-axis and Ts on the y-axis, with just two points on the graph from these two experiments, and we draw a straight line between the two points.

To calculate Ts at, say, 50 kyrs ago, we look up the CO2 value at 50 kyrs from ice core data, and read the value of ΔTCO2 from the straight line on the graph.

Likewise for the other parameters. Here is ΔTinsol:

Abe-Ouchi-eqn7

 

So the method is extremely basic. Of course the model needs something..
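To make the look-up concrete, here is a sketch of the CO2 term as I read the description above. The snapshot values are invented for illustration:

```python
# Sketch of the snapshot interpolation described above (my reading of the method;
# the GCM snapshot values are invented for illustration).
import numpy as np

co2_snapshots = np.array([200.0, 280.0])   # ppm: an LGM-style run and a pre-industrial run
dT_snapshots = np.array([-4.0, 0.0])       # K: invented temperature differences from the GCM

def dT_co2(co2_now):
    """Linear interpolation (and extrapolation) between the two GCM snapshots."""
    slope = (dT_snapshots[1] - dT_snapshots[0]) / (co2_snapshots[1] - co2_snapshots[0])
    return dT_snapshots[0] + slope * (co2_now - co2_snapshots[0])

# At each 2-year time step of the ice sheet model: look up CO2 from the ice core
# record at that date, and read the temperature contribution off the straight line.
for co2 in (190.0, 220.0, 260.0):
    print(f"CO2 = {co2:.0f} ppm  ->  dT_CO2 = {dT_co2(co2):.1f} K")
```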

Now, given that we have inputs for accumulation on the ice sheet, the ice sheet model can run. Here are the results. The third graph (3) is the sea level from proxy results so is our best estimate of reality, with (4) providing model outputs for different parameters of d0 (“desertification” or aridity) and lapse rate, and (5) providing outputs for different parameters of albedo and lapse rate:

From Abe-Ouchi et al 2007

Figure 4

There are three main points of interest.

Firstly, small changes in the parameters cause huge changes in the final results. The idea of aridity over ice sheets as just a linear function of ice sheet size is itself very questionable. The idea of a constant lapse rate is extremely questionable. Together, using values that appear realistic, we can model much less ice sheet growth (sea level drop) or many times greater ice sheet growth than actually occurred.

Secondly, notice that for realistic results the time of maximum ice sheet (lowest sea level) has sea level starting to rise around 12 kyrs ago, rather than the actual 18 kyrs ago. This might be due to the impact of orbital factors, which were at quite a low level (i.e., high latitude summer insolation was quite low) when the last ice age finished, but have quite an impact in the model. Of course, we have covered this "problem" in a few previous articles in this series. In the context of this model it might be that the impact of the southern hemisphere leading the globe out of the last ice age is completely missing.

Thirdly – while this might be clear to some people, for many readers new to this kind of model it won't be obvious – the inputs to the model are drawn from the actual history. The model doesn't simulate the actual start and end of the last ice age "by itself". We feed into the GCM a few CO2 values. We feed into the GCM a few ice sheet extents and heights that (as best as can be reconstructed) actually occurred. The GCM gives us some temperature values for these snapshot conditions.

In the case of this ice sheet model, every 2 years (each time step of the ice sheet model) we “look up” the actual value of ice sheet extent and atmospheric CO2 and we linearly interpolate the GCM output temperatures for the current year. And then we crudely parameterize these values into some accumulation rate on the ice sheet.

Conclusion

This is our first foray into ice sheet models. It should be clear that the results are interesting but we are at a very early stage in modeling ice sheets.

The problems are:

  • the computational load required to run a GCM coupled with an ice sheet model over 120 kyrs is much too high, so it can’t be done
  • the resulting tradeoff uses a few GCM snapshot values to feed linearly interpolated temperatures into a parameterized accumulation equation
  • the effect of lapse rate on the results is extremely large and the actual value for lapse rate over ice sheets is very unlikely to be a constant and is also not known
  • our understanding of the fundamental equations for ice sheets is still at an early stage, as readers can see by reviewing the first two papers below, especially the second one

 Articles in this Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factors of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

References

Basal temperature evolution of North American ice sheets and implications for the 100-kyr cycle, SJ Marshall & PU Clark, GRL (2002) – free paper

North American Ice Sheet reconstructions at the Last Glacial Maximum, SJ Marshall, TS James, GKC Clarke, Quaternary Science Reviews (2002) – free paper

Climatic Conditions for modelling the Northern Hemisphere ice sheets throughout the ice age cycle, A Abe-Ouchi, T Segawa, and F Saito, Climate of the Past (2007) – free paper

Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume, Ayako Abe-Ouchi, Fuyuki Saito, Kenji Kawamura, Maureen E. Raymo, Jun’ichi Okuno, Kunio Takahashi & Heinz Blatter, Nature (2013) – paywall paper

Notes

Note 1 – the value of CO2 used in these simulations was 200 ppm, while CO2 at the LGM was actually 180 ppm. Apparently this value of 200 ppm was used in a major inter-comparison project (the PMIP), but I don’t know the reason why. PMIP = Paleoclimate Modelling Intercomparison Project, Joussaume and Taylor, 1995.

Read Full Post »
