
Archive for the ‘Atmospheric Physics’ Category

The atmosphere cools to space by radiation. Well, without getting into all the details, the surface also cools to space by radiation, but only a small fraction of the radiation emitted by the surface escapes directly to space (note 1). Most surface radiation is absorbed by the atmosphere. And of course the surface mostly cools by convection into the troposphere (lower atmosphere).

If there were no radiatively-active gases (aka “GHG”s) in the atmosphere then the atmosphere couldn’t cool to space at all.

Technically, the emissivity of the atmosphere would be zero. Emission is determined by the local temperature of the atmosphere and its emissivity. Wavelength by wavelength, emissivity is equal to absorptivity – another technical term, which says what proportion of incident radiation is absorbed by the atmosphere. If the atmosphere can’t emit, it can’t absorb (note 2).

So as you increase the GHGs in the atmosphere you increase its ability to cool to space. A lot of people grasp this at some point during their climate science journey and finally realize how they have been duped by climate science all along! It’s irrefutable – more GHGs mean more cooling to space, so more GHGs mean less global warming!

Ok, it’s true. Now the game’s up, I’ll pack up Science of Doom into a crate and start writing about something else. Maybe cognitive dissonance..

Bye everyone!

Halfway through boxing everything up I realized there was a little complication to the simplicity of that paragraph. The atmosphere with more GHGs has a higher emissivity, but also a higher absorptivity.

Let’s draw a little diagram. Here are two “layers” (see note 3) of the atmosphere in two different cases. On the left 400 ppmv CO2, on the right 500 ppmv CO2 (and relative humidity of water vapor was set at 50%, surface temperature at 288 K):

[Image: Cooling-to-space-2a]

Figure 1

It’s clear that the two layers are both emitting more radiation with more CO2. More cooling to space.

For interest, the “total emissivity” of the top layer is 0.190 in the first case and 0.197 in the second case. The layer below has 0.389 and 0.395.

Let’s take a look at all of the numbers and see what is going on. This diagram is a little busier:

[Image: Cooling-to-space-3a]

Figure 2

The key point is that the OLR (outgoing longwave radiation) is lower in the case with more CO2. Yet each layer is emitting more radiation. How can this be?

Take a look at the radiation entering the top layer on the left = 265.1, and add to that the emitted radiation = 23.0 – the total is 288.1. Now subtract the radiation leaving through the top boundary = 257.0 and we get the radiation absorbed in the layer. This is 31.1 W/m².

Compare that with the same calculation with more CO2 – the absorption is 32.2 W/m².
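
The same bookkeeping in a couple of lines – a trivial sketch, with the numbers read off Figure 2 for the 400 ppmv case (the 500 ppmv numbers give 32.2 W/m² in the same way):

```matlab
% What a layer absorbs = what enters from below + what it emits - what leaves through the top.
entering = 265.1;   % upward flux entering the bottom of the layer, W/m2
emitted  = 23.0;    % radiation emitted by the layer itself, W/m2
leaving  = 257.0;   % upward flux leaving through the top boundary, W/m2
absorbed = entering + emitted - leaving   % = 31.1 W/m2
```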

This is the case all the way up through the atmosphere – each layer emits more because its emissivity has increased, but it also absorbs more because its absorptivity has increased by the same amount.

So more cooling to space, but unfortunately more absorption of the radiation below – two competing terms.

So why don’t they cancel out?

Emission of radiation is a result of local temperature and emissivity.

Absorption of radiation is the result of the incident radiation and absorptivity. The incident upward radiation started lower in the atmosphere, where it is hotter, so it is more intense than the radiation the layer itself emits. That is why the increase in absorption always outweighs the increase in emission (note 4).

Conceptual Problems?

If it’s still not making sense then think about what happens as you reduce the GHGs in the atmosphere. The atmosphere emits less, but it also absorbs even less of the radiation from below. So the outgoing longwave radiation increases. More surface radiation is making it to the top of atmosphere without being absorbed. So there is less cooling to space from the atmosphere, but more cooling to space from the surface and atmosphere combined.

If you add lagging to a pipe, the temperature of the pipe increases (assuming of course it is “internally” heated with hot water). And yet, the pipe cools to the surrounding room via the lagging! Does that mean more lagging, more cooling? No, it’s just the transfer mechanism for getting the heat out.

That was just an analogy. Analogies don’t prove anything. If well chosen, they can be useful in illustrating problems. End of analogy disclaimer.

If you want to understand more about how radiation travels through the atmosphere and how GHG changes affect this journey, take a look at the series Visualizing Atmospheric Radiation.

 

Notes

Note 1: For more on the details see

Note 2: A very basic point – absolutely essential for understanding anything at all about climate science – is that the absorptivity of the atmosphere can be (and is) totally different from its emissivity when you are considering different wavelengths. The atmosphere is quite transparent to solar radiation, but quite opaque to terrestrial radiation – because they are at different wavelengths. 99% of solar radiation is at wavelengths less than 4 μm, and 99% of terrestrial radiation is at wavelengths greater than 4 μm. That’s because the sun’s surface is around 6000K while the earth’s surface is around 290K. So the atmosphere has low absorptivity for solar radiation (<4 μm) but high absorptivity – and therefore high emissivity – for terrestrial radiation (>4 μm).
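
Those percentages are easy to check numerically. A minimal sketch using Planck’s law (the exact fractions depend slightly on the temperatures chosen):

```matlab
% Fraction of blackbody emission at wavelengths below 4 um, for a ~6000 K sun
% and a ~290 K surface, by numerical integration of the Planck function.
h = 6.626e-34; c = 3.0e8; kB = 1.381e-23; sigma = 5.67e-8;
planck = @(lam, T) 2*pi*h*c^2 ./ lam.^5 ./ (exp(h*c ./ (lam*kB*T)) - 1);  % spectral exitance, W/m2 per m
for T = [6000 290]
    below4um = integral(@(lam) planck(lam, T), 1e-8, 4e-6);   % integrate 0.01 - 4 um
    fprintf('T = %4.0f K: %5.2f%% of emission is below 4 um\n', T, 100 * below4um / (sigma * T^4));
end
% Prints about 99% for 6000 K and well under 1% for 290 K.
```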

Note 3: Any numerical calculation has to create some kind of grid. This is a very coarse grid, with 10 layers of roughly equal pressure in the atmosphere from the surface to 200 mbar. The grid assumes there is just one temperature for each layer. Of course the temperature is decreasing as you go up. We could divide the atmosphere into 30 layers instead. We would get more accurate results. We would find the same effect.

Note 4: The equations for radiative transfer are found in Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. The equations prove this effect.

Read Full Post »

In Part One we had a look at some introductory ideas. In this article we will look at one of the ground-breaking papers in chaos theory – Deterministic nonperiodic flow, Edward Lorenz (1963). It has been cited more than 13,500 times.

There might be some introductory books on non-linear dynamics and chaos that don’t include a discussion of this paper – or at least a mention – but they will be in a small minority.

Lorenz was thinking about convection in the atmosphere, or any fluid heated from below, and reduced the problem to just three simple equations. However, the equations were still non-linear and because of this they exhibit chaotic behavior.

Cencini et al describe Lorenz’s problem:

Consider a fluid, initially at rest, constrained by two infinite horizontal plates maintained at constant temperature and at a fixed distance from each other. Gravity acts on the system perpendicular to the plates. If the upper plate is maintained hotter than the lower one, the fluid remains at rest and in a state of conduction, i.e., a linear temperature gradient establishes between the two plates.

If the temperatures are inverted, gravity induced buoyancy forces tend to rise toward the top the hotter, and thus lighter fluid, that is at the bottom. This tendency is contrasted by viscous and dissipative forces of the fluid so that the conduction state may persist.

However, as the temperature differential exceeds a certain amount, the conduction state is replaced by a steady convection state: the fluid motion consists of steady counter-rotating vortices (rolls) which transport upwards the hot/light fluid in contact with the bottom plate and downwards the cold heavy fluid in contact with the upper one.

The steady convection state remains stable up to another critical temperature difference above which it becomes unsteady, very irregular and hardly predictable.

Willem Malkus and Lou Howard of MIT came up with an equivalent system – the simplest version is shown in this video:

Figure 1

Steven Strogatz’s Nonlinear Dynamics and Chaos (1994) – an excellent introduction to dynamical and chaotic systems – explains and derives the equivalence between the classic Lorenz equations and this tilted waterwheel.

L63 (as I’ll call these equations) has three variables apart from time: intensity of convection (x), temperature difference between ascending and descending currents (y), deviation of temperature from a linear profile (z).

Here are some calculated results for L63 for the “classic” parameter values and three very slightly different initial conditions (blue, red, green in each plot) over 5,000 seconds, showing the first and last 50 seconds – click to expand:

[Image: Lorenz63-5ksecs-x-y-vs-time-499px]

Figure 2 – click to expand – initial conditions x,y,z = 0, 1, 0;  0, 1.001, 0;  0, 1.002, 0

We can see that quite early on the three trajectories diverge, and 5,000 seconds later the system still exhibits similar “non-periodic” characteristics.

For interest let’s zoom in on just over 10 seconds of ‘x’ near the start and end:

[Image: Lorenz63-5ksecs-x-vs-time-zoom-499px]

Figure 3

Going back to an important point from the first post, some chaotic systems will have predictable statistics even if the actual state at any future time is impossible to determine (due to uncertainty over the initial conditions).

So we’ll take a look at the statistics via a running average – click to expand:

[Image: Lorenz63-5ksecs-x-y-vs-time-average-499px]

Figure 4 – click to expand

Two things stand out – first of all the running average over more than 100 “oscillations” still shows a large amount of variability. So at any one time, if we were to calculate the average from our current and historical experience we could easily end up calculating a value that was far from the “long term average”. Second – the “short term” average, if we can call it that, shows large variation at any given time between our slightly divergent initial conditions.

So we might believe – and be correct – that the long term statistics of slightly different initial conditions are identical, yet be fooled in practice.

Of course, surely it sorts itself out over a longer time scale?

I ran the same simulation (with just the first two starting conditions) for 25,000 seconds and then used a filter window of 1,000 seconds – click to expand:

[Image: Lorenz63-25ksecs-x-time-1000s-average-499px]

Figure 5 – click to expand

The total variability is less, but we have a similar problem – it’s just lower in magnitude. Again we see that the statistics of two slightly different initial conditions – if we were to view them by the running average at any one time –  are likely to be different even over this much longer time frame.

From this 25,000 second simulation:

  • take 10,000 random samples each of 25 second length and plot a histogram of the means of each sample (the sample means)
  • same again for 100 seconds
  • same again for 500 seconds
  • same again for 3,000 seconds

Repeat for the data from the other initial condition.
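
A minimal sketch of that resampling (assuming the 25,000 second series for x is already in memory as a vector x, sampled every 0.01 s):

```matlab
dt = 0.01;
window_lengths = [25 100 500 3000];              % window lengths in seconds
for i = 1:4
    n = round(window_lengths(i) / dt);           % points per window
    starts = randi(numel(x) - n + 1, 10000, 1);  % 10,000 random window positions
    sample_means = arrayfun(@(s) mean(x(s:s+n-1)), starts);
    subplot(2, 2, i); histogram(sample_means, 50);
    title(sprintf('%d s windows', window_lengths(i)));
end
```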

Here is the result:

[Image: Lorenz-25000s-histogram-of-means-2-conditions]

Figure 6

To make it easier to see, here is the difference between the two sets of histograms, normalized by the maximum value in each set:

[Image: Lorenz-25000s-delta-histogram]

Figure 7

This is a different way of viewing what we saw in figures 4 & 5.

The spread of sample means shrinks as we increase the time period but the difference between the two data sets doesn’t seem to disappear (note 2).

Attractors and Phase Space

The above plots show how variables change with time. There’s another way to view the evolution of system dynamics and that is by “phase space”. It’s a name for a different kind of plot.

So instead of plotting x vs time, y vs time and z vs time – let’s plot x vs y vs z – click to expand:

[Image: Lorenz63-first-50s-x-y-z-499px]

Figure 8 – Click to expand – the colors blue, red & green represent the same initial conditions as in figure 2

Without some dynamic animation we can’t now tell how fast the system evolves. But we learn something else that turns out to be quite amazing. The system always ends up in the same region of “phase space”. Perhaps that doesn’t seem amazing yet..

Figure 8 was with three initial conditions that are almost identical. Let’s look at three initial conditions that are very different: x,y,z = 0, 1, 0;   5, 5, 5;   20, 8, 1:

[Image: Lorenz63-first-50s-x-y-z-3-different-conditions-499px]

Figure 9 – Click to expand

Here’s an example (similar to figure 8) from Strogatz – a set of 10,000 closely separated initial conditions and how they separate at 3, 6, 9 and 15 seconds. The two key points:

  1. the fast separation of initial conditions
  2. the long term position of any of the initial conditions is still on the “attractor”
From Strogatz 1994

Figure 10

A dynamic visualization on Youtube with 500,000 initial conditions:

Figure 11

There’s a lot of theory around all of this, as you might expect. But in brief, in a “dissipative system” the “phase volume” contracts exponentially to zero. Yet the Lorenz system somehow doesn’t quite manage that. Instead, there are an infinite number of 2-d surfaces. Or something. For the sake of a not overly complex discussion, a wide range of initial conditions ends up on something very close to a 2-d surface.

This is known as a strange attractor. And the Lorenz strange attractor looks like a butterfly.

Conclusion

Lorenz 1963 reduced convective flow (e.g., heating an atmosphere from the bottom) to a simple set of equations. Obviously these equations are a massively over-simplified version of anything like the real atmosphere. Yet, even with this very simple set of equations we find chaotic behavior.

Chaotic behavior in this example means:

  • very small differences get amplified extremely quickly so that no matter how much you increase your knowledge of your starting conditions it doesn’t help much (note 3)
  • starting conditions within certain boundaries will always end up within “attractor” boundaries, even though there might be non-periodic oscillations around this attractor
  • the long term (infinite) statistics can be deterministic but over any “smaller” time period the statistics can be highly variable

Articles in the Series

Natural Variability and Chaos – One – Introduction

Natural Variability and Chaos – Two – Lorenz 1963

Natural Variability and Chaos – Three – Attribution & Fingerprints

Natural Variability and Chaos – Four – The Thirty Year Myth

Natural Variability and Chaos – Five – Why Should Observations match Models?

Natural Variability and Chaos – Six – El Nino

Natural Variability and Chaos – Seven – Attribution & Fingerprints Or Shadows?

Natural Variability and Chaos – Eight – Abrupt Change

References

Deterministic nonperiodic flow, EN Lorenz, Journal of the Atmospheric Sciences (1963)

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Nonlinear Dynamics and Chaos, Steven H. Strogatz, Perseus Books (1994)

Notes

Note 1: The Lorenz equations:

dx/dt = σ (y-x)

dy/dt = rx – y – xz

dz/dt = xy – bz

where

x = intensity of convection

y = temperature difference between ascending and descending currents

z = deviation of temperature from a linear profile

σ = Prandtl number, ratio of momentum diffusivity to thermal diffusivity

r = Rayleigh number

b = a geometric factor (related to the aspect ratio of the convection rolls)

And the “classic parameters” are σ=10, b = 8/3, r = 28
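
For anyone who wants to reproduce the flavor of figures 2 and 3, here is a minimal sketch (not the program used to produce the figures in this article) that integrates the equations for two nearby initial conditions:

```matlab
sigma = 10; b = 8/3; r = 28;                   % the "classic" parameters
f = @(t, v) [sigma * (v(2) - v(1));            % dx/dt
             r * v(1) - v(2) - v(1) * v(3);    % dy/dt
             v(1) * v(2) - b * v(3)];          % dz/dt
[t1, v1] = ode45(f, 0:0.01:50, [0 1 0]);       % first initial condition
[t2, v2] = ode45(f, 0:0.01:50, [0 1.001 0]);   % slightly different initial condition
plot(t1, v1(:,1), t2, v2(:,1));                % watch x(t) diverge
xlabel('time'); ylabel('x');
```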

Note 2: Lorenz 1963 has over 13,000 citations, so I haven’t been able to search the literature to find out whether this system of equations is transitive or intransitive. Running Matlab on a home Mac reaches some limitations and I maxed out at 25,000 second simulations mapped onto a 0.01 second time step.

However, I’m not trying to prove anything specifically about the Lorenz 1963 equations; I’m more interested in illustrating some important characteristics of chaotic systems.

Note 3: Small differences in initial conditions grow exponentially, until we reach the limits of the attractor. So it’s easy to show the “benefit” of more accurate data on initial conditions.

If we increase our precision on initial conditions by a factor of 1,000,000, the prediction time increases by a massive 2½ times.
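
A rough way to see why – a back-of-envelope argument, assuming small errors grow exponentially at a rate set by the largest Lyapunov exponent λ:

\[
\delta(t) \approx \delta_0 \, e^{\lambda t}
\quad\Rightarrow\quad
t_{pred} \approx \frac{1}{\lambda}\ln\!\left(\frac{a}{\delta_0}\right)
\]

where a is the error size at which the forecast becomes useless. Reducing the initial error from δ₀ to δ₀/10⁶ adds only (1/λ)·ln(10⁶) ≈ 13.8/λ to the prediction horizon – a logarithmic gain, not a million-fold one.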

Read Full Post »

Over the last few years I’ve written lots of articles relating to the inappropriately-named “greenhouse” effect and covered some topics in great depth. I’ve also seen lots of comments and questions which have helped me understand common confusions and misunderstandings.

This article, with huge apologies to regular long-suffering readers, covers familiar ground in simple terms. It’s a reference article. I’ve referenced other articles and series as places to go to understand a particular topic in more detail.

One of the challenges of writing a short, simple explanation is that it opens you up to the criticism of having omitted important technical details – details left out precisely in order to keep it short. Remember, this is the simple version..

Preamble

First of all, the “greenhouse” effect is not AGW. In maths, physics, engineering and other hard sciences, one block is built upon another block. AGW is built upon the “greenhouse” effect. If AGW is wrong, it doesn’t invalidate the greenhouse effect. If the greenhouse effect is wrong, it does invalidate AGW.

The greenhouse effect is built on very basic physics, proven for 100 years or so, that is not in any dispute in scientific circles. Fantasy climate blogs of course do dispute it.

Second, common experience of linearity in everyday life causes many people to question how a tiny proportion of “radiatively-active” molecules can have such a profound effect. Common experience is not a useful guide. Non-linearity is the norm in real science. Since the Enlightenment at least, scientists have measured things rather than just assumed consequences based on everyday experience.

The Elements of the “Greenhouse” Effect

Atmospheric Absorption

1. The “radiatively-active” gases in the atmosphere:

  • water vapor
  • CO2
  • CH4
  • N2O
  • O3
  • and others

absorb radiation from the surface and transfer this energy via collision to the local atmosphere. Oxygen and nitrogen absorb such a tiny amount of terrestrial radiation that even though they constitute an overwhelming proportion of the atmosphere their radiative influence is insignificant (note 1).

How do we know all this? It’s basic spectroscopy, as detailed in exciting journals like the Journal of Quantitative Spectroscopy and Radiative Transfer over many decades. Shine radiation of a specific wavelength through a gas and measure the absorption. Simple stuff and irrefutable.

Atmospheric Emission

2. The “radiatively-active” gases in the atmosphere also emit radiation. Gases that absorb at a wavelength also emit at that wavelength. Gases that don’t absorb at that wavelength don’t emit at that wavelength. This is a consequence of Kirchhoff’s law.

The intensity of emission of radiation from a local portion of the atmosphere is set by the atmospheric emissivity and the temperature.

Convection

3. The transfer of heat within the troposphere is mostly by convection. The sun heats the surface of the earth through the (mostly) transparent atmosphere (note 2). The rate at which temperature falls with height, known as the “lapse rate”, is around 6 K/km in the tropics. The lapse rate is principally determined by non-radiative factors – as a parcel of air ascends it expands into the lower pressure and cools during that expansion (note 3).
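
A quick check on that number – this is the dry adiabatic value, before the effect of condensation is included:

```matlab
g  = 9.81;     % gravitational acceleration, m/s^2
cp = 1004;     % specific heat of air at constant pressure, J/(kg K)
dry_lapse_rate_per_km = g / cp * 1000    % ~9.8 K per km for a dry parcel
% Condensation of water vapor releases latent heat as a parcel rises, which is
% why the observed tropical lapse rate is nearer 6 K/km than this dry value.
```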

The important point is that the atmosphere is cooler the higher you go (within the troposphere).

Energy Balance

4. The overall energy in the climate system is determined by the absorbed solar radiation and the emitted radiation from the climate system. The absorbed solar radiation – globally annually averaged – is approximately 240 W/m² (note 4). Unsurprisingly, the emitted radiation from the climate system is also (globally annually averaged) approximately 240 W/m². Any imbalance between the two and the climate is cooling or warming.

Emission to Space

5. Most of the emission of radiation to space by the climate system is from the atmosphere, not from the surface of the earth. This is a key element of the “greenhouse” effect. The intensity of emission depends on the local temperature of the atmosphere: the temperature of the region from which the emission originates determines the amount of radiation.

If the place of emission of radiation – on average – moves upward for some reason then the intensity decreases. Why? Because it is cooler the higher up you go in the troposphere. Likewise, if the place of emission – on average – moves downward for some reason, then the intensity increases (note 5).
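
To put rough numbers on this, using the Stefan-Boltzmann law and treating the emission to space as if it came from a single effective temperature (a simplification, of course):

```matlab
sigma = 5.67e-8;                    % Stefan-Boltzmann constant, W/(m^2 K^4)
T_effective = (240 / sigma)^0.25    % ~255 K, the emission temperature "seen" from space
E_surface   = sigma * 288^4         % ~390 W/m2, what a 288 K surface emits
% The climate system emits ~240 W/m2 to space, much less than the surface emits,
% because most of the emission to space comes from colder air higher up.
```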

More GHGs

6. If we add more radiatively-active gases (like water vapor and CO2) then the atmosphere becomes more “opaque” to terrestrial radiation and the consequence is the emission to space from the atmosphere moves higher up (on average). Higher up is colder. See note 6.

So this reduces the intensity of emission of radiation, which reduces the outgoing radiation, which therefore adds energy into the climate system. And so the climate system warms (see note 7).

That’s it!

It’s as simple as that. The end.

A Few Common Questions

CO2 is Already Saturated

There are almost 315,000 individual absorption lines for CO2 recorded in the HITRAN database. Some absorption lines are stronger than others. At the strongest point of absorption – 14.98 μm (667.5 cm-1), 95% of radiation is absorbed in only 1m of the atmosphere (at standard temperature and pressure at the surface). That’s pretty impressive.

By contrast, from 570 – 600 cm-1 (16.7 – 17.5 μm) and 730 – 770 cm-1 (13.0 – 13.7 μm) the CO2 absorption through the atmosphere is nowhere near “saturated”. It’s more like 30% absorbed through a 1km path.
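
A sketch of what those figures mean in terms of the Beer-Lambert law – the optical depths below are simply back-calculated from the absorption fractions quoted above:

```matlab
% Transmittance through a path of optical depth tau is t = exp(-tau).
tau_centre  = -log(0.05)             % ~3 per metre at the 667.5 cm-1 line centre (95% absorbed in 1 m)
tau_wing    = -log(0.70)             % ~0.36 per km in the band wings (~30% absorbed in 1 km)
t_wing_10km = exp(-10 * tau_wing)    % ~3% of wing radiation transmitted through 10 km
```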

You can see the complexity of these results in many graphs in Atmospheric Radiation and the “Greenhouse” Effect – Part Nine – calculations of CO2 transmittance vs wavelength in the atmosphere using the 300,000 absorption lines from the HITRAN database, and see also Part Eight – interesting actual absorption values of CO2 in the atmosphere from Grant Petty’s book

The complete result combining absorption and emission is calculated in Visualizing Atmospheric Radiation – Part Seven – CO2 increases – changes to TOA in flux and spectrum as CO2 concentration is increased

CO2 Can’t Absorb Anything of Note Because it is Only .04% of the Atmosphere

See the point above. Many spectroscopy professionals have measured the absorptivity of CO2. It has a huge variability in absorption, but the most impressive is that 95% of 14.98 μm radiation is absorbed in just 1m. How can that happen? Are spectroscopy professionals charlatans? You need evidence, not incredulity. Science involves measuring things and this has definitely been done. See the HITRAN database.

Water Vapor Overwhelms CO2

This is an interesting point, although not correct when we consider energy balance for the climate. See Visualizing Atmospheric Radiation – Part Four – Water Vapor – results of surface (downward) radiation and upward radiation at TOA as water vapor is changed.

The key point behind all the detail is that the top of atmosphere radiation change (as CO2 changes) is the important one. The surface radiation change (surface forcing) from increasing CO2 is not the important one – it is definitely much weaker and is often insignificant. Surface radiation changes from CO2 will, in many cases, be overwhelmed by water vapor.

Water vapor does not overwhelm CO2 high up in the atmosphere because there is very little water vapor there – and the radiative effect of water vapor is dramatically impacted by its concentration, due to the “water vapor continuum”.

The Calculation of the “Greenhouse” Effect is based on “Average Surface Temperature” and there is No Such Thing

Simplified calculations of the “greenhouse” effect use some averages to make some points. They help to create a conceptual model.

Real calculations, using the equations of radiative transfer, don’t use an “average” surface temperature and don’t rely on a 33K “greenhouse” effect. Would the temperature decrease 33K if all of the GHGs were removed from the atmosphere? Almost certainly not. Because of feedbacks. We don’t know the effect of all of the feedbacks. But would the climate be colder? Definitely.

See The Rotational Effect – why the rotation of the earth has absolutely no effect on climate, or so a parody article explains..

The Second Law of Thermodynamics Prohibits the Greenhouse Effect, or so some Physicists Demonstrated..

See The Three Body Problem – a simple example with three bodies to demonstrate how a “with atmosphere” earth vs a “without atmosphere” earth will generate different equilibrium temperatures. Please review the entropy calculations and explain (you will be the first) where they are wrong, or perhaps explain why entropy doesn’t matter (and revolutionize the field).

See Gerlich & Tscheuschner for the switch and bait routine by this operatic duo.

And see Kramm & Dlugi On Dodging the “Greenhouse” Bullet – Kramm & Dlugi demonstrate that the “greenhouse” effect doesn’t exist by writing a few words in a conclusion but carefully dodging the actual main point throughout their entire paper. However, they do recover Kepler’s laws and point out a few errors in a few websites. And note that one of the authors kindly showed up to comment on this article but never answered the important question asked of him. Probably just too busy.. Kramm & Dlugi also helpfully (unintentionally) explain that G&T were wrong, see Kramm & Dlugi On Illuminating the Confusion of the Unclear – Kramm & Dlugi step up as skeptics of the “greenhouse” effect, fans of Gerlich & Tscheuschner and yet clarify that colder atmospheric radiation is absorbed by the warmer earth..

And for more on that exciting subject, see Confusion over the Basics under the sub-heading The Second Law of Thermodynamics.

Feedbacks overwhelm the Greenhouse Effect

This is a totally different question. The “greenhouse” effect is the “greenhouse” effect. If the effect of more CO2 is totally countered by some feedback then that will be wonderful. But that is actually nothing to do with the “greenhouse” effect. It would be a consequence of increasing temperature.

As noted in the preamble, it is important to separate out the different building blocks in understanding climate.

Miskolczi proved that the Greenhouse Effect has no Effect

Miskolczi claimed that the greenhouse effect was true. He also claimed that more CO2 was balanced out by a corresponding decrease in water vapor. See the Miskolczi series for a tedious refutation of his paper that was based on imaginary laws of thermodynamics and questionable experimental evidence.

Once again, it is important to be able to separate out two ideas. Is the greenhouse effect false? Or is the greenhouse effect true but wiped out by a feedback?

If you don’t care – so long as you get the right result – you will be in ‘good’ company (well, you will join an extremely large company of people). But this blog is about science. Not wishful thinking. Don’t mix the two up..

Convection “Short-Circuits” the Greenhouse Effect

Let’s assume that, regardless of the amount of energy arriving at the earth’s surface, the lapse rate stays constant – so the more heat that arrives, the more heat leaves. That is, the temperature profile stays constant. (It’s a questionable assumption that also impacts the AGW question).

It doesn’t change the fact that with more GHGs, the radiation to space will be from a higher altitude. A higher altitude will be colder. Less radiation to space and so the climate warms – even with this “short-circuit”.

In a climate without convection, the surface temperature will start off higher, and the GHG effect from doubling CO2 will be higher. See Radiative Atmospheres with no Convection.

In summary, this isn’t an argument against the greenhouse effect, this is possibly an argument about feedbacks. The issue about feedbacks is a critical question in AGW, not a critical question for the “greenhouse” effect. Who can say whether the lapse rate will be constant in a warmer world?

Notes

Note 1 – An important exception is O2 absorbing solar radiation high up above the troposphere (lower atmosphere). But O2 does not absorb significant amounts of terrestrial radiation.

Note 2 – 99% of solar radiation has a wavelength <4μm. In these wavelengths, actually about 1/3 of solar radiation is absorbed in the atmosphere. By contrast, most of the terrestrial radiation, with a wavelength >4μm, is absorbed in the atmosphere.

Note 3 – see:

Density, Stability and Motion in Fluids – some basics about instability
Potential Temperature – explaining “potential temperature” and why the “potential temperature” increases with altitude
Temperature Profile in the Atmosphere – The Lapse Rate – lots more about the temperature profile in the atmosphere

Note 4 – see Earth’s Energy Budget – a series on the basics of the energy budget

Note 5 – the “place of emission” is a useful conceptual tool but in reality the emission of radiation takes place from everywhere between the surface and the stratosphere. See Visualizing Atmospheric Radiation – Part Three – Average Height of Emission – the complex subject of where the TOA radiation originated from, what is the “Average Height of Emission” and other questions.

Also, take a look at the complete series: Visualizing Atmospheric Radiation.

Note 6 – the balance between emission and absorption are found in the equations of radiative transfer. These are derived from fundamental physics – see Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations – the equations of radiative transfer including the plane parallel assumption and it’s nothing to do with blackbodies. The fundamental physics is not just proven in the lab, spectral measurements at top of atmosphere and the surface match the calculated values using the radiative transfer equations – see Theory and Experiment – Atmospheric Radiation – real values of total flux and spectra compared with the theory.

Also, take a look at the complete series: Atmospheric Radiation and the “Greenhouse” Effect

Note 7 – this calculation is under the assumption of “all other things being equal”. Of course, in the real climate system, all other things are not equal. However, to understand an effect “pre-feedback” we need to separate it from the responses to the system.

Read Full Post »

If we open an introductory atmospheric physics textbook, we find that the temperature profile in the troposphere (lower atmosphere) is mostly explained by convection. (See for example, Things Climate Science has Totally Missed? – Convection)

We also find that the temperature profile in the stratosphere is mostly determined by radiation. And that the overall energy balance of the climate system is determined by radiation.

Many textbooks introduce the subject of convection in this way:

  • what would the temperature profile be like if there was no convection, only radiation for heat transfer
  • why is the temperature profile actually different
  • how does pressure reduce with height
  • what happens to air when it rises and expands in the lower pressure environment
  • derivation of the “adiabatic lapse rate”, which in layman’s terms is the temperature change when we have relatively rapid movements of air
  • how the real world temperature profile (lapse rate) compares with the calculated adiabatic lapse rate and why

We looked at the last four points in some detail in a few articles:

Density, Stability and Motion in Fluids – some basics about instability
Potential Temperature – explaining “potential temperature” and why the “potential temperature” increases with altitude
Temperature Profile in the Atmosphere – The Lapse Rate – lots more about the temperature profile in the atmosphere

In this article we will look at the first point.

All of the atmospheric physics textbooks I have seen use a very simple model for explaining the temperature profile in a fictitious “radiation only” environment. The simple model is great for giving insight into how radiation travels.

Physics textbooks, good ones anyway, try to use the simplest models to explain a phenomenon.

The simple model, in brief, is the “semi-gray approximation”. This says the atmosphere is completely transparent to solar radiation, but opaque to terrestrial radiation. Its main simplification is a constant absorption with wavelength. This makes the problem nice and simple analytically – which means we can solve the starting equations directly and plot a nice graph of the result.

However, atmospheric absorption is the total opposite of constant. Here is an example of the absorption vs wavelength of a minor “greenhouse” gas:

From Vardavas & Taylor (2007)

From Vardavas & Taylor (2007)

Figure 1

So from time to time I’ve wondered what the “no convection” atmosphere would look like with real GHG absorption lines. I also thought it would be especially interesting to see the effect of doubling CO2 in this fictitious environment.

This article is for curiosity value only, and for helping people understand radiative transfer a little better.

We will use the Matlab program seen in the series Visualizing Atmospheric Radiation. This does a line by line calculation of radiative transfer for all of the GHGs, pulling the absorption data out of the HITRAN database.

I updated the program in a few subtle ways. Mainly the different treatment of the stratosphere – the place where convection stops – was removed. Because, in this fictitious world there is no convection in the lower atmosphere either.

Here is a simulation based on 380 ppm CO2, 1775 ppb CH4, 319 ppb N2O and 50% relative humidity all through the atmosphere. Top of atmosphere was 100 mbar and the atmosphere was divided into 40 layers of equal pressure. Absorbed solar radiation was set to 240 W/m² with no solar absorption in the atmosphere. That is (unlike in the real world), the atmosphere has been made totally transparent to solar radiation.

The starting point was a surface temperature of 288K (15ºC) and a lapse rate of 6.5K/km – with no special treatment of the stratosphere. The final surface temperature was 326K (53ºC), an increase of 38ºC:

[Image: Temp-profile-no-convection-current-GHGs-40-levels-50%RH]

Figure 2

The ocean depth was only 5m. This just helps get to a new equilibrium faster. If we change the heat capacity of a system like this the end result is the same, the only difference is the time taken.

Water vapor was set at a relative humidity of 50%. For these first results I didn’t get the simulation to update the absolute humidity as the temperature changed. So the starting temperature was used to calculate absolute humidity and that mixing ratio was kept constant:

[Image: wv-conc-no-convection-current-GHGs-40-levels-50%RH]

Figure 3

The lapse rate, or temperature drop per km of altitude:

[Image: LapseRate-noconvection-current-GHGs-40-levels-50%RH]

Figure 4

The flux down and flux up vs altitude:

[Image: Flux-noconvection-current-GHGs-40-levels-50%RH]

Figure 5

The top of atmosphere upward flux is 240 W/m² (actually at the 500 day point it was 239.5 W/m²) – the same as the absorbed solar radiation (note 1). The simulation doesn’t “force” the TOA flux to be this value. Instead, any imbalance in flux in each layer causes a temperature change, moving the surface and each part of the atmosphere into a new equilibrium.

A bit more technically for interested readers.. For a given layer we sum:

  • upward flux at the bottom of a layer minus upward flux at the top of a layer
  • downward flux at the top of a layer minus downward flux at the bottom of a layer

This sum equates to the “heating rate” of the layer. We then use the heat capacity and time to work out the temperature change. Then the next iteration of the simulation redoes the calculation.

And even more technically:

  • the upwards flux at the top of a layer = the upwards flux at the bottom of the layer x transmissivity of the layer plus the emission of that layer
  • the downwards flux at the bottom of a layer = the downwards flux at the top of the layer x transmissivity of the layer plus the emission of that layer

End of “more technically”..
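
A minimal sketch of one iteration of that scheme, for a toy grey (wavelength-independent) 10-layer atmosphere – this is not the line-by-line Matlab program used for the figures here, and all the values are illustrative placeholders:

```matlab
nlay = 10; sigma = 5.67e-8; dt = 3600;             % one-hour time step
T     = linspace(285, 220, nlay)';                 % layer temperatures, bottom to top, K
emis  = 0.3 * ones(nlay, 1);                       % layer emissivity (= absorptivity)
trans = 1 - emis;                                  % layer transmissivity
Cp    = 1e7 * ones(nlay, 1);                       % heat capacity of each layer, J/(m^2 K)
up = zeros(nlay+1, 1); down = zeros(nlay+1, 1);    % fluxes at the layer boundaries
up(1)        = sigma * 288^4;                      % upward flux at the surface (bottom boundary)
down(nlay+1) = 0;                                  % no downward longwave at TOA
for i = 1:nlay                                     % sweep upward through the layers
    up(i+1) = up(i) * trans(i) + emis(i) * sigma * T(i)^4;
end
for i = nlay:-1:1                                  % sweep downward through the layers
    down(i) = down(i+1) * trans(i) + emis(i) * sigma * T(i)^4;
end
heating = (up(1:nlay) - up(2:nlay+1)) + (down(2:nlay+1) - down(1:nlay));  % net W/m2 absorbed per layer
T = T + heating * dt ./ Cp;                        % temperature update for the next iteration
```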

Anyway, the main result is the surface is much hotter and the temperature drop per km of altitude is much greater than the real atmosphere. This is because it is “harder” for heat to travel through the atmosphere when radiation is the only mechanism. As the atmosphere thins out, which means less GHGs, radiation becomes progressively more effective at transferring heat. This is why the lapse rate is lower higher up in the atmosphere.

Now let’s have a look at what happens when we double CO2 from its current value (380ppm -> 760 ppm):

[Image: Temp-profile-no-convection-doubled-GHGs-40-levels-50%RH]

Figure 6 – with CO2 doubled instantaneously from 380ppm at 500 days

The final surface temperature is 329.4 K, increased from 326.2 K. This is an increase (no feedback) of 3.2 K.

The “pseudo-radiative forcing” = 18.9 W/m² (which doesn’t include any change to solar absorption). This radiative forcing is the immediate change in the TOA flux. (It isn’t directly comparable to the IPCC standard definition, which is evaluated at the tropopause after the stratosphere has come back into equilibrium – none of these have much meaning in a world without convection).

Let’s also look at the “standard case” of an increase from pre-industrial CO2 of 280 ppm to a doubling of 560 ppm. I ran this one for longer – 1,000 days before doubling CO2 and 2,000 days in total – because the starting point was less in balance. At the start, the TOA flux (outgoing longwave radiation) = 248 W/m². This means the climate was cooling quite a bit with the starting point we gave it.

At 280 ppm CO2, 1775 ppb CH4, 319 ppb N2O and 50% relative humidity (set at the starting point of 288K and 6.5K/km lapse rate), the surface temperature after 1,000 days = 323.9 K. At this point the TOA flux was 240.0 W/m². So overall the climate has cooled from its initial starting point but the surface is hotter.

This might seem surprising at first sight – the climate cools but the surface heats up? It’s simply that the “radiation-only” atmosphere has made it much harder for heat to get out. So the temperature drop per km of height is now much greater than it is in a convection atmosphere. Remember that we started with a temperature profile of 6.5K/km – a typical convection atmosphere.

After CO2 doubles to 560 ppm (and all other factors stay the same, including absolute humidity), the immediate effect is the TOA flux drops to 221 W/m² (once again a radiative forcing of about 19 W/m²). This is because the atmosphere is now even more “resistant” to the escape of heat by radiation. The atmosphere is more opaque and so the average emission of radiation to space moves to a higher and colder part of the atmosphere. Colder parts of the atmosphere emit less radiation than warmer parts of the atmosphere.

After the climate moves back into balance – a TOA flux of 240 W/m² – the surface temperature = 327.0 K – an increase (pre-feedback) of 3.1 K.

Compare this with the standard IPCC “with convection” no-feedback forcing of 3.7 W/m² and a “no feedback” temperature rise of about 1.2 K.

[Image: Temp-profile-no-convection-280-560ppm-CO2-40-levels-50%RH]

Figure 7 – with CO2 doubled instantaneously from 280ppm at 1000 days

Then I introduced a more realistic model with solar absorption by water vapor in the atmosphere (changed parameter ‘solaratm’ in the Matlab program from ‘false’ to ‘true’). Unfortunately this part of the radiative transfer program is not done by radiative transfer, only by a very crude parameterization, just to get roughly the right amount of heating by solar radiation in roughly the right parts of the atmosphere.

The equilibrium surface temperature at 280 ppm CO2 was now “only” 302.7 K (almost 30ºC). Doubling CO2 to 560 ppm created a radiative forcing of 11 W/m², and a final surface temperature of 305.5K – that is, an increase of 2.8K.

Why is the surface temperature lower? Because in the “no solar absorption in the atmosphere” model, all of the solar radiation is absorbed by the ground and has to “fight its way out” from the surface up. Once you absorb solar radiation higher up than the surface, it’s easier for this heat to get out.

Conclusion

One of the common themes of fantasy climate blogs is that the results of radiative physics are invalidated by convection, which “short-circuits” radiation in the troposphere. No one in climate science is confused about the fact that convection dominates heat transfer in the lower atmosphere.

We can see in this set of calculations that when we have a radiation-only atmosphere the surface temperature is a lot higher than any current climate – at least when we consider a “one-dimensional” climate.

Of course, the whole world would be different and there are many questions about the amount of water vapor and the effect of circulation (or lack of it) on moving heat around the surface of the planet via the atmosphere and the ocean.

When we double CO2 from its pre-industrial value the radiative forcing is much greater in a “radiation-only atmosphere” than in a “radiative-convective atmosphere”, with the pre-feedback temperature rise 3ºC vs 1ºC.

So it is definitely true that convection short-circuits radiation in the troposphere. But the whole climate system can only gain and lose energy by radiation and this radiation balance still has to be calculated. That’s what current climate models do.

It’s often stated as a kind of major simplification (a “teaching model”) that with increases in GHGs the “average height of emission” moves up, and therefore the emission is from a colder part of the atmosphere. This idea is explained in more detail and with fewer simplifications in Visualizing Atmospheric Radiation – Part Three – Average Height of Emission – the complex subject of where the TOA radiation originated from, what is the “Average Height of Emission” and other questions.

A legitimate criticism of current atmospheric physics is that convection is poorly understood in contrast to subjects like radiation. This is true. And everyone knows it. But it’s not true to say that convection is ignored. And it’s not true to say that because “convection short-circuits radiation” in the troposphere that somehow more GHGs will have no effect.

On the other hand I don’t want to suggest that because more GHGs in the atmosphere mean that there is a “pre-feedback” temperature rise of about 1K, that somehow the problem is all nicely solved. On the contrary, climate is very complicated. Radiation is very simple by comparison.

All the standard radiative-convective calculation says is: “all other things being equal, a doubling of CO2 from pre-industrial levels would lead to a 1K increase in surface temperature”.

All other things are not equal. But the complication is not that somehow atmospheric physics has just missed out convection. Hilarious. Of course, I realize most people learn their criticisms of climate science from people who have never read a textbook on the subject. Surprisingly, this doesn’t lead to quality criticism..

On more complexity  – I was also interested to see what happens if we readjust absolute humidity due to the significant temperature changes, i.e. we keep relative humidity constant. This led to some surprising results, so I will post them in a followup article.

Notes

Note 1 – The boundary conditions are important if you want to understand radiative heat transfer in the atmosphere.

First of all, the downward longwave radiation at TOA (top of atmosphere) = 0. Why? Because there is no “longwave”, i.e., terrestrial radiation, from outside the climate system. So at the top of the atmosphere the downward flux = 0. As we move down through the atmosphere the flux gradually increases. This is because the atmosphere emits radiation. We can divide up the atmosphere into fictitious “layers”. This is how all numerical (finite element analysis) programs actually work. Each layer emits and each layer also absorbs. The balance depends on the temperature of the source radiation vs the temperature of the layer of the atmosphere we are considering.

At the bottom of the atmosphere, i.e., at the surface, the upwards longwave radiation is the surface emission. This emission is given by the Stefan-Boltzmann equation with an emissivity of 1.0 if we consider the surface as a blackbody which is a reasonable approximation for most surface types – for more on this, see Visualizing Atmospheric Radiation – Part Thirteen – Surface Emissivity – what happens when the earth’s surface is not a black body – useful to understand seeing as it isn’t..

At TOA, the upwards emission needs to equal the absorbed solar radiation, otherwise the climate system has an imbalance – either cooling or warming.

Read Full Post »

As a friend of mine in Florida says:

You can’t kill stupid, but you can dull it with a 4×2

Some ideas are so comically stupid that I thought there was no point writing about them. And yet, one after another, people who can type are putting forward these ideas on this blog.. At first I wondered if I was the object of a practical joke. Some kind of parody. Perhaps the joke is on me. But, just in case I was wrong about the practical joke..

 

If you pick up a textbook on heat transfer that includes a treatment of radiative heat transfer you find no mention of Arrhenius.

If you pick up a textbook on atmospheric physics none of the equations come from Arrhenius.

Yet there is a steady stream of entertaining “papers” which describe “where Arrhenius went wrong”, “Arrhenius and his debates with Fourier”. Who cares?

Likewise, if you study equations of motion in a rotating frame there is no discussion of where Newton went wrong, or where he got it right, or debates he got right or wrong with contemporaries. Who knows? Who cares?

History is fascinating. But if you want to study physics you can study it pretty well without reading about obscure debates between people who were in the formulation stages of the field.

Here are the building blocks of atmospheric radiation:

  • The emission of radiation – described by Nobel prize winner Max Planck’s equation and modified by the material property called emissivity (this is wavelength dependent)
  • The absorption of radiation by a surface – described by the material property called absorptivity (this is wavelength dependent and equal at the same wavelength and direction to emissivity)
  • The Beer-Lambert law of absorption of radiation by a gas
  • The spectral absorption characteristics of gases – currently contained in the HITRAN database – and based on work carried out over many decades and written up in journals like Journal of Quantitative Spectroscopy and Radiative Transfer
  • The theory of radiative transfer – the Schwarzschild equation – which was well documented by Nobel prize winner Subrahmanyan Chandrasekhar in his 1952 book Radiative Transfer (and by many physicists since)

The steady stream of stupidity will undoubtedly continue, but if you are interested in learning about science then you can rule out blogs that promote papers which earnestly explain “where Arrhenius went wrong”.

Hit them with a 4 by 2.

Or, ask the writer where Subrahmanyan Chandrasekhar went wrong in his 1952 work Radiative Transfer. Ask the writer where Richard M. Goody went wrong. He wrote the seminal Atmospheric Radiation: Theoretical Basis in 1964.

They won’t even know these books exist and will have never read them. These books contain equations that are thoroughly proven over the last 100 years. There is no debate about them in the world of physics. In the world of fantasy blogs, maybe.

There is also a steady stream of people who believe an idea yet more amazing. Somehow basic atmospheric physics is proven wrong because of the last 15 years of temperature history.

The idea seems to be:

More CO2 is believed to have some radiative effect in the climate because of the last 100 years of temperature history; climate scientists saw some link and tried to explain it using CO2, but now that there has been no significant temperature increase for the last x years this obviously demonstrates the original idea was false..

If you think this, please go and find a piece of 4×2 and ask a friend to hit you across the forehead with it. Repeat. I can’t account for this level of stupidity but I have seen that it exists.

An alternative idea, that I will put forward, one that has evidence, is that scientists discovered that they can reliably predict:

  • emission of radiation from a surface
  • emission of radiation from a gas
  • absorption of radiation by a surface
  • absorption of radiation by a gas
  • how to add up, subtract, divide and multiply, raise numbers to the power of, and other ninja mathematics

The question I have for the people with these comical ideas:

Do you think that decades of spectroscopy professionals have just failed to measure absorption? Their experiments were some kind of farce? No one noticed they made up all the results?

Do you think Max Planck was wrong?

Is it possible that climate is slightly complicated and that temperature history relies upon more than one variable?

Did someone teach you that the absorption and emission of radiation was only “developed” by someone analyzing temperature vs CO2 since 1970 and not a single scientist thought to do any other measurements? Why did you believe them?

Bring out the 4×2.

Note – this article is a placeholder so I don’t have to bother typing out a subset of these points for the next entertaining commenter..

Update July 10th with the story of Fred the Charlatan

Let’s take the analogy of a small boat crossing the Atlantic.

Analogies don’t prove anything, they are for illustration. For proof, please review Theory and Experiment – Atmospheric Radiation.

We’ve done a few crossings and it’s taken 45 days, 42 days and 46 days (I have no idea what the right time is, I’m not a nautical person).

We measure the engine output – the torque of the propellers. We want to get across quicker. So Fred the engine guy makes a few adjustments and we remeasure the torque at 5% higher. We also do Fred’s standardized test, which is to zip across a local sheltered bay with no currents, no waves and no wind – the time taken for Fred’s standardized test is 4% faster. Nice.

So we all set out on our journey across the Atlantic. Winds, rain, waves, ocean currents. We have our books to read, Belgian beer and red wine and the time flies. Oh no, when we get to our final destination, it’s actually taken 47 days.

Clearly Fred is some kind of charlatan! No need to check his measurements or review the time across the bay. We didn’t make it across the Atlantic in less time and clearly the ONLY variable involved in that expedition was the output of the propeller.

Well, there’s no point trying to use more powerful engines to get across the Atlantic (or any ocean) faster. Torque has no relationship to speed. Case closed.

Analogy over.

 

Read Full Post »

In Part Seven – GCM I  through Part Ten – GCM IV we looked at GCM simulations of ice ages.

These were mostly attempts at “glacial inception”, that is, starting an ice age. But we also saw a simulation of the last 120 kyrs which attempted to model a complete ice age cycle including the last termination. As we saw, there were lots of limitations..

One condition for glacial inception, “perennial snow cover at high latitudes”, could be produced with a high-resolution coupled atmosphere-ocean GCM (AOGCM), but that model did suffer from the problem of having a cold bias at high latitudes.

The (reasonably accurate) simulation of a whole cycle including inception and termination came by virtue of having the internal feedbacks (ice sheet size & height and CO2 concentration) prescribed.

Just to be clear to new readers, these comments shouldn’t indicate that I’ve uncovered some secret that climate scientists are trying to hide, these points are all out in the open and usually highlighted by the authors of the papers.

In Part Nine – GCM III, one commenter highlighted a 2013 paper by Ayako Abe-Ouchi and co-workers, where the journal in question, Nature, had quite a marketing pitch on the paper. I made brief comment on it in a later article in response to another question, including that I had emailed the lead author asking a question about the modeling work (how was a 120 kyr cycle actually simulated?).

Most recently, in Eighteen – “Probably Nonlinearity” of Unknown Origin, another commented highlighted it, which rekindled my enthusiasm, and I went back and read the paper again. It turns out that my understanding of the paper had been wrong. It wasn’t really a GCM paper at all. It was an ice sheet paper.

There is a whole field of papers on ice sheet models deserving attention.

GCM review

Let’s review GCMs first of all to help us understand where ice sheet models fit in the hierarchy of climate simulations.

GCMs consist of a number of different modules coupled together. The first GCMs were mostly “atmospheric GCMs” = AGCMs, and either they had a “swamp ocean” = a mixed layer of fixed depth, or had prescribed ocean boundary conditions set from an ocean model or from an ocean reconstruction.

Less commonly, unless you worked just with oceans, there were ocean GCMs with prescribed atmospheric boundary conditions (prescribed heat and momentum flux from the atmosphere).

Then coupled atmosphere-ocean GCMs came along = AOGCMs. It was a while before these two parts matched up to the point where there was no “flux drift”, that is, no disappearing heat flux from one part of the model.

Why is it so difficult to get these two models working together? One important reason comes down to the time-scales involved, which result from the difference in heat capacity and momentum of the two parts of the climate system. The heat capacity and momentum of the ocean are much, much higher than those of the atmosphere.

And when we add ice sheet models – ISMs – we have yet another time scale to consider.

  • the atmosphere changes in days, weeks and months
  • the ocean changes in years, decades and centuries
  • the ice sheets change in centuries, millennia and tens of millennia

This creates a problem for climate scientists who want to apply the fundamental equations of heat, mass & momentum conservation along with parameterizations for “stuff not well understood” and “stuff quite-well-understood but whose parameters are sub-grid”. To run a high resolution AOGCM for a 1,000 years simulation might consume 1 year of supercomputer time and the ice sheet has barely moved during that period.

Ice Sheet Models

Scientists who study ice sheets have a whole bunch of different questions. They want to understand how the ice sheets developed.

What makes them grow, shrink, move, slide, melt.. What parameters are important? What parameters are well understood? What research questions are most deserving of attention? And:

Does our understanding of ice sheet dynamics allow us to model the last glacial cycle?

To answer that question we need a model for ice sheet dynamics, and to that model we need to apply boundary conditions from other “less interesting” models, like GCMs. As a result, there are a few approaches to setting the boundary conditions so we can do our interesting work of modeling ice sheets.

Before we look at that, let’s look at the dynamics of ice sheets themselves.

Ice Sheet Dynamics

First, in the theme of the last paper, Eighteen – “Probably Nonlinearity” of Unknown Origin, here is Marshall & Clark 2002:

The origin of the dominant 100-kyr ice-volume cycle in the absence of substantive radiation forcing remains one of the most vexing questions in climate dynamics

We can add that to the 34 papers reviewed in that previous article. This paper by Marshall & Clark is definitely a good quick read for people who want to understand ice sheets a little more.

Ice doesn’t conduct a lot of heat – it is a very good insulator. So the important things with ice sheets happen at the top and the bottom.

At the top, ice melts, and the water refreezes, runs off or evaporates. In combination, the loss is called ablation. Then we have precipitation that adds to the ice sheet. So the net effect determines what happens at the top of the ice sheet.

At the bottom, when the ice sheet is very thin, heat can be conducted through from the atmosphere to the base and make it melt – if the atmosphere is warm enough. As the ice sheet gets thicker, very little heat is conducted through. However, there are two important sources of heat for basal heating, which can result in “basal sliding”. One source is geothermal energy. This is around 0.1 W/m², which is very small unless we are dealing with an insulating material (like ice) and lots of time (like ice sheets). The other source is the shear stress in the ice sheet, which can create a lot of heat via the mechanics of deformation.
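
To see why 0.1 W/m² matters given enough time, an order-of-magnitude sketch – assuming, unrealistically, that all of the geothermal heat goes into melting ice at the base:

```matlab
q   = 0.1;              % geothermal heat flux, W/m2
t   = 1e4 * 3.15e7;     % 10,000 years expressed in seconds
Lf  = 3.34e5;           % latent heat of fusion of ice, J/kg
rho = 917;              % density of ice, kg/m3
melted_depth = q * t / (Lf * rho)   % ~100 m of ice melted per square metre over 10 kyr
```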

Once the ice sheet is able to start sliding, the dynamics create a completely different result compared to an ice sheet “cold-pinned” to the rock underneath.

Some comments from Marshall and Clark:

Ice sheet deglaciation involves an amount of energy larger than that provided directly from high-latitude radiation forcing associated with orbital variations. Internal glaciologic, isostatic, and climatic feedbacks are thus essential to explain the deglaciation.

..Moreover, our results suggest that thermal enabling of basal flow does not occur in response to surface warming, which may explain why the timing of the Termination II occurred earlier than predicted by orbital forcing [Gallup et al., 2002].

Results suggest that basal temperature evolution plays an important role in setting the stage for glacial termination. To confirm this hypothesis, model studies need improved basal process physics to incorporate the glaciological mechanisms associated with ice sheet instability (surging, streaming flow).

..Our simulations suggest that a substantial fraction (60% to 80%) of the ice sheet was frozen to the bed for the first 75 kyr of the glacial cycle, thus strongly limiting basal flow. Subsequent doubling of the area of warm-based ice in response to ice sheet thickening and expansion and to the reduction in downward advection of cold ice may have enabled broad increases in geologically- and hydrologically-mediated fast ice flow during the last deglaciation.

Increased dynamical activity of the ice sheet would lead to net thinning of the ice sheet interior and the transport of large amounts of ice into regions of intense ablation both south of the ice sheet and at the marine margins (via calving). This has the potential to provide a strong positive feedback on deglaciation.

The timescale of basal temperature evolution is of the same order as the 100-kyr glacial cycle, suggesting that the establishment of warm-based ice over a large enough area of the ice sheet bed may have influenced the timing of deglaciation. Our results thus reinforce the notion that at a mature point in their life cycle, 100-kyr ice sheets become independent of orbital forcing and affect their own demise through internal feedbacks.

[Emphasis added]

In this article we will focus on a 2007 paper by Ayako Abe-Ouchi, T Segawa & Fuyuki Saito. This paper is essentially the same modeling approach used in Abe-Ouchi’s 2013 Nature paper.

The Ice Model

The ice sheet model has a time step of 2 years, on a 1° latitude × 1° longitude grid from 30°N to the north pole, with 20 vertical levels.

Equations for the ice sheet include sliding velocity, ice sheet deformation, the heat transfer through the lithosphere, the bedrock elevation and the accumulation rate on the ice sheet.

Note, there is a reference that some of the model is based on work described in Sensitivity of Greenland ice sheet simulation to the numerical procedure employed for ice sheet dynamics, F Saito & A Abe-Ouchi, Ann. Glaciol., (2005) – but I don’t have access to this journal. (If anyone does, please email the paper to me at scienceofdoom – you know what goes here – gmail.com).

How did they calculate the accumulation on the ice sheet? There is an equation:

Acc = Aref × (1 + dP)^Ts

Ts is the surface temperature, dP is a measure of aridity and Aref is a reference value for accumulation. This is a highly parameterized method of calculating how much thicker or thinner the ice sheet is growing. The authors reference Marshall et al 2002 for this equation, and that paper is very instructive in how poorly understood ice sheet dynamics actually are.

Here is one part of the relevant section in Marshall et al 2002:

..For completeness here, note that we have also experimented with spatial precipitation patterns that are based on present-day distributions.

Under this treatment, local precipitation rates diminish exponentially with local atmospheric cooling, reflecting the increased aridity that can be expected under glacial conditions (Tarasov and Peltier, 1999).

Paleo-precipitation under this parameterization has the form:

P(λ,θ,t) = Pobs(λ,θ) × (1 + dP)^ΔT(λ,θ,t) × exp[βp·max(hs(λ,θ,t) − ht, 0)]       (18)

The parameter dP in this equation represents the percentage of drying per 1°C; Tarasov and Peltier (1999) choose a value of 3% per °C, i.e. dP = 0.03.

[Emphasis added, color added to highlight the relevant part of the equation]

So dp is a parameter that attempts to account for increasing aridity in colder glacial conditions, and in their 2002 paper Marshall et al describe it as 1 of 4 “free parameters” that are investigated to see what effect they have on ice sheet development around the LGM.

Abe-Ouchi and co-authors took a slightly different approach that certainly seems like an improvement over Marshall et al 2002:

Abe-Ouchi-eqn11

So their value of aridity is just a linear function of ice sheet area – from zero to a fixed value, rather than a fixed value no matter the ice sheet size.
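
To make this accumulation treatment concrete, here is a minimal sketch of the parameterization (the variable names and example numbers are mine, not from the papers; the Abe-Ouchi variant is represented as a simple linear ramp of the aridity parameter with ice sheet area, up to an assumed reference area):

```python
def accumulation(T_s, A_ref, dP):
    """Parameterized accumulation: Acc = A_ref * (1 + dP)**T_s.

    T_s   - surface temperature term (effectively a change from the reference
            climate, negative when colder)
    dP    - fractional drying per degree of cooling (e.g. 0.03)
    A_ref - reference accumulation rate
    """
    return A_ref * (1.0 + dP) ** T_s

def aridity_linear_in_area(ice_area, area_ref, dP_max):
    """Abe-Ouchi-style aridity: zero with no ice, rising linearly with
    ice sheet area to dP_max at an assumed reference area (illustrative only)."""
    return dP_max * min(ice_area / area_ref, 1.0)

# Example: 10 degrees colder than the reference climate, 3% drying per degree
print(accumulation(T_s=-10.0, A_ref=1.0, dP=0.03))   # ~0.74 of the reference accumulation
```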

How is Ts calculated? That comes, in a way, from the atmospheric GCM, but probably not in a way that readers might expect. So let’s have a look at the GCM then come back to this calculation of Ts.

Atmospheric GCM Simulations

There were three groups of atmospheric GCM simulations, with parameters selected to try and tease out which factors have the most impact.

Group One: high resolution GCM – 1.1º latitude and longitude, 20 atmospheric vertical levels, with fixed sea surface temperatures. So there is no ocean model; the ocean temperatures are prescribed. Within this group, four experiments:

  • A control experiment – modern day values
  • LGM (last glacial maximum) conditions for CO2 (note 1) and orbital parameters with
    • no ice
    • LGM ice extent but zero thickness
    • LGM ice extent and LGM thickness

So the idea is to compare results with and without the actual ice sheet, to see how much impact orbital and CO2 values have vs the effect of the ice sheet itself – and then, for the ice sheet, to see whether the albedo or the elevation has the most impact. Why the elevation? Well, if an ice sheet is 1 km thick then the surface temperature will be something like 6ºC colder. (Exactly how much colder is an interesting question because we don’t know what the lapse rate actually was). There will also be an effect on atmospheric circulation – you’ve stuck a “mountain range” in the path of the wind, so this changes the circulation.
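
The elevation part of that comparison is just a lapse rate correction. Here is a trivial sketch (the lapse rate values are ones mentioned in the post; everything else is illustrative):

```python
def temp_on_ice_sheet(T_sea_level, elevation_m, lapse_rate_K_per_km):
    """Surface temperature on top of an ice sheet, assuming a constant lapse rate."""
    return T_sea_level - lapse_rate_K_per_km * elevation_m / 1000.0

# A 1 km thick ice sheet under different assumed lapse rates:
for lapse in (5.0, 6.5, 9.0):
    print(lapse, temp_on_ice_sheet(T_sea_level=270.0, elevation_m=1000.0,
                                   lapse_rate_K_per_km=lapse))
```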

Each of the four simulations was run for 11 or 13 years and the last 10 years’ results used:

From Abe-Ouchi et al 2007

Figure 1

It’s clear from this simulation that the full result (left graphic) is mostly caused by the ice sheet (right graphic) rather than CO2, orbital parameters and the SSTs (middle graphic). And the next figure in the paper shows the breakdown between the albedo effect and the height of the ice sheet:

From Abe-Ouchi et al 2007

Figure 2 – same color legend as figure 1

Now, a lapse rate of 5 K/km was used. What happens if a lapse rate of, say, 9 K/km is used instead? There were no simulations run with different lapse rates, but the authors comment:

..Other lapse rates could be used which vary depending on the altitude or location, while a lapse rate larger than 7 K/km or smaller than 4 K/km is inconsistent with the overall feature. This is consistent with the finding of Krinner and Genthon (1999), who suggest a lapse rate of 5.5 K/km, but is in contrast with other studies which have conventionally used lapse rates of 8 K/km or 6.5 K/km to drive the ice sheet models..

Group Two – medium resolution GCM 2.8º latitude and longitude and 11 atmospheric vertical levels, with a “slab ocean” – this means the ocean is treated as one temperature through the depth of some fixed layer, like 50m. So it is allowing the ocean to be there as a heat sink/source responding to climate, but no heat transfer through to a deeper ocean.

There were five simulations in this group, one control (modern day everything) and four with CO2 & orbital parameters at the LGM:

  • no ice sheet
  • LGM ice extent, but flat
  • 12 kyrs ago ice extent, but flat
  • 12 kyrs ago ice extent and height

So this group takes a slightly more detailed look at ice sheet impact. Not surprisingly the simulation results give intermediate values for the ice sheet extent at 12 kyrs ago.

Group Three – medium resolution GCM as in group two, with ice sheets either at present day or LGM extent, and nine simulations covering different orbital values and different CO2 values (present day, 280 ppm or 200 ppm).

There was also some discussion of the impact of different climate models. I found this fascinating because the difference between CCSM and the other models appears to be as great as the difference in figure 2 (above) which identifies the albedo effect as more significant than the lapse rate effect:

From Abe-Ouchi et al 2007

Figure 3

And this naturally has me wondering about how much significance to put on the GCM simulation results shown in the paper. The authors also comment:

Based on these GCM results we conclude there remains considerable uncertainty over the actual size of the albedo effect.

Given there is also uncertainty over the lapse rate that actually occurred, it seems there is considerable uncertainty over everything.

Now let’s return to the ice sheet model, because so far we haven’t seen any output from the ice sheet model.

GCM Inputs into the Ice Sheet Model

The equation which calculates the change in accumulation on the ice sheet used a fairly arbitrary parameter dp, with (1+dp) raised to the power of Ts.

The ice sheet model has a 2-year time step. The GCM results don’t provide Ts across the surface grid every 2 years – they are snapshots for certain conditions. The ice sheet model uses this calculation for Ts:

Ts = Tref + ΔTice + ΔTco2 + ΔTinsol + ΔTnonlinear

Tref is the reference temperature, which is present-day climatology. The other ΔT (change in temperature) values are basically a linear interpolation from two values of the GCM simulations. Here is the ΔTCO2 value:

Abe-Ouchi-2007-eqn6

 

So think of it like this – from some snapshot GCM simulations we have found Ts at one higher value of CO2 and at one lower value. We plot a graph with CO2 on the x-axis and Ts on the y-axis, put just these two points on the graph, and draw a straight line between them.

To calculate Ts at, say, 50 kyrs ago we look up the CO2 value at 50 kyrs from ice core data, and read off the value of ΔTCO2 from the straight line on the graph.

Likewise for the other parameters. Here is ΔTinsol:

Abe-Ouchi-eqn7

 

So the method is extremely basic. Of course the model needs something..
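
A minimal sketch of that interpolation scheme (the function and all the numbers are mine and purely illustrative; the point is just that each ΔT term is read off a straight line between two GCM snapshot experiments):

```python
import numpy as np

def delta_T_from_snapshots(x, x_low, x_high, dT_low, dT_high):
    """Linearly interpolate a temperature change between two GCM snapshot
    experiments, e.g. between a low-CO2 run and a high-CO2 run."""
    return float(np.interp(x, [x_low, x_high], [dT_low, dT_high]))

# Illustrative only: ΔT_CO2 at 220 ppm, given snapshots at 200 ppm and 280 ppm
dT_co2 = delta_T_from_snapshots(220.0, 200.0, 280.0, dT_low=-5.0, dT_high=0.0)

# Ts = Tref + ΔTice + ΔTco2 + ΔTinsol + ΔTnonlinear  (the paper's equation)
T_ref, dT_ice, dT_insol, dT_nonlinear = 270.0, -3.0, -1.0, 0.0   # made-up numbers
T_s = T_ref + dT_ice + dT_co2 + dT_insol + dT_nonlinear
print(T_s)
```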

Now, given that we have inputs for accumulation on the ice sheet, the ice sheet model can run. Here are the results. The third graph (3) is the sea level from proxy results and so is our best estimate of reality, with (4) providing model outputs for different parameters of d0 (“desertification” or aridity) and lapse rate, and (5) providing outputs for different parameters of albedo and lapse rate:

From Abe-Ouchi et al 2007

Figure 4

There are three main points of interest.

Firstly, small changes in the parameters cause huge changes in the final results. The idea of aridity over ice sheets as just a linear function of ice sheet size is very questionable in itself. The idea of a constant lapse rate is extremely questionable. Together, using values that appear realistic, we can model much less ice sheet growth (sea level drop) or many times greater ice sheet growth than actually occurred.

Secondly, notice that the time of maximum ice sheet extent (lowest sea level) in the realistic results has sea level starting to rise around 12 kyrs ago, rather than the actual 18 kyrs ago. This might be because orbital factors – i.e., high-latitude summer insolation – were at quite a low level when the last ice age actually finished, yet have quite an impact in the model. Of course, we have covered this “problem” in a few previous articles in this series. In the context of this model it might be that the impact of the southern hemisphere leading the globe out of the last ice age is completely missing.

Thirdly – this might be clear to some people, but for many readers new to this kind of model it won’t be obvious – the inputs for the model come from snapshots of the actual history. The model doesn’t simulate the actual start and end of the last ice age “by itself”. We feed into the GCM a few CO2 values. We feed into the GCM a few ice sheet extents and heights that (as best as can be reconstructed) actually occurred. The GCM gives us some temperature values for these snapshot conditions.

In the case of this ice sheet model, every 2 years (each time step of the ice sheet model) we “look up” the actual value of ice sheet extent and atmospheric CO2 and we linearly interpolate the GCM output temperatures for the current year. And then we crudely parameterize these values into some accumulation rate on the ice sheet.

Conclusion

This is our first foray into ice sheet models. It should be clear that the results are interesting but we are at a very early stage in modeling ice sheets.

The problems are:

  • the computational load required to run a GCM coupled with an ice sheet model over 120 kyrs is much too high, so it can’t be done
  • the resulting tradeoff uses a few GCM snapshot values to feed linearly interpolated temperatures into a parameterized accumulation equation
  • the effect of lapse rate on the results is extremely large and the actual value for lapse rate over ice sheets is very unlikely to be a constant and is also not known
  • our understanding of the fundamental equations for ice sheets is still at an early stage, as readers can see by reviewing the first two papers below, especially the second one

Articles in this Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factors of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

References

Basal temperature evolution of North American ice sheets and implications for the 100-kyr cycle, SJ Marshall & PU Clark, GRL (2002) – free paper

North American Ice Sheet reconstructions at the Last Glacial Maximum, SJ Marshall, TS James, GKC Clarke, Quaternary Science Reviews (2002) – free paper

Climatic Conditions for modelling the Northern Hemisphere ice sheets throughout the ice age cycle, A Abe-Ouchi, T Segawa, and F Saito, Climate of the Past (2007) – free paper

Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume, Ayako Abe-Ouchi, Fuyuki Saito, Kenji Kawamura, Maureen E. Raymo, Jun’ichi Okuno, Kunio Takahashi & Heinz Blatter, Nature (2013) – paywall paper

Notes

Note 1 – the value of CO2 used in these simulations was 200 ppm, while CO2 at the LGM was actually 180 ppm. Apparently this value of 200 ppm was used in a major inter-comparison project (the PMIP), but I don’t know the reason why. PMIP = Paleoclimate Modelling Intercomparison Project, Joussaume and Taylor, 1995.

Read Full Post »

In Thirteen – Terminator II we had a cursory look at the different “proxies” for temperature and ice volume/sea level. And we’ve considered some issues around dating of proxies.

There are two main proxies we have used so far to take a look back into the ice ages:

  • δ18O in deep ocean cores in the shells of foraminifera – to measure ice volume
  • δ18O in the ice in ice cores (Greenland and Antarctica) – to measure temperature

Now we want to take a closer look at the proxies themselves. It’s a necessary subject if we want to understand ice ages, because the proxies don’t actually measure what they might be assumed to measure. This is a separate issue from the dating: of ice; of gas trapped in ice; and of sediments in deep ocean cores.

If we take samples of ocean water, H2O, and measure the proportion of the oxygen isotopes, we find (Ferronsky & Polyakov 2012):

  • 16O – 99.757 %
  • 17O –   0.038%
  • 18O –   0.205%

There is another significant water isotope, Deuterium – aka, “heavy hydrogen” – where the water molecule is HDO, also written as 1H2HO – instead of H2O.

The processes that affect ratios of HDO are similar to the processes that affect the ratios of H218O, and consequently either isotope ratio can provide a temperature proxy for ice cores. A value of δD equates, very roughly, to 10x a value of δ18O, so mentally you can use this ratio to convert from δ18O to δD (see note 1).

In Note 2 I’ve included some comments on the Dole effect, which is the relationship between the ocean isotopic composition and the atmospheric oxygen isotopic composition. It isn’t directly relevant to the discussion of proxies here, because the ocean is the massive reservoir of 18O and the amount in the atmosphere is very small in comparison (1/1000). However, it might be of interest to some readers and we will return to the atmospheric value later when looking at dating of Antarctic ice cores.

Terminology and Definitions

The 18O/16O ratio of ocean water is about 0.205%, that is, 2.05 ‰. This is turned into a reference, known as Vienna Standard Mean Ocean Water (VSMOW). So with respect to VSMOW, δ18O of ocean water = 0. It’s just a definition. The change is shown as δ, the Greek letter delta, very commonly used in maths and physics to mean “change”.

The values of isotopes are usually expressed in terms of changes from the norm, that is, from the absolute standard. And because the changes are quite small they are expressed as parts per thousand = per mil = ‰, instead of percent, %.

So as δ18O changes from 0 (ocean water) to -50‰ (typically the lowest value of ice in Antarctica), the proportion of 18O goes from 0.20% (2.0‰) to 0.19% (1.9‰).

If the terminology is confusing think of the above example as a 5% change. What is 5% of 20? Answer is 1; and 20 – 1 = 19. So the above example just says if we reduce the small amount, 2 parts per thousand of 18O by 5% we end up with 1.9 parts per thousand.
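
For anyone who prefers the definition spelled out in code, here is a minimal sketch (the VSMOW ratio is the commonly quoted value; nothing here comes from a specific paper):

```python
R_VSMOW = 0.0020052   # 18O/16O ratio of Vienna Standard Mean Ocean Water

def delta_18O(R_sample, R_standard=R_VSMOW):
    """delta-18O in per mil: the fractional deviation of the sample's 18O/16O
    ratio from the standard, multiplied by 1000."""
    return (R_sample / R_standard - 1.0) * 1000.0

print(delta_18O(R_VSMOW))          # 0.0 per mil: ocean water, by definition
print(delta_18O(R_VSMOW * 0.95))   # -50 per mil: typical of Antarctic ice
```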

Here is a graph that links the values together:

From Hoefs 2009

Figure 1

Fractionation, or Why Ice Sheets are So Light

We’ve seen this graph before – the δ18O (of ice) in Greenland (NGRIP) and Antarctica (EDML) ice sheets against time:

From EPICA 2006

Figure 2

Note that the values of δ18O from Antarctica (EDML – top line) through the last 150 kyrs are from about -40 to -52 ‰. And the values from Greenland (NGRIP – black line in middle section) are from about -32 to -44 ‰.

There are some standard explanations around – like this link – but I’m not sure the graphic alone quite explains it, unless you understand the subject already..

If we measure the 18O concentration of a body of water, then we measure the 18O concentration of the water vapor above it, we find that the water vapor value has 18O at about -10 ‰ compared with the body of water. We write this as δ18O = -10 ‰. That is, the water vapor is a little lighter, isotopically speaking, than the ocean water.

The processes (fractionation) that cause this are easy to reproduce in the lab:

  • during evaporation, the lighter isotopes evaporate preferentially
  • during precipitation, the heavier isotopes precipitate preferentially

(See note 3).

So let’s consider the journey of a parcel of water vapor evaporated somewhere near the equator. The water vapor is a little reduced in 18O (compared with the ocean) due to the evaporation process. As the parcel of air travels away from the equator it rises and cools and some of the water vapor condenses. The initial rain takes proportionately more 18O than is in the parcel – so the parcel of air gets depleted in 18O. It keeps moving away from the equator, the air gets progressively colder, it keeps raining out, and the further it goes the less the proportion of 18O remains in the parcel of air. By the time precipitation forms in polar regions the water or ice is very light isotopically, that is, δ18O is the most negative it can get.
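
That progressive rain-out is usually described as a Rayleigh distillation. Here is a minimal sketch (the fractionation factor is held constant for simplicity, whereas in reality it increases as the air gets colder):

```python
def vapor_delta_18O(delta_initial, f_remaining, alpha=1.01):
    """delta-18O (per mil) of the vapor remaining in a parcel after rain-out,
    Rayleigh-style: R = R0 * f**(alpha - 1), where f is the fraction of the
    original vapor still present and alpha is the liquid/vapor fractionation factor."""
    R0 = 1.0 + delta_initial / 1000.0
    R = R0 * f_remaining ** (alpha - 1.0)
    return (R - 1.0) * 1000.0

# Vapor starts at -10 per mil (relative to the ocean) and progressively rains out
for f in (1.0, 0.5, 0.2, 0.05, 0.01):
    print(f, round(vapor_delta_18O(-10.0, f), 1))
```

With most of the vapor rained out, the remaining vapor – and hence the final precipitation – ends up tens of per mil lighter than the ocean, which is the right order of magnitude for the Greenland and Antarctic values in figure 2.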

As a very simplistic idea of water vapor transport, this explains why the ice sheets in Greenland and Antarctica have isotopic values that are very low in 18O. Let’s take a look at some data to see how well such a simplistic idea holds up..

The isotopic composition of precipitation:

From Gat 2010

Figure 3 – Click to Enlarge

We can see the broad result represented quite well – the further we are in the direction of the poles the lower the isotopic composition of precipitation.

In contrast, when we look at local results in some detail we don’t see such a tidy picture. Here are some results from Rindsberger et al (1990) from central and northern Israel:

From Rindsberger et al 1990

Figure 4

From Rindsberger et al 1990

Figure 5

The authors comment:

It is quite surprising that the seasonally averaged isotopic composition of precipitation converges to a rather well-defined value, in spite of the large differences in the δ value of the individual precipitation events which show a range of 12‰ in δ18O.. At Bet-Dagan.. from which we have a long history.. the amount weighted annual average is δ18O = −5.07 ‰ ± 0.62 ‰ for the 19 year period of 1965-86. Indeed the scatter of ± 0.6‰ in the 19-year long series is to a significant degree the result of a 4-year period with lower δ values, namely the years 1971-75 when the averaged values were δ18O = −5.7 ‰ ± 0.2 ‰. That period was one of worldwide climate anomalies. Evidently the synoptic pattern associated with the precipitation events controls both the mean isotopic values of the precipitation and its variability.

The seminal 1964 paper by Willi Dansgaard is well worth a read for a good overview of the subject:

As pointed out.. one cannot use the composition of the individual rain as a direct measure of the condensation temperature. Nevertheless, it has been possible to show a simple linear correlation between the annual mean values of the surface temperature and the δ18O content in high latitude, non-continental precipitation. The main reason is that the scattering of the individual precipitation compositions, caused by the influence of numerous meteorological parameters, is smoothed out when comparing average compositions at various locations over a sufficiently long period of time (a whole number of years).

The somewhat revised and extended correlation is shown in fig. 3..

From Dansgaard 1964

Figure 6

So we appear to have a nice tidy picture when looking at annual means, a little bit like the (article) figure 3 from Gat’s 2010 textbook.

Before “muddying the waters” a little, let’s have a quick look at ocean values.

Ocean δ18O

We can see that the ocean, as we might expect, is much more homogeneous, especially the deep ocean. Note that these results are δD (think, about 10x the value of δ18O):

From Ferronsky & Polyakov (2012)

Figure 7 – Click to enlarge

And some surface water values of δD (and also salinity), where we see a lot more variation, again as we might expect:

From Ferronsky & Polyakov 2012

Figure 8

If we do a quick back of the envelope calculation, using the fact that the sea level change between the last glacial maximum (LGM) and the current interglacial was about 120 m, and that the average ocean depth is 3680 m, we expect a glacial-interglacial change in ocean δ18O of about 1.5 ‰ (the exact number depends on the mean isotopic composition we assume for the ice sheets – see the sketch below).
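
Spelling out that back of the envelope calculation (a minimal sketch; the big assumption is the mean isotopic composition of the glacial ice sheets, and the answer scales linearly with whatever value you pick):

```python
sea_level_change = 120.0    # m, LGM to current interglacial
mean_ocean_depth = 3680.0   # m

# Fraction of the ocean locked up in ice sheets at the LGM
f_ice = sea_level_change / mean_ocean_depth   # ~3.3%

# The ocean shift is roughly f_ice times the (ocean minus ice) isotopic difference.
for delta_ice in (-30.0, -40.0, -50.0):   # assumed mean delta-18O of glacial ice, per mil
    delta_ocean_change = f_ice * (0.0 - delta_ice)
    print(delta_ice, round(delta_ocean_change, 2))   # roughly 1.0 to 1.6 per mil
```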

This is why the foraminifera near the bottom of the ocean, capturing 18O from the ocean, are recording ice volume, whereas the ice cores are recording atmospheric temperatures.

Note as well that during the glacial, with more ice locked up in ice sheets, the value of ocean δ18O will be higher. So colder atmospheric temperatures relate to lower values of δ18O in precipitation, but – due to the increase in ice, depleted in 18O – higher values of ocean δ18O.

Muddying the Waters

Hoefs (2009) gives a good summary of the different factors affecting the isotopic composition of precipitation:

The first detailed evaluation of the equilibrium and nonequilibrium factors that determine the isotopic composition of precipitation was published by Dansgaard (1964). He demonstrated that the observed geographic distribution in isotope composition is related to a number of environmental parameters that characterize a given sampling site, such as latitude, altitude, distance to the coast, amount of precipitation, and surface air temperature.

Out of these, two factors are of special significance: temperature and the amount of precipitation. The best temperature correlation is observed in continental regions nearer to the poles, whereas the correlation with amount of rainfall is most pronounced in tropical regions as shown in Fig. 3.15.

The apparent link between local surface air temperature and the isotope composition of precipitation is of special interest mainly because of the potential importance of stable isotopes as palaeoclimatic indicators. The amount effect is ascribed to gradual saturation of air below the cloud, which diminishes any shift to higher δ18O-values caused by evaporation during precipitation.

[Emphasis added]

From Hoefs 2009

Figure 9

The points that Hoefs makes indicate some of the problems with using δ18O as a temperature proxy. We have competing influences that depend on the source and journey of the air parcel responsible for the precipitation. What if circulation changes?

For readers who have followed the past discussions here on water vapor (e.g., see Clouds & Water Vapor – Part Two) this is a similar kind of story. With water vapor, there is a very clear relationship between ocean temperature and absolute humidity, so long as we consider the boundary layer. But what happens when the air rises high above that – then the amount of water vapor at any location in the atmosphere is dependent on the past journey of air, and as a result the amount of water vapor in the atmosphere depends on large scale circulation and large scale circulation changes.

The same question arises with isotopes and precipitation.

The ubiquitous Jean Jouzel and his colleagues (including Willi Dansgaard) from their 1997 paper:

In Greenland there are significant differences between temperature records from the East coast and the West coast which are still evident in 30 yr smoothed records. The isotopic records from the interior of Greenland do not appear to follow consistently the temperature variations recorded at either the east coast or the west coast..

This behavior may reflect the alternating modes of the North Atlantic Oscillation..

They [simple models] are, however, limited to the study of idealized clouds and cannot account for the complexity of large convective systems, such as those occurring in tropical and equatorial regions. Despite such limitations, simple isotopic models are appropriate to explain the main characteristics of dD and d18O in precipitation, at least in middle and high latitudes where the precipitation is not predominantly produced by large convective systems.

Indeed, their ability to correctly simulate the present-day temperature-isotope relationships in those regions has been the main justification of the standard practice of using the present day spatial slope to interpret the isotopic data in terms of records of past temperature changes.

Notice that, at least for Antarctica, data and simple models agree only with respect to the temperature of formation of the precipitation, estimated by the temperature just above the inversion layer, and not with respect to the surface temperature, which owing to a strong inversion is much lower..

Thus one can easily see that using the spatial slope as a surrogate of the temporal slope strictly holds true only if the characteristics of the source have remained constant through time.

[Emphases added]

If all the precipitation occurs during warm summer months, for example, the “annual δ18O” will naturally reflect a temperature warmer than Ts [annual mean]..

If major changes in seasonality occur between climates, such as a shift from summer-dominated to winter- dominated precipitation, the impact on the isotope signal could be large..it is the temperature during the precipitation events that is imprinted in the isotopic signal.

Second, the formation of an inversion layer of cold air up to several hundred meters thick over polar ice sheets makes the temperature of formation of precipitation warmer than the temperature at the surface of the ice sheet. Inversion forms under a clear sky.. but even in winter it is destroyed rapidly if thick cloud moves over a site..

As a consequence of precipitation intermittency and of the existence of an inversion layer, the isotope record is only a discrete and biased sampling of the surface temperature and even of the temperature at the atmospheric level where the precipitation forms. Current interpretation of paleodata implicitly assumes that this bias is not affected by climate change itself.

Now onto the oceans, surely much simpler, given the massive well-mixed reservoir of 18O?

Mix & Ruddiman (1984):

The oxygen-isotopic composition of calcite is dependent on both the temperature and the isotopic composition of the water in which it is precipitated

..Because he [Shackleton] analyzed benthonic, instead of planktonic, species he could assume minimal temperature change (limited by the freezing point of deep-ocean water). Using this constraint, he inferred that most of the oxygen-isotope signal in foraminifera must be caused by changes in the isotopic composition of seawater related to changing ice volume, that temperature changes are a secondary effect, and that the isotopic composition of mean glacier ice must have been about -30 ‰.

This estimate has generally been accepted, although other estimates of the isotopic composition have been made by Craig (-17‰); Eriksson (-25‰), Weyl (-40‰) and Dansgaard & Tauber (≤ -30‰)

..Although Shackleton’s interpretation of the benthonic isotope record as an ice-volume/sea- level proxy is widely quoted, there is considerable disagreement between ice-volume and sea- level estimates based on δ18O and those based on direct indicators of local sea level. A change in δ18O of 1.6‰ at δ(ice) = – 35‰ suggests a sea-level change of 165 m.

..In addition, the effect of deep-ocean temperature changes on benthonic isotope records is not well constrained. Benthonic δ18O curves with amplitudes up to 2.2 ‰ exist (Shackleton, 1977; Duplessy et al., 1980; Ruddiman and McIntyre, 1981) which seem to require both large ice- volume and temperature effects for their explanation.

Many other heavyweights in the field have explained similar problems.

We will return to both of these questions in the next article.

Conclusion

Understanding the basics of isotopic changes in water and water vapor is essential to understand the main proxies for past temperatures and past ice volumes. Previously we have looked at problems relating to dating of the proxies; in this article we have looked at the proxies themselves.

There is good evidence that current values of isotopes in precipitation and ocean values give us a consistent picture that we can largely understand. The question about the past is more problematic.

I started looking seriously at proxies as a means to perhaps understand the discrepancies for key dates of ice age terminations between radiometric dating and ocean cores (see Thirteen – Terminator II). Sometimes the more you know, the less you understand..

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factors of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – comparing the results if we take the Huybers dataset and tie the last termination to the date implied by various radiometric dating

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models

References

Isotopes of the Earth’s Hydrosphere, VI Ferronsky & VA Polyakov, Springer (2012)

Isotope Hydrology – A Study of the Water Cycle, Joel R Gat, Imperial College Press (2010)

Stable Isotope Geochemistry, Jochen Hoefs, Springer (2009)

Patterns of the isotopic composition of precipitation in time and space: data from the Israeli storm water collection program, M Rindsberger, Sh Jaffe, Sh Rahamim and JR Gat, Tellus (1990) – free paper

Stable isotopes in precipitation, Willi Dansgaard, Tellus (1964) – free paper

Validity of the temperature reconstruction from water isotopes in ice cores, J Jouzel, RB Alley, KM Cuffey, W Dansgaard, P Grootes, G Hoffmann, SJ Johnsen, RD Koster, D Peel, CA Shuman, M Stievenard, M Stuiver, J White, Journal of Geophysical Research (1997) – free paper

Oxygen Isotope Analyses and Pleistocene Ice Volumes, Mix & Ruddiman, Quaternary Research (1984)  – free paper

– and on the Dole effect, only covered in Note 2:

The Dole effect and its variations during the last 130,000 years as measured in the Vostok ice core, Michael Bender, Todd Sowers, Laurent Labeyrie, Global Biogeochemical Cycles (1994) – free paper

A model of the Earth’s Dole effect, Georg Hoffmann, Matthias Cuntz, Christine Weber, Philippe Ciais, Pierre Friedlingstein, Martin Heimann, Jean Jouzel, Jörg Kaduk, Ernst Maier-Reimer, Ulrike Seibt & Katharina Six, Global Biogeochemical Cycles (2004) – free paper

The isotopic composition of atmospheric oxygen Boaz Luz & Eugeni Barkan, Global Biogeochemical Cycles (2011) – free paper

Notes

Note 1: There is a relationship between δ18O and δD which is linked to the difference in vapor pressures between H2O and HDO in one case and H216O and H218O in the other case.

δD = 8 δ18O + 10 – known as the Global Meteoric Water Line.

The equation is more of a guide – real values vary sufficiently that I’m not really clear how useful it is. There are lengthy discussions of it and the variations from it in Ferronsky & Polyakov.
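
As a one-line guide (illustrative only, for the reasons just given):

```python
def deltaD_from_delta18O(delta18O):
    """Global Meteoric Water Line: dD = 8 * d18O + 10 (an empirical guide, not an exact relation)."""
    return 8.0 * delta18O + 10.0

print(deltaD_from_delta18O(-50.0))   # -390 per mil
```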

Note 2: The Dole effect

When we measure atmospheric oxygen, we find that the δ18O = 23.5 ‰ with respect to the oceans (VSMOW) – this is the Dole effect

So, oxygen in the atmosphere has a greater proportion of 18O than the ocean

Why?

How do the atmosphere and ocean exchange oxygen? In essence, photosynthesis turns sunlight + water (H2O) + carbon dioxide (CO2) –> sugar + oxygen (O2).

Respiration turns sugar + oxygen –> water + carbon dioxide + energy

The isotopic composition of the water in photosynthesis affects the resulting isotopic composition in the atmospheric oxygen.

The reason the Dole effect exists is well understood, but the reason why the value comes out at 23.5‰ is still under investigation. This is because the result is the global aggregate of lots of different processes. So we might understand the individual processes quite well, but that doesn’t mean the global value can be calculated accurately.

It is also the case that δ18O of atmospheric O2 has varied in the past – as revealed first of all in the Vostok ice core from Antarctica.

Michael Bender and his colleagues had a go at calculating the value from first principles in 1994. As they explain (see below), although it might seem as though their result is quite close to the actual number, it is not a very successful result at all. Basically, the essential process on its own gets you to about 20‰ of the required 23.5‰, but their full calculation only gets to 20.8‰.

Bender et al 1994:

The δ18O of O2.. reflects the global responses of the land and marine biospheres to climate change, albeit in a complex manner.. The magnitude of the Dole effect mainly reflects the isotopic composition of O2 produced by marine and terrestrial photosynthesis, as well as the extent to while the heavy isotope is discriminated against during respiration..

..Over the time period of interest here, photosynthesis and respiration are the most important reactions producing and consuming O2. The isotopic composition of O2 in air must therefore be understood in terms of isotope fractionation associated with these reactions.

The δ18O of O2 produced by photosynthesis is similar to that of the source water. The δ18O of O2 produced by marine plants is thus 0‰. The δ18O of O2 produced on the continents has been estimated to lie between +4 and +8‰. These elevated δ18O values are the result of elevated leaf water δ18O values resulting from evapotranspiration.

..The calculated value for the Dole effect is then the productivity-weighted values of the terrestrial and marine Dole effects minus the stratospheric diminution: +20.8‰. This value is considerably less than observed (23.5‰). The difference between the expected value and the observed value reflects errors in our estimates and, conceivably, unrecognized processes.

Then they assess the Vostok record, where the main question is less about why the Dole effect apparently varies with precession (period of about 20 kyrs) than about why the variation is so small. After all, if marine and terrestrial biosphere changes are significant from interglacial to glacial then surely those changes would reflect more strongly in the Dole effect:

Why has the Dole effect been so constant? Answering this question is impossible at the present time, but we can probably recognize the key influences..

They conclude:

Our ability to explain the magnitude of the contemporary Dole effect is a measure of our understanding of the global cycles of oxygen and water. A variety of recent studies have improved our understanding of many of the principles governing oxygen isotope fractionation during photosynthesis and respiration.. However, our attempt to quantitively account for the Dole effect in terms of these principles was not very successful.. The agreement is considerably worse than it might appear given the fact that respiratory isotope fractionation alone must account for ~20‰ of the stationary enrichment of the 18O of O2 compared with seawater..

..[On the Vostok record] Our results show that variation in the Dole effect have been relatively small during most of the last glacial-interglacial cycle. These small changes are not consistent with large glacial increases in global oceanic productivity.

[Emphasis added]

Georg Hoffmann and his colleagues had another bash 10 years later and did a fair bit better:

The Earth’s Dole effect describes the isotopic 18O/16O-enrichment of atmospheric oxygen with respect to ocean water, amounting under today’s conditions to 23.5‰. We have developed a model of the Earth’s Dole effect by combining the results of three- dimensional models of the oceanic and terrestrial carbon and oxygen cycles with results of atmospheric general circulation models (AGCMs) with built-in water isotope diagnostics.

We obtain a range from 22.4‰ to 23.3‰ for the isotopic enrichment of atmospheric oxygen. We estimate a stronger contribution to the global Dole effect by the terrestrial relative to the marine biosphere in contrast to previous studies. This is primarily caused by a modeled high leaf water enrichment of 5–6‰. Leaf water enrichment rises by ~1‰ to 6–7‰ when we use it to fit the observed 23.5‰ of the global Dole effect.

Very recently Luz & Barkan (2011), backed up by lots of new experimental work, produced a slightly closer estimate with some revisions of the Hoffmann et al results:

Based on the new information on the biogeochemical mechanisms involved in the global oxygen cycle, as well as new and more precise experimental data on oxygen isotopic fractionation in various processes obtained over the last 15 years, we have reevaluated the components of the Dole effect. Our new observations on marine oxygen isotope effects, as well as, new findings on photosynthetic fractionation by marine organisms lead to the important conclusion that the marine, terrestrial and the global Dole effects are of similar magnitudes.

This result allows answering a long‐standing unresolved question on why the magnitude of the Dole effect of the last glacial maximum is so similar to the present value despite enormous environmental differences between the two periods. The answer is simple: if DEmar [marine Dole effect] and DEterr [terrestrial Dole effect] are similar, there is no reason to expect considerable variations in the magnitude of the Dole effect as the result of variations in the ratio terrestrial to marine O2 production.

Finally, the widely accepted view that the magnitude of the Dole effect is controlled by the ratio of land‐to‐sea productivity must be changed. Instead of the land‐sea control, past variations in the Dole effect are more likely the result of changes in low‐latitude hydrology and, perhaps, in structure of marine phytoplankton communities.

[Emphasis added]

Note 3:

Jochen Hoefs (2009):

Under equilibrium conditions at 25ºC, the fractionation factors for evaporating water are 1.0092 for 18O and 1.074 for D. However under natural conditions, the actual isotopic composition of water is more negative than the predicted equilibrium values due to kinetic effects.

The discussion of kinetic effects gets a little involved and I don’t think it is really necessary to understand it – the values of isotopic fractionation during evaporation and condensation are well understood. The confounding factors around what the proxies really measure relate to the journey (i.e. temperature history) and mixing of the various air parcels, as well as the temperature of the air during the precipitation event – is it the surface temperature, the inversion temperature, or both?

Read Full Post »

In Wonderland, Radiative Forcing and the Rate of Inflation we looked at the definition of radiative forcing and a few concepts around it:

  • why the instantaneous forcing is different from the adjusted forcing
  • what adjusted forcing is and why it’s a more useful concept
  • why the definition of the tropopause affects the value
  • GCM results usually don’t use radiative forcing as an input

In this article we will look at some results using the Wonderland model.

Remember the Wonderland model is not the earth. But the same is also true of “real” GCMs with geographical boundaries that match the earth as we know it. They are not the earth either. All models have limitations. This is easy to understand in principle. It is challenging to understand in the specifics of where the limitations are, even for specialists – and especially for non-specialists.

What the Wonderland model provides is a coarse geography with earth-like layout of land and ocean, plus of course, physics that follows the basic equations. And using this model we can get a sense of how radiative forcing is related to temperature changes when the same value of radiative forcing is applied via different mechanisms.

In the 1997 paper I think that Hansen, Sato & Ruedy did a decent job of explaining the limitations of radiative forcing, at least as far as the Wonderland climate model is able to assist us with that understanding. Remember as well that, in general, results we see from GCMs do not use radiative forcing. Instead they calculate from first principles – or parameterized first principles.

Doubling CO2

Now there’s a lot in this first figure, it can be a bit overwhelming. We’ll take it one step at a time. We double CO2 overnight – in Wonderland – and we see various results. The left half of the figure is all about flux while the right half is all about temperature:

From Hansen et al 1997

Figure 1 – Green text added – Click to Expand

On the top line, the first two graphs are the net flux change, as a function of height and latitude. First left – instantaneous; second left – adjusted. These two cases were explained in the last article.

The second left is effectively the “radiative forcing”, and we can see that above the tropopause (at about 200 mbar) the net flux change is constant with height. This is because the stratosphere has come into radiative balance. Refer to the last article for more explanation. On the right hand side, with all feedbacks from this one change in Wonderland, we can see the famous predicted “tropospheric hot spot” and the cooling of the stratosphere.

We see in the bottom two rows on the right the expected temperature change:

  • second row – change in temperature as a function of latitude and season (where temperature is averaged across all longitudes)
  • third row – change in temperature as a function of latitude and longitude (averaged annually)

It’s interesting to see the larger temperature increases predicted near the poles. I’m not sure I really understand the mechanisms driving that. Note that the radiative forcing is generally higher in the tropics and lower at the poles, yet the temperature change is the other way round.

Increasing Solar Radiation by 2%

Now let’s take a look at a comparison exercise, increasing solar radiation by 2%.

The responses to these comparable global forcings, 2xCO2 & +2% S0, are similar in a gross sense, as found by previous investigators. However, as we show in the sections below, the similarity of the responses is partly accidental, a cancellation of two contrary effects. We show in section 5 that the climate model (and presumably the real world) is much more sensitive to a forcing at high latitudes than to a forcing at low latitudes; this tends to cause a greater response for 2xCO2 (compare figures 4c & 4g); but the model is also more sensitive to a forcing that acts at the surface and lower troposphere than to a forcing which acts higher in the troposphere; this favors the solar forcing (compare figures 4a & 4e), partially offsetting the latitudinal sensitivity.

We saw figure 4 in the previous article, repeated again here for reference:

From Hansen et al (1997)

Figure 2

In case the above comment is not clear, absorbed solar radiation is more concentrated in the tropics and a minimum at the poles, whereas CO2 is evenly distributed (a “well-mixed greenhouse gas”). So a similar average radiative change will cause a more tropical effect for solar but a more even effect for CO2.

We can see that clearly in the comparable graphic for a solar increase of 2%:

From Hansen et al (1997)

Figure 3 – Green text added – Click to Expand

We see that the change in net flux is higher at the surface than the 2xCO2 case, and is much more concentrated in the tropics.

We also see the predicted tropospheric hot spot looking pretty similar to the 2xCO2 tropospheric hot spot (see note 1).

But unlike the cooler stratosphere of the 2xCO2 case, we see an unchanging stratosphere for this increase in solar irradiation.

These same points can also be seen in figure 2 above (figure 4 from Hansen et al).

Here is the table which compares radiative forcing (instantaneous and adjusted), no feedback temperature change, and full-GCM calculated temperature change for doubling CO2, increasing solar by 2% and reducing solar by 2%:

From Hansen et al 1997

Figure 4 – Green text added – Click to Expand

The value R (far right of table) is the ratio of the predicted temperature change from a given forcing to the predicted temperature change from the 2% increase in solar radiation.
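
For orientation, the “no feedback” temperature change in a table like this is essentially the Stefan-Boltzmann response of the effective emitting temperature. Here is a minimal sketch (the 255 K effective temperature and 4 W/m² forcing are round numbers, and this is not the exact calculation used in the paper):

```python
sigma = 5.67e-8   # W/(m^2 K^4), Stefan-Boltzmann constant
T_e = 255.0       # K, rough effective emitting temperature of the earth

# OLR = sigma * T_e**4, so dOLR/dT = 4 * sigma * T_e**3
lambda_0 = 4.0 * sigma * T_e**3        # ~3.8 W/m^2 per K

forcing = 4.0                          # W/m^2, comparable to the 2xCO2 and +2% S0 experiments
dT_no_feedback = forcing / lambda_0    # ~1.1 K
print(round(lambda_0, 2), round(dT_no_feedback, 2))
```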

Now the paper also includes some ozone changes which are pretty interesting, but won’t be discussed here (unless we have questions from people who have read the paper of course).

“Ghost” Forcings

The authors then go on to consider what they call ghost forcings:

How does the climate response depend on the time and place at which a forcing is applied? The forcings considered above all have complex spatial and temporal variations. For example, the change of solar irradiance varies with time of day, season, latitude, and even longitude because of zonal variations in ground albedo and cloud cover. We would like a simpler test forcing.

We define a “ghost” forcing as an arbitrary heating added to the radiative source term in the energy equation.. The forcing, in effect, appears magically from outer space at an atmospheric level, latitude range, season and time of day. Usually we choose a ghost forcing with a global and annual mean of 4 W/m², making it comparable to the 2xCO2 and +2% S0 experiments.

In the following table we see the results of various experiments:

Hansen et al (1997)

Figure 5 – Click to Expand

We note that the feedback factor for the ghost forcing varies with the altitude of the forcing by about a factor of two. We also note that a substantial surface temperature response is obtained even when the forcing is located entirely within the stratosphere. Analysis of these results requires that we first quantify the effect of cloud changes. However, the results can be understood qualitatively as follows.

Consider ΔTs in the case of fixed clouds. As the forcing is added to successively higher layers, there are two principal competing effects. First, as the heating moves higher, a larger fraction of the energy is radiated directly to space without warming the surface, causing ΔTs to decline as the altitude of the forcing increases. However, second, warming of a given level allows more water vapor to exist there, and at the higher levels water vapor is a particularly effective greenhouse gas. The net result is that ΔTs tends to decline with the altitude of the forcing, but it has a relative maximum near the tropopause.

When clouds are free to change, the surface temperature change depends even more on the altitude of the forcing (figure 8). The principal mechanism is that heating of a given layer tends to decrease large-scale cloud cover within that layer. The dominant effect of decreased low-level clouds is a reduced planetary albedo, thus a warming, while the dominant effect of decreased high clouds is a reduced greenhouse effect, thus a cooling. However, the cloud cover, the cloud cover changes and the surface temperature sensitivity to changes may depend on characteristics of the forcing other than altitude, e.g. latitude, so quantitative evaluation requires detailed examination of the cloud changes (section 6).

Conclusion

Radiative forcing is a useful concept which gives a headline idea about the imbalance in climate equilibrium caused by something like a change in “greenhouse” gas concentration.

GCM calculations of temperature change over a few centuries do vary significantly with the exact nature of the forcing – primarily its vertical and geographical distribution. This means that a calculated radiative forcing of, say, 1 W/m² from two different mechanisms (e.g. ozone and CFCs) would (according to GCMs) not necessarily produce the same surface temperature change.

References

Radiative forcing and climate response, Hansen, Sato & Ruedy, Journal of Geophysical Research (1997) – free paper

Notes

Note 1: The reason for the predicted hot spot is that more water vapor causes a lower lapse rate – which increases the temperature higher up in the troposphere relative to the surface. This change is concentrated in the tropics because the tropics are hotter and, therefore, have much more water vapor. The dry polar regions cannot get a lapse rate change from more water vapor because the effect is so small.

Any increase in surface temperature is predicted to cause this same change.

From my limited research, the idealized picture of the hot spot described above is not actually what the real model results look like. The top graph is the “just CO2” case, and the bottom graph is “CO2 + aerosols” – the second graph is obviously closer to the real case:

From Santer et al 1996

Many people have asked for my comment on the hot spot, but apart from putting forward an opinion I haven’t spent enough time researching this topic to understand it. From time to time I do dig in, but it seems that there are about 20 papers that need to be read to say something useful on the topic. Unfortunately many of them are heavy in stats and my interest wanes.

Read Full Post »

Radiative forcing is a “useful” concept in climate science.

But while it informs it also obscures, and many people are confused about its applicability. Also many people are confused about why stratospheric adjustment takes place and what that means. And why does the definition of the tropopause – a concept which doesn’t have one single definite meaning – affect this all-important concept of radiative forcing? Surely there is a definition which is clear and unambiguous?

So there are a few things we will attempt to understand in this article.

The Rate of Inflation and Other Stories

The value of radiative forcing (however it is derived) has the same usefulness as the rate of inflation, or the exchange rate as measured by a basket of currencies (with relevant apologies to all economists reading this article).

The rate of inflation tells you something about how prices are increasing but in the end it is a complex set of relationships reduced to a single KPI.

It’s quite possible for the rate of inflation to have the same value in two different years, and yet for one important group in the country in question to see no increase in their costs in the first year but a significant increase in the second. That’s the problem with reducing a complex set of relationships to one number.

However, the rate of inflation apparently has some value despite being a single KPI. And so it is with radiative forcing.

The good news is, when we get the results from a GCM, we can be sure the value of radiative forcing wasn’t actually used. Radiative forcing is more of a tool to inform the public and penniless climate scientists who don’t have access to a GCM.

Wonderland, the Simple Climate Model

The more precision you put into a GCM the slower it runs, so comparing hundreds of different cases can be impossible. Such is the dilemma of a climate scientist with access to a supercomputer running a GCM but with a long queue of funded, finger-tapping colleagues behind him or her.

Wonderland is a compromise model and is described in Wonderland Climate Model by Hansen et al (1997). This model includes some basic geography that is similar to the earth as we know it. It is used to provide insight into radiative forcing basics.

The authors explain:

A climate model provides a tool which allows us to think about, analyze, and experiment with a facsimile of the climate system in ways which we could not or would not want to experiment with the real world. As such, climate modeling is complementary to basic theory, laboratory experiments and global observations.

Each of these tools has severe limitations, but together, especially in iterative combinations they allow our understanding to advance. Climate models, even though very imperfect, are capable of containing much of the complexity of the real world and the fundamental principles from which that complexity arises.

Thus models can help structure the discussions and define needed observations, experiments and theoretical work. For this purpose it is desirable that the stable of modeling tools include global climate models which are fast enough to allow the user to play games, to make mistakes and rerun the experiments, to run experiments covering hundreds or thousands of simulated years, and to make the many model runs needed to explore results over the full range of key parameters. Thus there is great incentive for development of a highly efficient global climate model, i.e., a model which numerically solves the fundamental equations for atmospheric structure and motion.

Here is Wonderland, from a geographical point of view:

From Hansen et al (1997)

Figure 1

Wonderland is then used in Radiative Forcing and Climate Response, Hansen, Sato & Ruedy (1997). The authors say:

We examine the sensitivity of a climate model to a wide range of radiative forcings, including change of solar irradiance, atmospheric CO2, O3, CFCs, clouds, aerosols, surface albedo, and “ghost” forcing introduced at arbitrary heights, latitudes, longitudes, season, and times of day.

We show that, in general, the climate response, specifically the global mean temperature change, is sensitive to the altitude, latitude, and nature of the forcing; that is, the response to a given forcing can vary by 50% or more depending on the characteristics of the forcing other than its magnitude measured in watts per square meter.

In other words, radiative forcing has its limitations.

Definition of Radiative Forcing

The authors explain a few different approaches to the definition of radiative forcing. If we can understand the difference between these definitions we will have a much clearer view of atmospheric physics. From here, the quotes and figures will be from Radiative Forcing and Climate Response, Hansen, Sato & Ruedy (1997) unless otherwise stated.

Readers who have seen the IPCC 2001 (TAR) definition of radiative forcing may understand the intent behind this 1997 paper. Up until that time different researchers used inconsistent definitions.

The authors say:

The simplest useful definition of radiative forcing is the instantaneous flux change at the tropopause. This is easy to compute because it does not require iterations. This forcing is called “mode A” by WMO [1992]. We refer to this forcing as the “instantaneous forcing”, Fi, using the nomenclature of Hansen et al [1993c]. In a less meaningful alternative, Fi is computed at the top of the atmosphere; we include calculations of this alternative for 2xCO2 and +2% S0 for the sake of comparison.

An improved measure of radiative forcing is obtained by allowing the stratospheric temperature to adjust to the presence of the perturber, to a radiative equilibrium profile, with the tropospheric temperature held fixed. This forcing is called “mode B” by WMO [1992]; we refer to it here as the “adjusted forcing”, Fa [Hansen et al 1993c].

The rationale for using the adjusted forcing is that the relaxation time of the stratosphere is only several months [Manabe & Strickler, 1964], compared to several decades for the troposphere [Hansen et al 1985], and thus the adjusted forcing should be a better measure of the expected climate response for forcings which are present at least several months. The adjusted forcing can be calculated at the top of the atmosphere because the net radiative flux is constant throughout the stratosphere in radiative equilibrium. The calculated Fa depends on where the tropopause level is specified. We specify this level as 100 mbar from the equator to 40° latitude, changing to 189 mbar there, and then increasing linearly to 300 mbar at the poles.

[Emphasis added].

This explanation might seem confusing or abstract so I will try and explain.

Let’s say we have a sudden increase in a particular GHG (see note 1). Given the temperature profile and the concentration profile of the absorbers, we can calculate the change in radiative transfer through the atmosphere with little uncertainty. This means we can see immediately the reduction in outgoing longwave radiation (OLR), and the change in absorption of solar radiation.

Now the question becomes – what happens in the next 1 day, 1 month, 1 year, 10 years, 100 years?

Small changes in net radiation (solar absorbed – OLR) take many decades to produce their full effect at the surface because of the thermal inertia of the oceans (their heat capacity is very high).
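
As a rough illustration of that thermal inertia (my own back-of-envelope numbers, not values from the paper), the e-folding response time of a well-mixed ocean layer is its heat capacity per unit area divided by the climate feedback parameter:

    # Rough e-folding time of an ocean mixed layer responding to a small forcing.
    # Depth and feedback parameter are illustrative round numbers (assumptions).
    RHO_W = 1000.0      # kg/m^3, sea water (approx.)
    CP_W = 4186.0       # J/(kg K)
    DEPTH = 100.0       # m, assumed mixed-layer depth
    LAMBDA = 1.25       # W/(m^2 K), assumed climate feedback parameter

    heat_capacity = RHO_W * CP_W * DEPTH                     # J/(m^2 K)
    tau_years = heat_capacity / LAMBDA / (3600 * 24 * 365.25)
    print(f"mixed-layer response time ~ {tau_years:.0f} years")   # roughly a decade

Exchange with the deeper ocean stretches this single-slab estimate of roughly a decade out to many decades, which is why the surface responds so slowly compared with the stratosphere.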

The issue that everyone found when they reviewed this problem was that the radiative forcing on day 1 was different from the radiative forcing on day 90.

Why?

Because the changes in net absorption above the tropopause (the place where convection stops – we’ll review that definition a little later) affect the temperature of the stratosphere very quickly. So the stratosphere quickly adjusts to the new world order, and of course this changes the radiative forcing. It’s like (in non-technical terms) the stratosphere responds very quickly and “bounces out” some of the radiative forcing in the first month or two.

So the stratosphere, with little heat capacity, quickly adapts to the radiative changes and moves back into radiative equilibrium. This changes the “radiative forcing”, so if we want to work out the changes over the next 10-100 years there is little point in considering the radiative forcing on day 1. If the quick responders sort themselves out within about 60 days, we can wait for them to settle down and pick the radiative forcing number after 90-120 days.

This is the idea behind the definition.
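
To make the distinction concrete, here is a deliberately crude sketch: a single gray stratospheric slab sitting over a fixed troposphere, with a little shortwave absorption standing in for ozone heating. Nothing here comes from Hansen et al – the single-slab model and every number are illustrative assumptions – but it shows why the adjusted forcing differs from the instantaneous forcing, and why the stratosphere cools:

    # Toy gray stratospheric slab over a fixed troposphere (illustrative only)
    SIGMA = 5.67e-8        # W/(m^2 K^4), Stefan-Boltzmann constant
    F_UP = 240.0           # W/m^2, upwelling longwave at the tropopause (assumed, held fixed)
    S_ABS = 12.0           # W/m^2, solar absorbed in the slab ("ozone heating", assumed)

    def slab_temperature(a):
        """Radiative equilibrium of a gray slab with absorptivity/emissivity a:
        a*F_UP + S_ABS = 2*a*SIGMA*T^4 (it emits both up and down)."""
        return ((a * F_UP + S_ABS) / (2.0 * a * SIGMA)) ** 0.25

    a_old, a_new = 0.10, 0.11              # small increase in stratospheric absorber
    T_old, T_new = slab_temperature(a_old), slab_temperature(a_new)

    # Forcing at the tropopause = change in downward emission from the slab
    # (the upwelling flux from below is held fixed in both cases).
    F_instantaneous = SIGMA * T_old**4 * (a_new - a_old)            # slab T not yet adjusted
    F_adjusted = SIGMA * (a_new * T_new**4 - a_old * T_old**4)      # slab back in equilibrium

    print(f"slab cools from {T_old:.1f} K to {T_new:.1f} K")
    print(f"instantaneous forcing ~ {F_instantaneous:.2f} W/m^2, adjusted ~ {F_adjusted:.2f} W/m^2")

In this toy the adjusted forcing at the tropopause differs from the instantaneous forcing because the slab has cooled and sends less radiation back down; the same kind of adjustment is what separates the two curves in the figure below. The real mechanism for stratospheric cooling also involves the band structure of CO2, discussed in the stratospheric cooling article referenced later.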

Let’s look at this in pictures. In the figure below, the top line is for doubling CO2 (the line below is for increasing solar by 2%), and the top left shows the flux change through the atmosphere for the instantaneous and the adjusted cases. The red line is the “adjusted” value:

From Radiative Forcing & Climate Response, Hansen et al (1997)

Figure 2 – Click to expand

This red line is the value of flux change after the stratosphere has adjusted to the radiative forcing. Why is the red line vertical?

The reason is simple.

The stratosphere is now in temperature equilibrium because energy in = energy out at all heights. With no convection in the stratosphere this is the same as radiation absorbed = radiation emitted at all heights. Therefore, the net flux change with height must be zero.

If we plotted separately the up and down flux we would find that they have a slope, but the slope of the up and down would be the same. Net absorption of radiation going up balances net emission of radiation going down – more on this in Visualizing Atmospheric Radiation – Part Eleven – Stratospheric Cooling.
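
In symbols (a standard statement of radiative equilibrium, not a quote from the paper): the radiative heating rate of a layer is proportional to the vertical divergence of the net flux, so zero heating at every stratospheric level means the net flux cannot change with height:

    \frac{\partial T}{\partial t} = -\frac{1}{\rho c_p}\,\frac{\partial F_{net}}{\partial z} = 0
    \quad\Rightarrow\quad
    F_{net}(z) = F_{up}(z) - F_{down}(z) = \text{constant above the tropopause}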

Another important point: we can see in the top left graph that the instantaneous net flux at the tropopause (i.e., the net flux on day one) is different from the net flux at the tropopause after adjustment (i.e., after the stratosphere has come into radiative balance).

But once the stratosphere has come into balance we could use the TOA net flux, or the tropopause net flux – it would not matter because both are the same.

Result of Radiative Forcing

Now let’s look at 4 different ways to think about radiative forcing, using the temperature profile as our guide to what is happening:

From Radiative Forcing & Climate Response, Hansen et al (1997)

Figure 3 – Click to expand

On the left, case a, instantaneous forcing. This is the result of the change in net radiation absorbed vs height on day one. Temperature doesn’t change instantaneously so it’s nice and simple.

On the next graph, case b, adjusted forcing. This is the temperature change resulting from net radiation absorbed after the stratosphere has come into equilibrium with the new world order, but the troposphere is held fixed. So by definition the tropospheric temperature is identical in case b to case a.

On the next graph, case c, no feedback response of temperature. Now we allow the tropospheric temperature to change until such time as the net flux at the tropopause has gone back to zero. But during this adjustment we have held water vapor, clouds and the lapse rate in the troposphere at the same values as before the radiative forcing.

On the last graph, case d, all feedback response of temperature. Now we let the GCM take over and calculate how water vapor, clouds and the lapse rate respond. And as with case c, we wait until the temperature has increased sufficiently that net tropopause flux has gone back to zero.

What Definition for the Tropopause and Why does it Matter?

We’ve seen that if we use adjusted forcing, the radiative forcing is the same at TOA and at the tropopause. And the adjusted forcing is the IPCC 2001 definition. So why use the forcing at the tropopause? And why does the definition of the tropopause matter?

The first question is easy. We could use the forcing at TOA; it wouldn’t matter so long as we have allowed the stratosphere to come into radiative equilibrium (which takes a few months). As far as I can tell – and this is just my opinion – it’s more about the history of how we arrived at this point. If you want to calculate the radiative forcing without running the model to stratospheric equilibrium then, on day one, the instantaneous forcing at the tropopause is usually pretty close to the value calculated after stratospheric equilibrium is reached.

So:

  1. Calculate the instantaneous forcing at the tropopause and get a value close to the authoritative “radiative forcing” – with the benefit of minimal calculation resources
  2. Calculate the adjusted forcing at the tropopause or TOA to get the authoritative “radiative forcing”

And lastly, why then does the definition of the tropopause matter?

The reason is simple, but not obvious. We are holding the tropospheric temperature constant, and letting the stratospheric temperature vary. The tropopause is the dividing line. So if we move the dividing line up or down we change the point where the temperatures adjust and so, of course, this affects the “adjusted forcing”. This is explained in some detail in Forster et al (1997) in section 4, p.556 (see reference below).

For reference, three definitions of the tropopause are found in Freckleton et al (1998):

  • the level at which the lapse rate falls below 2 K/km (a sketch of this definition follows the list)
  • the point at which the lapse rate changes sign, i.e., the temperature minimum
  • the top of convection
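
As a minimal sketch of the first definition (my own illustration – the full WMO criterion also requires the low lapse rate to persist for 2 km above the level found):

    # Find the tropopause from a sounding using the lapse-rate criterion (simplified)
    import numpy as np

    def find_tropopause(z_km, T_K, threshold=2.0):
        """Lowest level where the lapse rate -dT/dz first falls below
        `threshold` K/km. (The full WMO definition also requires the average
        lapse rate to stay below 2 K/km for 2 km above this level.)"""
        lapse = -np.diff(T_K) / np.diff(z_km)      # K/km, between adjacent levels
        for i, gamma in enumerate(lapse):
            if gamma < threshold:
                return z_km[i]
        return None

    # Idealised sounding: 6.5 K/km troposphere up to 11 km, isothermal above
    z = np.arange(0.0, 30.0, 0.5)                  # altitude, km
    T = np.where(z <= 11.0, 288.0 - 6.5 * z, 288.0 - 6.5 * 11.0)

    print(find_tropopause(z, T))                   # prints 11.0

On a real sounding the three definitions can pick out somewhat different levels, and that difference feeds directly into the adjusted forcing as described above.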

Conclusion

Understanding what radiative forcing means requires understanding a few basics.

The value of radiative forcing depends upon the somewhat arbitrary definition of the location of the tropopause. Some papers like Freckleton et al (1998) have dived into this subject, to show the dependence of the radiative forcing for doubling CO2 on this definition.

We haven’t covered it in this article, but the Hansen et al (1997) paper showed that radiative forcing is not a perfect guide to how climate responds (even in the idealized world of GCMs). That is, the same radiative forcing applied via different mechanisms can lead to different temperature responses.

Is it a useful parameter? Is the rate of inflation a useful parameter in economics? Usefulness is more a matter of opinion. What is more important at the start is to understand how the parameter is calculated and what it can tell us.

References

Radiative forcing and climate response, Hansen, Sato & Ruedy, Journal of Geophysical Research (1997) – free paper

Wonderland Climate Model, Hansen, Ruedy, Lacis, Russell, Sato, Lerner, Rind & Stone, Journal of Geophysical Research, (1997) – paywall paper

Greenhouse gas radiative forcing: Effect of averaging and inhomogeneities in trace gas distribution, Freckleton et al, QJR Meteorological Society (1998) – paywall paper

On aspects of the concept of radiative forcing, Forster, Freckleton & Shine, Climate Dynamics (1997) – free paper

Notes

Note 1: The idea of an instantaneous increase in a GHG is a thought experiment to make it easier to understand the change in atmospheric radiation. If instead we consider the idea of a 1% change per year, then we have a more difficult problem. (Of course, GCMs can quite happily work with a real-world slow change in GHGs. And they can quite happily work with a sudden change).
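
For orientation (simple arithmetic, not something from the paper), a sustained 1% per year increase doubles the concentration in about 70 years:

    # Doubling time for compound growth of 1% per year
    import math
    years_to_double = math.log(2) / math.log(1.01)
    print(f"{years_to_double:.0f} years")   # ~70 years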

Read Full Post »

The earth’s surface is not a blackbody. A blackbody has an emissivity and absorptivity = 1.0, which means that it absorbs all incident radiation and emits according to the Planck law.

The oceans, covering over 70% of the earth’s surface, have an emissivity of about 0.96. Other areas have varying emissivity, going down to about 0.7 for deserts. (See note 1).

A lot of climate analyses assume the surface has an emissivity of 1.0.

Let’s try and quantify the effect of this assumption.

The most important point to understand is that if the emissivity of the surface, ε, is less than 1.0 it means that the surface also reflects some atmospheric radiation.

Let’s first do a simple calculation with nice round numbers.

Say the surface is at a temperature, Ts=289.8 K. And the atmosphere emits downward flux = 350 (W/m²).

  • If ε = 1.0 the surface emits 400. And it reflects 0. So a total upward radiation of 400.
  • If ε = 0.8 the surface emits 320. And it reflects 70 (350 x 0.2). So a total upward radiation of 390.

So even though we are comparing a case where the surface reduces its emission by 20%, the upward radiation from the surface is only reduced by 2.5%.
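
Here is the same arithmetic as a minimal sketch (the temperature and downward flux are just the round numbers from the example above):

    # Total upward longwave flux from a surface with emissivity < 1
    SIGMA = 5.67e-8     # W/(m^2 K^4), Stefan-Boltzmann constant

    def upward_surface_flux(T_s, dlr, emissivity):
        """Emitted flux (eps*sigma*T^4) plus reflected downward atmospheric
        radiation ((1 - eps)*DLR)."""
        return emissivity * SIGMA * T_s ** 4 + (1.0 - emissivity) * dlr

    T_s, dlr = 289.8, 350.0                    # values from the example above
    for eps in (1.0, 0.8):
        print(eps, round(upward_surface_flux(T_s, dlr, eps), 1))
    # 1.0 -> ~400 W/m^2, 0.8 -> ~390 W/m^2

The reflected term, (1-ε) x DLR, is what stops the total upward flux from falling in proportion to ε.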

Now the world of atmospheric radiation is very non-linear as we have seen in previous articles in this series. The atmosphere absorbs very strongly in some wavelength regions and is almost transparent in other regions. So I was intrigued to find out what the real change would be for different atmospheres as surface emissivity is changed.

To do this I used the Matlab model already created and explained – in brief in Part Two and with the code in Part Five – The Code (note 2). The change in surface emissivity is assumed to be wavelength independent (so if ε = 0.8, it is the case across all wavelengths).

I used some standard AFGL (air force geophysics lab) atmospheres. A description of some of them can be seen in Part Twelve – Heating Rates (note 3).

For the tropical atmosphere:

  • ε = 1.0, TOA OLR = 280.9 W/m²   (top of atmosphere outgoing longwave radiation)
  • ε = 0.8, TOA OLR = 278.6 W/m²
  • Difference = 0.8%

Here is the tropical atmosphere spectrum:

Atmospheric-radiation-14b-tropical-atm-TOA-emissivity-0.8vs1.0

Figure 1

We can see that the difference occurs in the 800-1200 cm⁻¹ region (8-12 μm), the so-called “atmospheric window” – see Kiehl & Trenberth and the Atmospheric Window. We will come back to the reasons why in a moment.

For reference, an expanded view of the area with the difference:

Atmospheric-radiation-14b-tropical-atm-TOA-emissivity-0.8vs1.0-expanded

Figure 2

Now the mid-latitude summer atmosphere:

  • ε = 1.0, TOA OLR = 276.9 W/m²
  • ε = 0.8, TOA OLR = 272.4 W/m²
  • Difference = 1.6%

And the mid-latitude winter atmosphere:

  • ε = 1.0, TOA OLR = 227.9 W/m²
  • ε = 0.8, TOA OLR = 217.4 W/m²
  • Difference = 4.6%

Here is the spectrum:

Atmospheric-radiation-14c-midlat-winter-atm-TOA-emissivity-0.8vs1.0

Figure 3

We can see that the same region is responsible and the difference is much greater.

The sub-arctic summer:

  • ε = 1.0, TOA OLR = 259.8 W/m²
  • ε = 0.8, TOA OLR = 252.7 W/m²
  • Difference = 2.7%

The sub-arctic winter:

  • ε = 1.0, TOA OLR = 196.8 W/m²
  • ε = 0.8, TOA OLR = 186.9 W/m²
  • Difference = 5.0%

Atmospheric-radiation-14c-subarctic-winter-atm-TOA-emissivity-0.8vs1.0

Figure 4

We can see that in the tropics the change in surface emissivity makes a negligible difference to OLR. The higher-latitude winters show a 5% change for the same surface emissivity change, and the higher-latitude summers around 2-3%.

The reasoning is simple.

For the tropics, the hot humid atmosphere radiates quite close to a blackbody, even in the “window region” due to the water vapor continuum. We can see this explained in detail in Part Ten – “Back Radiation”.

So any “missing” radiation from a non-blackbody surface is made up by reflection of atmospheric radiation (where the radiating atmosphere is almost at the same temperature as the surface).

When we move to higher latitudes the “window region” becomes more transparent, and so the “missing” radiation cannot be made up by reflection of atmospheric radiation in this wavelength region. This is because the atmosphere is not emitting in this “window” region.

And the effect is more pronounced in the winters in high latitudes because the atmosphere is colder and so there is even less water vapor.

Now let’s see what happens when we do a “radiative forcing” calculation – we will compare TOA OLR at 360 ppm and at 720 ppm CO2, at two different emissivities, for the tropical atmosphere. That is, we will calculate 4 cases:

  • 360 ppm at ε=1.0
  • 720 ppm at ε=1.0
  • 360 ppm at ε=0.8
  • 720 ppm at ε=0.8

And, at both ε=1.0 and ε=0.8, we subtract the OLR at 720 ppm from the OLR at 360 ppm and plot both differenced emissivity results on the same graph:

Atmospheric-radiation-14fg-tropical-atm-2xCO2-TOA-emissivity-0.8vs1.0

 

Figure 5

We see that both comparisons look almost identical – we can’t distinguish between them on this graph. So let’s subtract one from the other. That is, we plot [OLR(360ppm) - OLR(720ppm)] at ε=1.0 minus [OLR(360ppm) - OLR(720ppm)] at ε=0.8:

Atmospheric-radiation-14h-tropical-atm-2xCO2-1xCO2-emissivity-0.8-1.0

 

Figure 6 – same units as figure 5

So it’s clear that in this specific case of calculating the difference in CO2 from 360ppm to 720ppm it doesn’t matter whether we use surface emissivity = 1.0 or 0.8.

Conclusion

The earth’s surface is not a blackbody. No one in climate science thinks it is. But for a lot of basic calculations, assuming it is a blackbody doesn’t have a big impact on the TOA radiation – for the reasons outlined above. And it has even less impact on calculations of the effect of changes in CO2.

The tropics, from 30°S to 30°N, are about half the surface area of the earth. And with a typical tropical atmosphere, a drop in surface emissivity from 1.0 to 0.8 causes a TOA OLR change of less than 1%.

Of course, it could get more complicated than the calculations we have seen in this article. Over deserts in the tropics, where the surface emissivity actually gets below 0.8, water vapor is also low and therefore the resulting TOA flux change will be higher (as a result of using actual surface emissivity vs black body emissivity).

I haven’t delved into the minutiae of GCMs to find out what they assume about surface emissivity and, if they do use 1.0, what calculations have been done to quantify the impact.

The average surface emissivity of the earth is much higher than 0.8. I just picked that value as a reference.

The results shown in this article should help to clarify that the effect of surface emissivity less than 1.0 is not as large as might be expected.

Notes

Note 1: Emissivity and absorptivity are wavelength dependent phenomena. So these values are relevant for the terrestrial wavelengths of 4-50μm.

Note 2: There was a minor change to the published code to allow for atmospheric radiation being reflected by the non-black surface. This hasn’t been updated to the relevant article because it’s quite minor. Anyone interested in the details, just ask.

In this model, the top of atmosphere is at 10 hPa.

Some outstanding issues remain in my version of the model, like whether the treatment of diffusivity is correct or needs improvement, and the Voigt profile (important in the mid-upper stratosphere) is still not used. These issues will have little or no effect on the question addressed in this article.

Note 3: For speed, I only considered water vapor and CO2 as “greenhouse” gases. No ozone was used. To check, I reran the tropical atmosphere with ozone at the values prescribed in that AFGL atmosphere. The difference between ε = 1.0 and ε = 0.8 was 0.7% – less than with no ozone (0.8%). This is because ozone reduces the transparency of the “atmospheric window” region.

Read Full Post »
