
This post is intended to help readers better understand how changes in temperature and water vapor at different locations affect the radiation balance of the planet, primarily outgoing longwave radiation (OLR).

A lot of questions on this blog come about because people have trouble visualizing the process of radiative transfer. This is not surprising – it’s not an intuitive subject.

Basic energy balance for the planet is covered in The Earth’s Energy Budget – Part One and Part Two. If some change to the climate causes more energy to be radiated into space then the climate system will cool. Likewise, if some change causes less energy to be radiated into space then the climate system will warm (assuming constant absorbed solar radiation). We use the term OLR (outgoing longwave radiation) for the radiation from the climate system into space.

Heating the Atmosphere and the Surface

Here is a calculation of how changing temperature in the atmosphere affects OLR at different latitudes and different pressures (1000mbar at the bottom of the graph is the surface):

From Soden et al (2008)

Figure 1 – Italic text is added

Understanding the Terminology

The top two graphs show the same effect under two different conditions. The top one is “All Sky”, i.e., all conditions. The second is “Clear Sky”, i.e., the subset of conditions when clouds are not present. From left to right we have latitude and from bottom to top we have pressure. 1000mbar is the surface pressure (zero altitude), and 200mbar is the pressure at the top of the troposphere, around 14km up (it varies depending on the latitude).

The units (the values are shown as colors) are in W/m².K.100hPa. (See Note 1). Already some readers are lost?

The important part is W/m² – this is flux (radiation in less technical language) or watts per square meter. We can call it power per unit area.

W/m².K is watts per square meter per 1K temperature change. So this asks – how much does the power per unit area increase (or decrease) for each 1K of temperature change?

If we asked how much the OLR increases for a 1K increase in the whole atmosphere, we would use the units of W/m².K. But when we want to look at how different layers in the atmosphere affect the OLR we are considering just one “slice”. So we have to have watts per square meter per Kelvin per slice. In this case we are considering 100mbar (=100hPa) slices, which is how we get W/(m².K.100hPa).

Understanding the Results

So now the basics are out of the way, what do the graphs show us?

At the simplest level if the whole atmosphere heats up by 1K, the graphs show us the relative contribution of different latitudes and altitudes to OLR.

Let’s suppose we increase the temperature of the atmosphere at the equator between 1000 and 900 mbar by 1°C (=1K). This means we have taken a “layer” of the atmosphere and somehow just increased its temperature. What is the effect on the OLR? All bodies, including gases, emit according to their temperature and their emissivity.

Increase the temperature and the radiation (flux) increases. In our graph, at that location, for 1°C the flux increases by about 0.3 W/(m².100hPa).
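To make the per-slice units concrete, here is a toy summation in Python. The kernel values are invented for illustration (loosely shaped like Figure 1) and are not taken from the paper:

```python
# Toy illustration of radiative-kernel bookkeeping. The kernel values
# below are made up for illustration -- NOT the actual Soden et al. kernel.
# kernel[i] = dOLR/dT for slice i, in W/(m^2.K.100hPa);
# each slice is exactly one 100 hPa unit thick.

kernel = [0.30, 0.25, 0.22, 0.20, 0.18, 0.15, 0.12, 0.10]  # 1000 -> 200 hPa
delta_T = [1.0] * len(kernel)  # warm every slice by 1 K

# Because each slice is one 100 hPa unit, the per-slice contribution
# is just kernel * delta_T, giving W/m^2.
delta_OLR = sum(k * dT for k, dT in zip(kernel, delta_T))
print(f"Total OLR increase: {delta_OLR:.2f} W/m^2")  # more OLR => cooling tendency
```

The same bookkeeping with a non-uniform delta_T shows why warming concentrated where the kernel is large (e.g. near cloud tops) matters most for OLR.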

Of course this means the climate cools which should be totally unsurprising. Increase the temperature of the atmosphere and it radiates more energy away into space. This is negative feedback. You can see from the graph that no matter where you heat the atmosphere it increases the flux into space – cooling the climate back down.

The bottom graph shows the result of heating the earth’s surface for clear sky and for all sky conditions. Note the difference. Under clear skies the increased flux emitted by the surface more easily escapes to space. When clouds are present the increased surface radiation is absorbed by clouds, and the clouds emit at the cloud top temperature. (The cloud top is high in the atmosphere and cooler than the surface, so the emission to space is reduced by the presence of clouds.)

At this point let’s make it clear what the graphs are not showing. They are not showing the ultimate result of heating the surface or a slice of the atmosphere after the whole climate has come into a new “equilibrium”. They are simply showing what happens directly to radiation balance as a result of a change in temperature of a “portion” of the climate.

If you’ve understood why the all sky/clear sky results in the surface graph are different then the difference between the first and second graph might be clear. The first graph is under all sky conditions (including clouds) and so the cloud tops are the region where a 1K increase has the greatest effect on OLR. Lower down in the atmosphere an increase in flux (due to hotter conditions) can be masked by clouds.

In contrast, under clear sky conditions changes in the lower atmosphere have a similar effect to changes in the upper atmosphere.

The authors say:

Under total-sky conditions the longwave fluxes are most sensitive to temperatures at the level of cloud tops that are exposed to space. This results in an obvious maximum just beneath the tropopause, where convectively detrained cirrus anvils are common, and along the top of the cloud topped boundary layer. By masking the surface, clouds also diminish the surface contribution to KT

Adding Water Vapor to the Atmosphere

More water vapor in the atmosphere generally reduces the outgoing longwave radiation which has a heating effect on the atmosphere. (The opposite of higher atmospheric temperatures).

The reason for this is with more water vapor the atmosphere becomes more opaque to longwave radiation. So, for example, with more water vapor in the upper atmosphere, radiation from the surface or the lower atmosphere is absorbed by the water vapor higher up.

Another way of looking at the problem is to say that the more opaque the atmosphere the higher up the effective radiation to space. And higher altitudes have colder temperatures. This is the essence of the inappropriately-named “greenhouse” effect. For more on this see The Earth’s Energy Budget – Part Three.

From Soden et al (2008)

Figure 2 – Italic text is added

Any calculation / visualization of the climate effect of increased water vapor has a choice – do you show the effect from absolute or relative changes in water vapor? As water vapor concentration falls by a factor of more than 1000 as you go up through the atmosphere, showing relative change is generally preferred over showing absolute change.

The authors of this paper have chosen relative changes and calculated the change in OLR if temperature changes by 1K and relative humidity stays constant.
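As a rough illustration of what “constant relative humidity” implies: the Clausius–Clapeyron relation gives roughly a 6–7% increase in saturation vapor pressure per 1K near typical surface temperatures. A sketch using Bolton’s approximation for saturation vapor pressure (my choice of formula, not the paper’s method):

```python
import math

def e_sat(T_c):
    """Saturation vapor pressure (hPa), Bolton's (1980) approximation.
    T_c is temperature in Celsius. Illustrative, not the paper's formula."""
    return 6.112 * math.exp(17.67 * T_c / (T_c + 243.5))

# At constant relative humidity, water vapor content scales with e_sat,
# so the fractional change for a 1K warming near 25C is:
increase = e_sat(26.0) / e_sat(25.0) - 1
print(f"Water vapor increase per 1K near 25C: {increase*100:.1f}%")
```

The percentage is larger at colder temperatures, which is one reason relative (rather than absolute) changes are the natural way to present water vapor perturbations.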

Some of the readers might be tempted to jump in here thinking that some unproven claim is being used as a premise for a climate calculation. But this is not so. It is simply a convenient way of illustrating OLR changes with water vapor changes.

Reviewing the graphs, we can see that under clear skies the deep tropics have the dominant water vapor response. This is not surprising as the tropics have so much more water vapor than the rest of the globe. See Clouds and Water Vapor – Part Two for discussion on this.

Under all sky conditions the effects of clouds are seen. The subtropics become more important than the tropics because the subtropics are mostly cloud-free. In the deep tropics the clouds “mask” the effects of lower levels in the atmosphere.

The authors comment:

By masking underlying water vapor perturbations, clouds reduce the sensitivity of OLR to water vapor changes and increase the relative importance of upper-tropospheric moistening to the total feedback.

Water vapor also absorbs solar radiation. If there was no water vapor wouldn’t the surface absorb all the solar radiation anyway? Does it make a difference? The surface doesn’t absorb all the solar (shortwave) radiation, and especially over snow/ice covered areas the proportion of reflected solar radiation is high. Therefore, solar absorption by water vapor (as water vapor increases) has a relatively larger impact at the poles.

From Soden et al (2008)

Figure 3 – Italic text is added

And Out of Interest..

The graph below wasn’t the focus of this article, but it is in the Soden et al paper and is quite interesting.

This article is aimed at showing how the net radiative climate balance changes when atmospheric temperature and water vapor changes under clear and cloudy skies.

But once a climate model is used to compute changes in temperature we can see what different climate models show for the difference between the feedbacks:

Soden et al (2008)

This graph shows the results of various climate models.

As an example, Lapse rate feedback is the feedback from changes in the atmospheric temperature profile.

Final results from climate models are much more complex than determining how changes in water vapor or atmospheric temperature affect the emission of thermal radiation into space.

Conclusion

This article is aimed at increasing understanding of how changes in temperature and water vapor change the net radiation balance of the climate before any feedback.

The whole paper is well worth reading.

Further reading

        Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Ten

References

Quantifying Climate Feedbacks Using Radiative Kernels, Soden et al, Journal of Climate (2008) – Free Link

Notes

Note 1 – The units in figure 1 are Wm⁻²K⁻¹. To non-mathematicians this notation can be difficult to understand. Most people know what m² means – it means meters squared. m⁻² means per meter squared. So we can write Wm⁻² or W/m². Both mean the same thing.

The mathematical convention of Wm⁻²K⁻¹ is better because it is more precise (when I write W/m²K does it mean watts per square meter per Kelvin, or watts per square meter times Kelvin?)

But I write the slightly less mathematically precise way to provide better readability for the non-mathematicians.


Here is the annual mean temperature as a function of pressure (=height) and latitude:

From Marshall & Plumb (2008)

Figure 1 – Click for a larger image

We see that the equator is warmer than the poles and the surface is warmer than the upper troposphere (“troposphere” = lower atmosphere). No surprises.

Here is “potential temperature”, whatever that is..

From Marshall & Plumb (2008)

Figure 2 – Click for a larger image

We see that – whatever “potential temperature” is – the equator is warmer than the poles, but this version of temperature increases with height.

Why does temperature decrease with height? What is potential temperature? And why does it increase with height?

The Lapse Rate

Atmospheric pressure decreases with height. This is because as you go higher up there is less air above you, and therefore less downward force due to the weight of this air.

Because pressure decreases – and because air is a compressible fluid – air that rises expands (and air that sinks contracts).

Air that expands does “work” against its surroundings and because of the first law of thermodynamics (conservation of energy) this work needs to be paid for. So internal energy is consumed in expanding the parcel of air outwards against the atmosphere. And a reduction in internal energy means a reduction in temperature.

  • Air that rises expands
  • Expanding air cools

A little bit more technically.. adiabatic expansion is what we are talking about. An adiabatic process is one where no heat is exchanged with the surroundings. This is a reasonable approximation for typical rising air. It is reasonable because conduction is an extremely slow process (= negligible) in the atmosphere and radiative heat transfer is quite slow.

So if heat can’t be exchanged between a “parcel of air” and its surroundings it is relatively simple to calculate how the temperature changes. An example which contains way too much detail (because it is debunking a “debunking”) at Paradigm Shifts in Convection and Water Vapor?

The essence of the calculation is to equate internal energy changes with work done on the environment.

Textbooks usually start off with the simplest version, the dry adiabatic lapse rate, or DALR. (The “lapse rate” is the change in temperature with height of a parcel of air).

The DALR is for air without any water vapor. Now water vapor is very influential in our climate. The reason for neglecting it and starting off with this simplification is:

  • the calculation is easy and everyone (almost) can understand it
  • it represents one extreme of the atmosphere (polar climates and upper troposphere)

The result from this simplification:

Change in temperature with height = -g/cp ≈ -10 °C/km, where g = acceleration due to gravity = 9.8 m/s² and cp = heat capacity of air at constant pressure ≈ 1000 J/kg.K

So for every km we displace air upwards it cools by about 10°C – so long as we displace it reasonably quickly. Well, this is true if it is dry.
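The formula is a one-liner to check:

```python
g = 9.8       # m/s^2, acceleration due to gravity
cp = 1004.0   # J/(kg.K), specific heat of dry air at constant pressure
dalr = g / cp * 1000  # convert K/m to K/km
print(f"Dry adiabatic lapse rate = {dalr:.1f} K/km")  # ~9.8, i.e. "about 10 C/km"
```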

A note on conventions – dry parcels of air moved upwards cool by 10°C per km, but the lapse rate is usually written as a positive number. So a cooling of 10 °C/km =  -10 °C/km, but by convention, equals a “lapse rate” of +10 °C/km. This makes it very confusing when people say things like “the environmental lapse rate must be less than the adiabatic lapse rate“. Are we talking about the number with the minus sign in front? Or not?

It’s not easy to think about negative numbers being less than other negative numbers when the “less than” test is applied after they have been made into positive numbers. Not for me anyway. I have to write it down each time.

The Saturated Lapse Rate

If a parcel of air contains water vapor and it cools sufficiently then the water vapor condenses. This releases latent heat.

As a result, moist rising air cools more slowly than dry rising air.

So the saturated adiabatic lapse rate is “less than” the dry adiabatic lapse rate.

E.g. the change in temperature with height of a dry parcel of air ≈ -10 °C/km, while the change in temperature with height of a moist parcel of air in the tropics near the surface ≈ -4 °C/km.

Conventionally we say that the saturated adiabatic lapse rate is less than the dry adiabatic lapse rate. Because we write them as positive numbers.

Now note the caveats around the value for the moist parcel of air rising. I said “..in the tropics near the surface..”, but for the DALR there are no caveats. That’s because once we consider moisture we have to consider how much water vapor and the amount varies hugely depending on temperature (and also on other factors – see Clouds and Water Vapor – Part Three).

The maths is somewhat harder for the saturated adiabatic lapse rate but it’s not conceptually more difficult, there is just an addition of energy (from condensing water vapor) to offset the work done.
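For the curious, here is a sketch of the standard textbook pseudoadiabatic formula. This particular form and its constants are my assumption for illustration, not taken from Marshall & Plumb:

```python
import math

def saturated_lapse_rate(T, p):
    """Approximate pseudoadiabatic (saturated) lapse rate in K/m.
    Standard textbook form -- a sketch, with assumed constants.
    T in Kelvin, p in hPa."""
    g, cp, Rd, eps, Lv = 9.8, 1004.0, 287.0, 0.622, 2.5e6
    # Bolton's approximation for saturation vapor pressure (hPa)
    Tc = T - 273.15
    es = 6.112 * math.exp(17.67 * Tc / (Tc + 243.5))
    rs = eps * es / (p - es)          # saturation mixing ratio
    num = 1 + Lv * rs / (Rd * T)      # latent-heat release term
    den = cp + Lv**2 * rs * eps / (Rd * T**2)
    return g * num / den

gamma = saturated_lapse_rate(303.0, 1000.0)   # warm, moist near-surface tropical air
print(f"Saturated lapse rate ~ {gamma*1000:.1f} K/km")  # much less than the DALR's 9.8
```

Note how the extra terms all involve the saturation mixing ratio rs: as the air dries out (or cools), rs shrinks and the saturated rate approaches the dry rate.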

Potential Temperature

Potential temperature is usually written with the Greek letter θ.

θ = T.(p₀/p)^κ

where T = (real) temperature, p = pressure, p₀ = reference pressure (usually 1000 mbar) and κ = R/cp = 2/7 for our atmosphere (more on this in a later article)

With a bit of tedious maths we can prove that θ stays constant under adiabatic conditions (for dry air).

Let’s look at what that means.

Suppose the surface (1000 mbar) temperature = 288 K (15°C) so also θ = 288K.

Now the air is moved (adiabatically) to 800 mbar, so T = 270 K. That’s what you expect – temperature falls with height. And no change to potential temperature, so θ = 288 K.

Now we move the air to 600 mbar, and T = 249 K. More reduction of temperature. And still θ = 288 K.
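The numbers above are easy to reproduce from the definition of θ:

```python
def theta(T, p, p0=1000.0, kappa=2/7):
    """Potential temperature from actual temperature T (K) and pressure p (mbar)."""
    return T * (p0 / p) ** kappa

def T_from_theta(th, p, p0=1000.0, kappa=2/7):
    """Invert: actual temperature of air with potential temperature th at pressure p."""
    return th / ((p0 / p) ** kappa)

th = theta(288.0, 1000.0)                    # 288 K at the surface => theta = 288 K
print(round(T_from_theta(th, 800.0)))        # ~270 K at 800 mbar
print(round(T_from_theta(th, 600.0)))        # ~249 K at 600 mbar
```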

So is this a useful parameter – move the air (adiabatically) and the potential temperature stays the same?

The parameter is mathematically sound, but whether it is useful remains to be seen. As an artificial construct no doubt many people will be shaking their heads..

Stability and Potential Temperature Profile

In Density, Stability and Motion in Fluids we saw that for a fluid to be stable, lighter fluid must be above heavier fluid. No surprise to anyone.

And we saw that in mechanical terms equilibrium is different from stability.

An unstable equilibrium can exist, but a slight displacement will turn the instability into motion. Whereas with a stable equilibrium a slight displacement (or a large displacement) will result in a restoring force back to its original position. For the simplest case – an incompressible fluid – this means that the temperature must increase with height.

If you watched the accompanying video of a tank of water being heated from below you would have seen that the instability caused turbulent motion until finally the tank was well-mixed.

We left the more complex case of compressible fluids (like air) until today. What we will find is that for a compressible fluid, potential temperature plays the role that “real” temperature plays for an incompressible fluid.

So if potential temperature increases with height the fluid is stable, but if potential temperature decreases with height the fluid is unstable.

Let’s look at two examples:

Figure 3

On the left hand side we see an example where potential temperature decreases with height. At the surface, θ = 288 K but at 800 mbar, θ = 275 K. A parcel of air displaced adiabatically from the surface to 800 mbar will keep its potential temperature of 288 K. Now we convert that to real temperatures. The environmental temperature at 800 mbar is 258 K, but the parcel of air cools to only 270 K. This means the displaced parcel is warmer than the surroundings, so it is less dense – and therefore it keeps rising.

This case is unstable – clearly any air that starts rising or falling (perhaps due to atmospheric winds, pressure differentials, etc) will keep rising or falling.

On the right hand side we see potential temperature increasing with height. The parcel of air displaced from the surface to 800 mbar reaches the same temperature as on the left – 270 K. But here the environmental temperature is 281 K. So the parcel of air is cooler than the surrounding air, so it is more dense – and so it falls.

This case is stable – any air that starts rising or falling experiences a restoring force.

So the potential temperature profile with height tells us whether the atmosphere is stable, neutral or unstable. If potential temperature increases with height the atmosphere is stable, and if potential temperature decreases with height the atmosphere is unstable.
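The two cases of Figure 3 can be checked numerically. The stable-side environmental θ of about 299 K is my back-calculation from the 281 K quoted above:

```python
KAPPA = 2/7  # R/cp for dry air

def temperature_at(theta, p, p0=1000.0):
    """Actual temperature (K) of air with potential temperature theta at pressure p (mbar)."""
    return theta / ((p0 / p) ** KAPPA)

# A parcel lifted adiabatically from the surface (theta = 288 K) to 800 mbar:
T_parcel = temperature_at(288.0, 800.0)        # ~270 K

# Unstable case: environmental theta at 800 mbar is 275 K
T_env_unstable = temperature_at(275.0, 800.0)  # ~258 K

# Stable case: environmental temperature there is 281 K
# (equivalent to an environmental theta of roughly 299 K)
T_env_stable = 281.0

print(T_parcel > T_env_unstable)  # parcel warmer than surroundings: buoyant, keeps rising
print(T_parcel < T_env_stable)    # parcel cooler than surroundings: sinks back
```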

This is exactly the same as comparing the actual temperature change with the lapse rate.

Both answer the same question about atmospheric stability.

Moist Potential Temperature

The previous section slightly over-simplified things because potential temperature is with reference to dry air and yet moisture changes the way in which temperature decreases with height.

So here is the real deal – moist potential temperature. This is also known as equivalent potential temperature:

From Marshall & Plumb (2008)

Figure 4 – Click for a larger image

Here we see the “real potential temperature” and notice that especially in the tropics moist potential temperature is almost constant with height – up to the tropopause at 200 mbar. This is due to convection creating a well-mixed atmosphere. In the polar regions we see that the atmosphere is still quite stratified, which is due to the lack of convective mixing.

Conclusion

Potential temperature is very useful. It is a method of comparing the temperature of air at two different heights.

And if potential temperature is constant or increasing with height then the atmosphere is stable.

The atmosphere is mostly stable for dry air. If you refer back to figure 2 you see that (dry) potential temperature is quite stratified which means any displaced air experiences a restoring force. So it is moisture in the air that is the enabler for most of the convection that takes place. Figure 4 shows us that the atmosphere is “finely” balanced as far as moist convection is concerned.

(Remember of course that these graphs are annual mean values. It doesn’t mean that dry convection does not occur).

Potential temperature is also a useful metric because the change of potential temperature with height can be used to calculate the strength of the restoring force on displaced air. The result is the buoyancy frequency and the period of internal gravity waves.


The coriolis effect isn’t the easiest thing to get your head around, but it is an essential element in understanding the large scale motions of the atmosphere and the oceans.

If you roll a ball along a flat frictionless surface it keeps going in the same direction. This is because objects that have no forces on them continue in the same direction at the same speed. (The combination of direction and speed is known as velocity, which is a vector. A vector consists of a magnitude (e.g. speed) and a direction).

Well, that statement was not strictly true – because it wasn’t specific enough.

If you get onto a merry go round and launch your same ball in one direction you observe it move away in a curved arc. But someone above the merry go round, perhaps someone who had climbed up a pole and was looking down, would observe the ball moving in a straight line.

It’s all about frames of reference.

Now we live on a planet that is rotating, so we have to consider the “merry go round” effect.

There are two approaches for a mathematical basis (and we will keep the maths separated):

  • consider everything from an inertial frame – as if all motion was viewed from space (note 1)
  • consider everything from the surface of the planet

If we considered everything from space then the problem would actually be more difficult. On the plus side thrown balls would go in a straight line (as normal). On the minus side the boundaries of the oceans, mountains and everything else important would be constantly on the move and we would need mathematical trickery beyond most people’s comprehension.

So everyone goes for the second option – consider motion from the surface of the planet. This means the frame of reference is constantly on the move.

Coriolis

The excellent Atmosphere, Ocean and Climate Dynamics by Marshall & Plumb (2008) comes with a number of accompanying web pages most of which have some videos.

See GFDLab V: Inertial Circles – visualizing the Coriolis force for some detail and the video link, or click on the image below for the video link:

Figure 1 – Click for the video

  • the left hand video is the inertial frame of reference – stationary camera
  • the right hand video is the rotational frame of reference – the camera is moving with the turntable

This is the best video I have found for making clear what happens in a rotating frame.

With some relatively simple maths, the equations of motion in an inertial frame get transformed into a rotating frame of reference.

Two new terms get introduced:

  • the Coriolis acceleration = “stuff appears to veer off to the side as far as I can tell” effect
  • centrifugal acceleration = “things get thrown outwards like on a merry-go-round that goes very fast” effect

The centrifugal acceleration is not so significant, just a slight modifier of magnitude and direction to the very strong gravitational effect. But the Coriolis effect is very significant.

Now the Coriolis effect is easy to demonstrate on a rotating table, but we live on a rotating sphere and so there are some complexities that require the use of vector maths to calculate.

Mathematically it is easy to show that the Coriolis effect is modified by a factor relating to latitude. Specifically the effect is multiplied by the sine of the latitude, which means that at the equator the Coriolis effect is zero (sin 0° = 0), and at 30° it is half the maximum (sin 30°=0.5) and at the poles it has the full effect (sin 90° = 1.0).

I found it difficult to come up with a conceptual model which helps readers see why this is so. Readers who have had to think about resolving forces and rotations into orthogonal directions might be able to provide a conceptual picture – so please add a comment if you can. (Note 2).

Some Maths

The Coriolis effect has to be seen in the light of the other terms in the equation of motion.

The intimidating version, for those not used to the equations of motion for fluids in a Lagrangian formulation (note 3):

Du/Dt + 1/ρ.∇p + ∇φ + fz × u = Fr …..[1]

where bold characters are vectors, z is the unit vector in the upward direction, u = velocity vector (u,v,w), φ = gravitational potential modified by the centrifugal force, ρ = density, p = pressure and f = Coriolis parameter.

And in not-quite-plain English: the change in velocity with time (following a moving parcel of fluid) plus the pressure force plus the gravitational force plus the Coriolis force equals the frictional force (note that the terms are effectively per unit mass).

The Coriolis parameter:

f = 2Ω sinφ …..[2]

where Ω = the rotational speed of the earth (in radians/sec) = 2π/(24×3600) = 7.3 × 10⁻⁵ /s, and φ here is latitude
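Equation [2] is simple enough to evaluate directly, and doing so recovers the latitude dependence described earlier (zero at the equator, half the maximum at 30°, full effect at the poles):

```python
import math

OMEGA = 7.3e-5  # rad/s, Earth's rotation rate as given in the text

def f(lat_deg):
    """Coriolis parameter f = 2*Omega*sin(latitude), equation [2]."""
    return 2 * OMEGA * math.sin(math.radians(lat_deg))

print(f(0))    # 0 at the equator
print(f(30))   # half the maximum, i.e. equal to OMEGA
print(f(90))   # the maximum, 2*OMEGA
```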

And the simpler version in each local x,y,z direction with some simplifications applied (like the hydrostatic equilibrium approximation):

Du/Dt + 1/ρ . ∂p/∂x –  f.v = Fx ….(local x-direction) …[3a]

Dv/Dt + 1/ρ . ∂p/∂y + f.u = Fy ….(local y-direction) …[3b]

                  1/ρ . ∂p/∂z  + g = 0 ….(local z-direction) …[3c]

Geostrophic Balance and the Magnitude of the Coriolis Effect

Analysis of fluid flows is often carried out via non-dimensional ratios.

The Rossby number is the ratio of acceleration terms to the Coriolis force, and in the atmosphere at mid-latitudes is typically 0.1.

Another way of saying this is that the acceleration terms in equation 3 are a lot smaller than the Coriolis term. And in the free atmosphere (away from the boundary layer with the earth’s surface) the friction terms are negligible. This simplifies equation 3:

ug = – 1/fρ . ∂p/∂y ….[4a]

vg =   1/fρ . ∂p/∂x ….[4b]

With ug, vg defining the solution – geostrophic balance – to these simplified equations. This tells us that the E-W wind speed is proportional to the pressure change in the N-S direction, and the N-S wind speed is proportional to the pressure change in the E-W direction.
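Plugging illustrative mid-latitude numbers (my assumptions, chosen to be typical) into equation [4a]:

```python
# Geostrophic wind sketch from equation [4a] -- illustrative numbers, not from the text
rho = 1.2        # kg/m^3, assumed air density
f_cor = 1.0e-4   # /s, Coriolis parameter at roughly 45 degrees latitude
dp_dy = -1.0e-3  # Pa/m, pressure falling toward the pole (~1 hPa per 100 km)

u_g = -(1 / (f_cor * rho)) * dp_dy   # E-W geostrophic wind
print(f"u_g = {u_g:.1f} m/s")        # a plausible mid-latitude westerly
```

A modest large-scale pressure gradient of 1 hPa per 100 km thus balances a wind of order 10 m/s, flowing along (not across) the isobars.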

From Marshall & Plumb (2008)

Figure 2 – Colored text added

What might be surprising is that instead of the wind flowing from high to low pressure, it flows at right angles – along the lines of constant pressure.

So of course we have to ask whether these simplifications are justified..

Here is a sample of the 500 mbar wind and geopotential height:

From Marshall & Plumb (2008)

Figure 3

We can see that the wind at 500 mbar (about 5km high) is quite close to geostrophic balance.

By contrast, if we look at surface winds:

From Marshall & Plumb (2008)

Figure 4

Here we see that the wind is flowing more across the pressure field from high to low pressure – this is because of the effect of friction at the surface. The friction term in equation 3 cannot be ignored when we want to calculate the motion near boundary layers.

Conclusion

This is just an interesting part of climate science. The large scale atmospheric and oceanic motion is fascinating and also necessary for understanding the science of climate.

Notes

Note 1: Even watching the planet from space is not an inertial frame of reference as the earth is rotating around the sun, and the sun is rotating around the center of the galaxy, etc, etc.. To avoid this article being a 100 page unfathomable treatise on rederiving the equations of motion, there are necessarily many simplifications, offered without caveat or explanation.

Note 2: The components of the Coriolis force on the surface of a sphere are calculated from Ω x u (where the “x” is the vector cross product, not “times”).

Ω × u = (0,  Ωcosφ,  Ωsinφ) × (u,  v,  w)

            = (Ωcosφ.w – Ωsinφ.v,   Ωsinφ.u,  -Ωcosφ.u)

w is the vertical component of wind and is generally very small compared with the horizontal components. At the equator (φ=0°), sinφ = 0, so:

Ω × u = (Ωcosφ.w,   0,  -Ωcosφ.u)

The u-direction (W-E) term is very small because w is very small, and the w-direction (vertical) term is not important because it competes with the much larger gravity term.
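The cross product is easy to verify numerically:

```python
import math

def omega_cross_u(u, v, w, lat_deg, Omega=7.3e-5):
    """Omega x u with Omega = (0, Omega*cos(phi), Omega*sin(phi)) in local coordinates."""
    phi = math.radians(lat_deg)
    Oy, Oz = Omega * math.cos(phi), Omega * math.sin(phi)
    # cross product (0, Oy, Oz) x (u, v, w)
    return (Oy * w - Oz * v, Oz * u, -Oy * u)

# At the equator (phi = 0) the sin(phi) terms vanish:
x, y, z = omega_cross_u(10.0, 5.0, 0.01, 0.0)
print(y)  # 0.0 -- no horizontal deflection of horizontal wind at the equator
```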

Note 3: The term D/Dt has a specific meaning that might be new to many people. This is the Lagrangian differential, which is the change in the property of a fluid following that element of fluid. Rather than the change in property of a fluid at a fixed point in space.

D/Dt ≡ ∂/∂t + u∂/∂x + v∂/∂y + w∂/∂z, where u = (u,v,w) is the velocity vector


In a discussion a little while ago on What’s the Palaver? – Kiehl and Trenberth 1997, one of our commenters asked about the surface forcing and how it could possibly lead to anything like the IPCC-projected temperature change for doubling of CO2.

Following a request for clarification, he added:

..We first look at the RHS. We believe that the atmosphere will also increase in temperature by roughly the same amount, so there will be no change in the conductive term. The increase in the Radiative term is roughly 5.5W/m².

The increase in the evaporative term is much more difficult, but is believed to be in the range 2-7%/DegC. So the increase in the evaporative term is 1.5 to 5.5W/m², for a total change on the RHS of 7 to 11 W/m².

Since balance is an assumption, the LHS changed by the same amount. The surface sensitivity is therefore 0.095 to 0.15 DegC/W/m².

Note that this is the sensitivity to changes in Surface Forcing, whatever the source. It is NOT the response to Radiative Forcing – there is no response of the surface to Radiative Forcing, it can only respond to Sunlight and Back-Radiation.

[See the whole comment and exchange for the complete picture].

These are good questions and no doubt many people have similar ones. The definition of radiative forcing (see CO2 – An Insignificant Trace Gas? Part Seven – The Boring Numbers) is at the tropopause, which is the top of the troposphere (around 12km above the surface).

Why is it at the tropopause and not at the surface? The great Ramanathan explains (in his 1998 review paper):

..Manabe & Wetherald’s [1967] paper, which convincingly demonstrated that the CO2-induced surface warming is not solely determined by the energy balance at the surface but by the energy balance of the coupled surface-troposphere-stratosphere system.

The underlying concept of the Manabe-Wetherald model is that the surface and the troposphere are so strongly coupled by convective heat and moisture transport that the relevant forcing governing surface warming is the net radiative perturbation at the tropopause, simply known as radiative forcing.

In essence, the reason we consider the value at the tropopause is that it is the best value to tell us what will happen at the surface. It is now an idea established for over 40 years, although for some it might sound bizarre. So we will try and make sense of it here.

Here is a schematic originating in Ramanathan’s 1981 paper, but extracted here from his 1998 review paper:

From Ramanathan (1998)

From Ramanathan (1998)

Figure 1

The first thing to pay attention to is the right hand side – 1. CO2 direct surface heating – which is shown as 1.2 W/m².

The surface forcing from a doubling of CO2 is around 1 W/m² compared with around 4 W/m² at the tropopause. The surface forcing is a lot less than at the top of atmosphere!

Before too much joy sets in, let’s consider what these concepts represent. They are essentially idealized quantities, derived from considering the instantaneous change in concentrations of CO2.

As CO2 shows a steady increase year on year, the idea of doubling overnight is clearly not in accord with reality. However, it is a useful comparison point and helps to get many ideas straight. If instead we said, “CO2 increasing by 1% per year”, we would need to define a time period for this 1% annual increase, plus how long after the end before a new balance was restored. It wouldn’t make solving the problem any easier – and it would make the results harder to understand. By contrast, GCMs do consider a steadily rising CO2 level according to whatever scenario they are considering.

So, with the idea of an instantaneous doubling, if the surface increase in radiative forcing is less than the tropopause increase in radiative forcing, this must mean that the balance of energy is absorbed within the atmosphere. And also, we have to consider what happens as a result of the surface energy imbalance.

The numbers I use here are Ramanathan’s numbers from his 1981 paper. Later, more accurate numbers have been calculated but they don’t affect the main points of this analysis. The reason for reviewing his analysis is that some (but not all) of the inherent responses of the climate system are explicitly calculated – making it easier to understand than the output of a GCM.

Immediate Response

The immediate result of this doubling of CO2 is a reduced emission of radiation (OLR = outgoing longwave radiation) from the climate system into space. See the Atmospheric Radiation and the “Greenhouse” Effect series for detailed explanations of why.

At the tropopause the OLR reduces by 3.1 W/m², and downward emission from the stratosphere into the troposphere increases by 1.2 W/m².

This results in a net forcing at the tropopause of 4.3 W/m². Most of the radiation from the atmosphere to the surface (as a result of more CO2) is absorbed by water vapor. So at the surface the DLR (downward longwave radiation) increases by only 1.2 W/m² – this is the (immediate) surface forcing. Here is a simple graphical explanation of why the OLR decreases and the DLR increases:

Figure 2 – Click for a larger image

Response After a Few Months

The stratosphere cools and reaches a new radiative equilibrium. This reduces the downward emission from the stratosphere by a small amount. The new value of radiative forcing at the tropopause = 4.2 W/m².

Response After Many Decades

The surface-troposphere warms until a new equilibrium is reached – the radiative forcing at the tropopause has returned to zero.

The Surface

So let’s now consider the surface. Take a look at Figure 1 again. The values/ranges we will consider are calculated by a model. This doesn’t mean they are correct. It means that applying well-understood processes in a simplistic way gives us a “first order” result. The reason for assessing this kind of approach is because our mental models are usually less accurate than a calculated result which draws on well-understood physics.

As Ramanathan says in his 1998 paper:

As a caveat, the system we considered up to this point to elucidate the principles of warming is a highly simplified linear system. Its use is primarily educational and cannot be used to predict actual changes.

Process 1 is as already described – the surface forcing increases by just over 1 W/m². But the balance of 3 W/m² goes into heating the troposphere.

Process 2 – The warming of the troposphere results in increased downward radiation to the surface (because the hotter the body, the higher the radiation emitted). The calculated value is an additional 2.3 W/m², so the surface imbalance is now 3.5 W/m² and the surface temperature must increase in response. Upward surface radiation and/or sensible and latent heat will increase to balance.

Process 3 – The surface emission of radiation increases at around 5.5 W/m² for every 1°C of surface temperature increase. But this is almost balanced by increased downward radiation from the atmosphere (“back radiation”). The net effect is only about 10% of the change in upward radiation. So latent heat and sensible heat increase to restore the energy balance, but this also heats the troposphere.
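The ~5.5 W/m² per 1°C figure in Process 3 is just the derivative of the Stefan-Boltzmann law, dF/dT = 4σT³. A quick numerical sketch (my own assumed surface temperature of 288 K, not a value from the paper):

```python
# Check of the ~5.5 W/m^2 per 1 C figure in Process 3: the derivative
# of the Stefan-Boltzmann law, dF/dT = 4*sigma*T^3, at an assumed 288 K.
sigma = 5.67e-8        # Stefan-Boltzmann constant, W/m^2.K^4
T_surface = 288.0      # assumed typical surface temperature, K

dF_dT = 4.0 * sigma * T_surface**3   # W/m^2 per K of surface warming
print(round(dF_dT, 1))               # 5.4
```

At a slightly warmer surface (~290 K) the value rises to the ~5.5 W/m² quoted above.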

Process 4 – The tropospheric humidity increases. This increases the emissivity of the atmosphere near the surface, which increases the back radiation.

So essentially some cycles are reinforcing each other (=positive feedback). The question is about the value of the new equilibrium point.

From Ramanathan (1981)

Figure 3

In Ramanathan’s 1981 paper he gives some basic calculations before turning to GCM results. The basic calculations are quite interesting because one of the purposes of the paper was to explain why some model results of the day produced very small equilibrium temperature changes.

Sadly for some readers, a little maths is necessary to reproduce the result. It is simple maths because it is based on simple concepts – as already presented. As much as possible I follow the equation numbers and notations from Ramanathan’s 1981 paper.

Calculations

Energy balance at an “average” surface:

Upward flux = Downward flux

→  LH + SH + F↑ = F↓ + S + ΔR  ….[2]

where LH = latent heat, SH = sensible heat, F↑ = surface emitted upward radiation, F↓ = surface downward radiation from the atmosphere, S = solar radiation absorbed, ΔR = instantaneous change in energy absorbed at the surface due to an increase in CO2

And see note 1. We have simple formulas for the left hand side.

F↑ = σTM⁴ ….[3a]

Latent heat and sensible heat flux have “bulk aerodynamic formulas” (note 2):

LH = ρLCDV (q*M – qS)   ….[3b]

SH = ρcpCDV (TM – TS)   ….[3c]

Where ρ = density of air = 1.3 kg/m³, L = latent heat of vaporization of water = 2.5 x 10⁶ J/kg, CD = empirically determined coefficient ≈ 1.3 x 10⁻³,  V = average wind speed at some reference height above the surface ≈ 5 m/s, q*M = specific humidity at saturation at the surface temperature of the ocean,  qS = specific humidity at the reference height,  TM = temperature of the ocean at the surface,  TS = temperature of the air at the reference height (typically 10m).

To give an idea of typical values: for every 1°C difference between the surface and the air at the reference height, SH increases by about 8.5 W/m², and with a relative humidity of 80% at the reference height (and 100% at the ocean surface), LH ≈ 55 W/m².
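As a sketch of the bulk aerodynamic formulas [3b] and [3c] with the values above (I use my own Magnus-type approximation for saturation humidity, so the exact numbers differ slightly from the paper's):

```python
# Sketch of the bulk aerodynamic formulas [3b] and [3c] using the
# representative values quoted in the text; the saturation-humidity
# approximation is my own, so results are indicative only.
import math

rho = 1.3     # density of air, kg/m^3
L = 2.5e6     # latent heat of vaporization of water, J/kg
cp = 1004.0   # specific heat of air at constant pressure, J/kg.K
CD = 1.3e-3   # empirically determined bulk transfer coefficient
V = 5.0       # average wind speed at the reference height, m/s

def q_sat(T_celsius):
    """Approximate saturation specific humidity (kg/kg) near 1000 hPa,
    via a Magnus-type formula for saturation vapor pressure."""
    es = 610.94 * math.exp(17.625 * T_celsius / (T_celsius + 243.04))  # Pa
    return 0.622 * es / 101325.0

def sensible_heat(T_ocean, T_air):
    return rho * cp * CD * V * (T_ocean - T_air)    # eq. [3c], W/m^2

def latent_heat(T_ocean, rh_air, T_air):
    return rho * L * CD * V * (q_sat(T_ocean) - rh_air * q_sat(T_air))  # eq. [3b]

print(round(sensible_heat(16.0, 15.0), 1))   # ~8.5 W/m^2 per 1 C difference
print(round(latent_heat(15.0, 0.8, 15.0)))   # ~44 W/m^2 at 80% RH, 15 C
```

The sensible heat sensitivity reproduces the ~8.5 W/m² per °C quoted above; the latent heat flux comes out around 44 W/m² at 15°C, rising toward the quoted ~55 W/m² for a warmer ocean surface.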

Now we consider changes.

TM‘ is the change in the surface temperature of the ocean as the result of the increased CO2, and similar notation for other changes in values. Missing out a few steps that you can read in the paper:

TM‘ = [ΔR(0) + ΔF↓(2) + ΔF↓(3)] / { [∂LH/∂TM + ∂SH/∂TM + 4σTM³] + [∂LH/∂TS + ∂SH/∂TS]·TS‘/TM‘ }   ….[13]

This probably seems a little daunting to a lot of readers.. so let’s explain it:

  • The first term on the top line, ΔR(0), is the surface radiative forcing from the increase in CO2
  • The “red” terms, ΔF↓(2) and ΔF↓(3), are the changes in downward radiation as a result of processes 2 and 3 described above
  • The “blue” terms, ∂LH/∂TM, ∂SH/∂TM and 4σTM³, are the changes in upward flux due to only the ocean surface temperature changing
  • The “green” terms, ∂LH/∂TS and ∂SH/∂TS, are the changes in upward flux due to only the atmospheric temperature near the surface changing
  • The blue terms total ≈ 30 W/m²K @ 15°C; the green terms total ≈ -8.5 W/m²K @ 15°C (note 3)

The smaller the denominator, the higher the increase in temperature. And there are two competing terms within it:

  • As the surface temperature of the ocean increases the heat transfer from the ocean to the atmosphere increases
  • As the atmospheric temperature (just above the ocean surface) increases the heat transfer from the ocean to the atmosphere decreases

As an interesting comparison, Ramanathan reviewed the methods and results of Newell & Dopplick (1979), who found a change in surface temperature of TM‘ = 0.04°C as a result of CO2 doubling – effectively, very little change in surface temperature from a doubling of CO2.

Ramanathan states that the calculations of Newell & Dopplick had ignored the red terms and the green terms. Ignoring the red terms means that the heating of the atmosphere is ignored. Ignoring the green terms means that the effect of the ocean surface heating is inflated – if the ocean surface heats and the atmosphere just above somehow stayed the same then the heat transferred would be higher than if the atmospheric temperature also increased as a result. (Because heat transfer depends on temperature difference).
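The effect of dropping those terms can be seen in a hypothetical back-of-envelope version of equation [13], using only the illustrative numbers quoted in this article (surface forcing 1.2 W/m², extra back radiation 2.3 W/m², blue terms ≈ 30 W/m²K, green terms ≈ -8.5 W/m²K). This is not Ramanathan's full calculation, which iterates these terms together with the humidity feedback of process 4:

```python
# Back-of-envelope eq. [13] with the illustrative numbers from the text.
dR0 = 1.2       # surface radiative forcing from CO2 doubling, W/m^2
dF_down = 2.3   # increased back radiation from the warmed troposphere, W/m^2
blue = 30.0     # d(upward surface flux)/dT_M, W/m^2.K
green = -8.5    # d(upward surface flux)/dT_S, W/m^2.K

# Assume the near-surface air warms along with the ocean (T_S'/T_M' ~ 1):
T_M_full = (dR0 + dF_down) / (blue + green * 1.0)

# Newell & Dopplick style: drop the "red" terms (no atmospheric heating)
# and the "green" terms (air temperature held fixed):
T_M_ND = dR0 / blue

print(round(T_M_full, 2))   # 0.16
print(round(T_M_ND, 2))     # 0.04 - reproduces their tiny result
```

Even this crude sketch quadruples the answer once the red and green terms are included, and the full treatment with humidity feedback raises it much further.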

I expect that many people doing their own estimates will be working from similar assumptions.

Later Work

Here is a graphic from Andrews et al (2009), reference and free link below, which shows the simplified idea:

From Andrews et al (2009)

Figure 4

The paper itself is well worth reading and perhaps will be the subject of another article at a later date.

Conclusion

I haven’t demonstrated that the surface will warm by 3°C for a doubling of CO2. But I hope I have demonstrated the complexity of the processes involved and why a simplistic calculation of how the surface responds immediately to the surface forcing is not the complete answer. It is nowhere near the complete answer.

The surface temperature change as a result of doubling of CO2 is, of course, a massively important question to answer. GCMs are necessarily involved despite their limitations.

Re-iterating what Ramanathan said in his 1998 paper in case anyone thinks I am making a case for a 3°C surface temperature increase:

As a caveat, the system we considered up to this point to elucidate the principles of warming is a highly simplified linear system. Its use is primarily educational and cannot be used to predict actual changes.

References

Trace Gas Greenhouse Effect and Global Warming, V. Ramanathan, Ambio (1998)

The role of ocean-atmosphere interactions in the CO2 climate problem, V Ramanathan, Journal of Atmospheric Sciences (1981)

Thermal equilibrium of the atmosphere with a given distribution of atmospheric humidity, Manabe & Wetherald, Journal of Atmospheric Sciences (1967)

A Surface Energy Perspective on Climate Change, Andrews, Forster & Gregory, Journal of Climate (2009)

Notes

Note 1: The equation ignores the transfer of heat into the ocean depths

Note 2: The “bulk aerodynamic formulas” – as they have become known – are more usable versions of the fundamental equations of heat and water vapor flux. Upward sensible heat flux, SH = ρcp<wT>, where w = vertical velocity, T = temperature, so <wT> is the time average of the product of vertical velocity and temperature. However, turbulent motions are so rapid, changing on such short time intervals that measurement of these values is usually impossible (or requires intensive measurement with specialist equipment in one location). We can write,

w = <w> + w’, where <w> = mean vertical velocity and w’ = deviation of vertical velocity from the mean, likewise T = <T> + T’.

So:

<wT> = <w><T> + <w’ T’> or, Total = Mean + Eddy

Near the surface the mean vertical motion is very small compared with the turbulent vertical velocity and so the turbulent component, <w’ T’>, dominates. Therefore,

SH = ρcp <w’ T’>

LH = ρL <w’ q’>

where cp = specific heat capacity of air, ρ = density of air, L = latent heat of vaporization of water, and q = specific humidity, decomposed into mean and eddy parts in the same way as w and T

By various thermodynamic arguments, and especially by lots of empirical measurements, an estimate of heat transfer can be made via the bulk aerodynamic formulas shown above, which use the average horizontal wind speed at the surface in conjunction with the coefficients of heat transfer, which are related to the friction term for the wind at the ocean surface.
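The decomposition in this note can be illustrated with entirely synthetic, made-up numbers, just to show that Total = Mean + Eddy is an exact identity for sample averages:

```python
# Toy Reynolds decomposition of a turbulent flux, as in Note 2.
# The w and T series are synthetic and purely illustrative.
import random

random.seed(0)
n = 10000
# Small mean ascent plus large turbulent eddies:
w = [0.01 + random.gauss(0.0, 0.5) for _ in range(n)]
# Temperature fluctuations correlated with w' so there is a genuine eddy flux:
T = [288.0 + 0.2 * (wi - 0.01) + random.gauss(0.0, 0.1) for wi in w]

def mean(x):
    return sum(x) / len(x)

wm, Tm = mean(w), mean(T)
total = mean([wi * Ti for wi, Ti in zip(w, T)])
eddy = mean([(wi - wm) * (Ti - Tm) for wi, Ti in zip(w, T)])

# <wT> = <w><T> + <w'T'> holds exactly (up to floating point):
print(abs(total - (wm * Tm + eddy)) < 1e-9)   # True
print(eddy > 0)                               # True: an upward eddy heat flux
```

Near the surface the mean term wm·Tm is dominated by the tiny mean vertical velocity, so in practice it is the eddy term that carries the heat.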

Note 3: The calculation of each of the partial derivative terms is not shown in the paper, these are my calculations. I believe that ∂LH/∂TS = 0, most of the time – this is because if the atmosphere at the reference height is not saturated then an increase in the atmospheric temperature, TS, does not change the moisture flux, and therefore, does not change the latent heat. I might be wrong about this, and clearly some of the time this assumption I have made is not valid.

Read Full Post »

A long time ago I started writing this article. I haven’t yet finished it.

I realized that trying to write it was difficult because the audience criticism was so diverse. Come to me you huddled masses.. This paper, so simple in concept, has somehow become the draw card for “everyone against AGW”. The reasons why are not clear, since the paper has nothing to do with that.

As I review the “critiques” around the blogosphere, I don’t find any consistent objection. That makes it very hard to write about.

So, the reason for posting a half-finished article is for readers to say what they don’t agree with and maybe – if there is a consistent message/question – I will finish the article, or maybe answer the questions here. If readers think that the ideas in the paper somehow violate the first or second law of thermodynamics, please see note 1 and comment in those referenced articles. Not here.

==== part written article ===

In 1997, J. T. Kiehl and Kevin Trenberth’s paper was published, Earth’s Annual Global Mean Energy Budget. (Referred to as KT97 for the rest of this article).

For some reason it has become a very unpopular paper, widely criticized, and apparently viewed as “the AGW paper”.

This is strange as it is a paper which says nothing about AGW, or even possible pre-feedback temperature changes from increases in the inappropriately-named “greenhouse” gases.

KT97 is a paper which attempts to quantify the global average numbers for energy fluxes at the surface and the top of atmosphere. And to quantify the uncertainty in these values.

Of course, many people criticizing the paper believe the values violate the first or second law of thermodynamics. I won’t comment in the main article on the basic thermodynamic laws – for this, check out the links in note 1.

In this article I will try and explain the paper a little.  There are many updates from various researchers to the data in KT97, including Trenberth & Kiehl themselves (Trenberth, Fasullo and Kiehl 2009), with later and more accurate figures.

We are looking at this earlier paper because it has somehow become such a focus of attention.

Most people have seen the energy budget diagram as it appears in the IPCC TAR report (2001), but here it is reproduced for reference:

From Kiehl & Trenberth (1997)

History and Utility

Many people have suggested that the KT97 energy budget is some “new invention of climate science”. And at the other end of the spectrum at least one commenter I read was angered by the fact that KT97 had somehow claimed this idea for themselves when many earlier attempts had been made long before KT97.

The paper states:

There is a long history of attempts to construct a global annual mean surface–atmosphere energy budget for the earth. The first such budget was provided by Dines (1917).

Compared with “imagining stuff”, reading a paper is occasionally helpful. KT97 is simply updating the field with the latest data and more analysis.

What is an energy budget?

It is an attempt to identify the relative and absolute values of all of the heat transfer components in the system under consideration. In the case of the earth’s energy budget, the main areas of interest are the surface and the “top of atmosphere”.

Why is this useful?

Well, it won’t tell you the likely temperature in Phoenix next month, whether it will rain more next year, or whether the sea level will change in 100 years.. but it helps us understand the relative importance of the different heat transfer mechanisms in the climate, and the areas and magnitude of uncertainty.

For example, the % of reflected solar radiation is now known to be quite close to 30%. That equates to around 103 W/m² of solar radiation (see note 2) that is not absorbed by the climate system. Compared with the emission of radiation from the earth’s climate system into space – 239 W/m² – this is significant. So we might ask – how much does this reflected % change? How much has it changed in the past? See The Earth’s Energy Budget – Part Four – Albedo.

In a similar way, the measurements of absorbed solar radiation and emitted thermal radiation into space are of great interest – do they balance? Is the climate system warming or cooling? How much uncertainty do we have about these measurements?

The subject of the earth’s energy budget tries to address these kinds of questions and therefore it is a very useful analysis.

However, it is just one tiny piece of the jigsaw puzzle called climate.

Uncertainty

It might surprise many people that KT97 also say:

Despite these important improvements in our understanding, a number of key terms in the energy budget remain uncertain, in particular, the net absorbed shortwave and longwave surface fluxes.

And in their conclusion:

The purpose of this paper is not so much to present definitive values, but to discuss how they were obtained and give some sense of the uncertainties and issues in determining the numbers.

It’s true. There are uncertainties and measurement difficulties. Amazing that they would actually say that. Probably didn’t think people would read the paper..

AGW – “Nil points”

What does this paper say about AGW?

Nothing.

What does it say about feedback from water vapor, ice melting and other mechanisms?

Nothing.

What does it say about the changes in surface temperature from doubling of CO2 prior to feedback?

Nothing.

Top of Atmosphere

Since satellites started measuring:

  • incoming solar (shortwave) radiation
  • reflected solar radiation
  • outgoing terrestrial (longwave) radiation

– it has become much easier to understand – and put boundaries around – the top of atmosphere (TOA) energy budget.

The main challenge is the instrument uncertainty. So KT97 consider the satellite measurements. The most accurate results available (at that time) were from five years of ERBE data (1985-1989).

From those results, the outgoing longwave radiation (OLR) from ERBE averaged 235 W/m² while the absorbed solar radiation averaged 238 W/m². Some dull discussion of error estimates from various earlier papers follows. The main result is that the error estimates are on the order of 5 W/m², so it isn’t possible to pin down the satellite results any closer than that.

KT97 concludes:

Based on these error estimates, we assume that the bulk of the bias in the ERBE imbalance is in the shortwave absorbed flux at the top of the atmosphere, since the retrieval of shortwave flux is more sensitive than the retrieval of longwave flux to the sampling and modeling of the diurnal cycle, surface and cloud inhomogeneities.

Therefore, we use the ERBE outgoing longwave flux of 235 W/m² to define the absorbed solar flux.

What are they saying? That – based on the measurements and error estimates – a useful working assumption is that the earth (over this time period) is in energy balance and so “pick the best number” to represent that. Reflected solar radiation is the hardest to measure accurately (because it can be reflected in any direction) so we assume that the OLR is the best value to work from.

If the absorbed solar radiation and the OLR had been, say, 25 W/m² apart then the error estimates couldn’t have bridged this gap. And the choices would have been:

  • the first law of thermodynamics was wrong (150 years of work proven wrong)
  • the earth was cooling (warming) – depending on the sign of the imbalance
  • a mystery source of heating/cooling hadn’t been detected
  • one or both of the satellites was plain wrong (or the error estimates had major mistakes)

So all the paper is explaining about the TOA results is that the measurement results don’t justify concluding that the earth is out of energy balance and therefore they pick the best number to represent the TOA fluxes. That’s it. This shouldn’t be very controversial.

And also note that during this time period the ocean heat content (OHC) didn’t record any significant increase, so an assumption of energy balance during this period is reasonable.

And, as with any review paper, KT97 also include the results from previous studies, explaining where they agree and where they differ and possible/probable reasons for the differences.

In their later update of their paper (2009) they use the results of a climate model for the TOA imbalance. This comes to 0.9 W/m². In the context of the uncertainties they discuss this is not so significant. It is simply a matter of whether the TOA fluxes balance or not. This is something that is fundamentally unknown over a given 5-year or decadal time period.

As an exercise for the interested student, if you review KT97 with the working assumption that the TOA fluxes are out of balance by 1W/m², what changes of note take place to the various values in the 1997 paper?

Surface Fluxes

This is the more challenging energy balance. At TOA we have satellites measuring the radiation quite comprehensively – and we have only radiation as the heat transfer mechanism for incoming and outgoing energy.

At the surface the measurement systems are less complete. Why is that?

Firstly, we have movement of heat from the surface via latent heat and sensible heat – as well as radiation.

Secondly, satellites can measure only a small fraction of the upward emitted surface radiation and none of the downward radiation at the surface.

Surface Fluxes – Radiation

To calculate the surface radiation, upward and downward, we need to rely on theory, on models.

You mean made up stuff that no one has checked?

Well, that’s what you might think if you read a lot of blogs that have KT97 on their hit list. It’s easy to make claims.

In fact, if we want to know on a global annual average basis what the upward and downward longwave fluxes are, and if we want to know the solar (shortwave) fluxes that reach the surface (vs absorbed in the atmosphere), we need to rely on models. This is simply because we don’t have thousands of high quality radiation-measuring stations.

Instead we do have a small network of high-quality monitoring stations for measuring downward radiation – the BSRN (baseline surface radiation network) was established by the World Climate Research Programme (WCRP) in the early 1990’s. See The Amazing Case of “Back Radiation”.

The important point is that, for the surface values of downward solar and downward longwave radiation we can check the results of theory against measurements in the places where measurements are available. This tells us whether models are accurate or not.

To calculate the values of surface fluxes with the resolution to calculate the global annual average we need to rely on models. For many people, their instinctive response is that obviously this is not accurate. Instinctive responses are not science, though.

Digression – Many Types of Models

There are many different types of models. For example, if we want to know the value of the DLR (downward longwave radiation) at the surface on Nov 1st, 2210 we need to be sure that some important parameters are well-known for this date. We would need to know the temperature of the atmosphere as a function of height through the atmosphere – and also the concentration of CO2, water vapor, methane – and so on. We would need to predict all of these values successfully for Nov 1st, 2210.

The burden of proof is quite high for this “prediction”.

However, if we want to know the average value of DLR for 2009 we need to have a record of these parameters at lots of locations and times and we can do a proven calculation for DLR at these locations and times.

An Analogy – It isn’t much different from calculating how long the water will take to boil on the stove – we need to know how much water, the initial temperature of the water, the atmospheric temperature and what level you turned the heat to. If we want to predict this value for the future we will need to know what these values will be in the future. But to calculate the past is easy – if we already have a record of these parameters.

See Theory and Experiment – Atmospheric Radiation for examples of verifying theory against experiment.

End of Digression

And if we want to know the upward fluxes we need to know the reflected portion.

Related Articles

Kiehl & Trenberth and the Atmospheric Window

The Earth’s Energy Budget – Part One – a few climate basics.

The Earth’s Energy Budget – Part Two –  the important concept of energy balance at top of atmosphere.

References

Earth’s Annual Global Mean Energy Budget, Kiehl & Trenberth, Bulletin of the American Meteorological Society (1997) – free paper

Earth’s Global Energy Budget, Trenberth, Fasullo & Kiehl, Bulletin of the American Meteorological Society (2009) – free paper

Notes

Note 1 – The First Law of Thermodynamics is about the conservation of energy. Many people believe that because the temperature is higher at the surface than the top of atmosphere this somehow violates this first law. Check out Do Trenberth and Kiehl understand the First Law of Thermodynamics? as well as the follow-on articles.

The Second Law of Thermodynamics is about entropy increasing, due to heat flowing from hotter to colder. Many have created an imaginary law which apparently stops energy from radiation from a colder body being absorbed by a hotter body. Check out these articles:

Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics

The Three Body Problem

Absorption of Radiation from Different Temperature Sources

The Amazing Case of “Back Radiation” – Part Three and Part One and Part Two

Note 2 – When comparing solar radiation with radiation emitted by the climate system there is a “comparison issue” that has to be taken into account. Solar radiation is “captured” by an area of πr² (the area of a disc) because the solar radiation comes from a point source a long way away. But terrestrial radiation is emitted over the whole surface of the earth, an area of 4πr². So if we are talking about W/m² either we need to multiply terrestrial radiation by a factor of 4 to equate the two, or divide solar radiation by a factor of 4 to equate the two. The latter is conventionally chosen.

More about this in The Earth’s Energy Budget – Part One
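The factor-of-4 arithmetic in Note 2 can be sketched numerically, assuming a solar constant of ~1368 W/m² and the ~30% albedo quoted earlier (both values are conventional, not taken from KT97's exact figures):

```python
# The pi*r^2 vs 4*pi*r^2 averaging described in Note 2, with an assumed
# solar constant and the ~30% albedo quoted earlier in the article.
S0 = 1368.0                # solar constant at Earth's orbit, W/m^2 (assumed)
avg_incoming = S0 / 4.0    # disc area / sphere area = 1/4
reflected = 0.30 * avg_incoming
absorbed = avg_incoming - reflected

print(round(avg_incoming), round(reflected), round(absorbed))   # 342 103 239
```

This reproduces the ~103 W/m² reflected and ~239 W/m² absorbed/emitted figures quoted in the article.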

Read Full Post »

During a discussion following one of the six articles on Ferenc Miskolczi someone pointed to an article in E&E (Energy & Environment). I took a look and had a few questions.

The article in question is The Thermodynamic Relationship Between Surface Temperature And Water Vapor Concentration In The Troposphere, by William C. Gilbert from 2010. I’ll call this WG2010. I encourage everyone to read the whole paper for themselves.

Actually this E&E edition is a potential collector’s item because they announce it as: Special Issue – Paradigms in Climate Research.

The author comments in the abstract:

 The key to the physics discussed in this paper is the understanding of the relationship between water vapor condensation and the resulting PV work energy distribution under the influence of a gravitational field.

Which sort of implies that no one studying atmospheric physics has considered the influence of gravitational fields, or at least the author has something new to offer which hasn’t previously been understood.

Physics

Note that I have added a WG prefix to the equation numbers from the paper, for ease of referencing:

First let’s start with the basic process equation for the first law of thermodynamics
(Note that all units of measure for energy in this discussion assume intensive properties, i.e., per unit mass):

dU = dQ – PdV ….[WG1]

where dU is the change in total internal energy of the system, dQ is the change in thermal energy of the system and PdV is work done to or by the system on the surroundings.

This is (almost) fine. The author later mixes up Q and U. dQ is the heat added to the system; dU is the change in internal energy, which includes the thermal energy.

But equation (1) applies to a system that is not influenced by external fields. Since the atmosphere is under the influence of a gravitational field the first law equation must be modified to account for the potential energy portion of internal energy that is due to position:

dU = dQ + gdz – PdV ….[WG2]

where g is the acceleration of gravity (9.8 m/s²) and z is the mass particle vertical elevation relative to the earth’s surface.

[Emphasis added. Also I changed “h” into “z” in the quotes from the paper to make the equations easier to follow later].

This equation is incorrect, which will be demonstrated later.

The thermal energy component of the system (dQ) can be broken down into two distinct parts: 1) the molecular thermal energy due to its kinetic/rotational/ vibrational internal energies (CvdT) and 2) the intermolecular thermal energy resulting from the phase change (condensation/evaporation) of water vapor (Ldq). Thus the first law can be rewritten as:

dU = CvdT + Ldq + gdz – PdV ….[WG3]

where Cv is the specific heat capacity at constant volume, L is the latent heat of condensation/evaporation of water (2257 J/g) and q is the mass of water vapor available to undergo the phase change.

Ouch. dQ is heat added to the system; it is dU, the internal energy, which should be broken down into changes in thermal energy (temperature) and changes in latent heat. This is demonstrated later.

Later, the author states:

This ratio of thermal energy released versus PV work energy created is the crux of the physics behind the troposphere humidity trend profile versus surface temperature. But what is it that controls this energy ratio? It turns out that the same factor that controls the pressure profile in the troposphere also controls the tropospheric temperature profile and the PV/thermal energy ratio profile. That factor is gravity. If you take equation (3) and modify it to remove the latent heat term, and assume for an adiabatic, ideal gas system CpT = CvT + PV, you can easily derive what is known in the various meteorological texts as the “dry adiabatic lapse rate”:

dT/dz = –g/Cp = 9.8 K/km ….[WG5]

[Emphasis added]

Unfortunately, with his starting equations you can’t derive this result.

What I am talking about?

The Equations Required to Derive the Lapse Rate

Most textbooks on atmospheric physics include some derivation of the lapse rate. We consider a parcel of air of one mole. (Some terms are defined slightly differently to WG2010 – note 1).

There are 5 basic equations:

The hydrostatic equilibrium equation:

dp/dz = -ρg ….[1]

where p = pressure, z = height, ρ = density and g = acceleration due to gravity (=9.8 m/s²)

The ideal gas law:

pV = RT ….[2]

where V = volume, R = the gas constant, T = temperature in K, and this form of the equation is for 1 mole of gas

The equation for density:

ρ = M/V ….[3]

where M = mass of one mole

The First Law of Thermodynamics:

dU = dQ + dW ….[4]

where dU = change in internal energy, dQ = heat added to the system, dW = work added to the system

..rewritten for dry atmospheres as:

dQ = CvdT + pdV ….[4a]

where Cv = heat capacity at constant volume (for one mole), dV = change in volume

And the (less well-known) equation which links heat capacity at constant volume with heat capacity at constant pressure (derived from statistical thermodynamics and experimentally verifiable):

Cp = Cv + R ….[5]

where Cp = heat capacity (for one mole) at constant pressure

With an adiabatic process no heat is transferred between the parcel and its surroundings. This is a reasonable assumption with typical atmospheric movements. As a result, we set dQ = 0 in equation 4 & 4a.

Using these 5 equations we can solve to find the dry adiabatic lapse rate (DALR):

dT/dz = -g/cp ….[6]

where dT/dz = the change in temperature with height (the lapse rate), g = acceleration due to gravity, and cp = specific heat capacity (per unit mass) at constant pressure

dT/dz ≈ -9.8 K/km

Knowing that many readers are not comfortable with maths I show the derivation in The Maths Section at the end.

And also for those not so familiar with maths & calculus, the “d” in front of a term means “change in”. So, for example, “dT/dz” reads as: “the change in temperature as z changes”.
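For readers who prefer numbers to algebra, here is a minimal numerical check of eq. [6], assuming cp ≈ 1004 J/kg.K for dry air (a standard textbook value, my choice):

```python
# Numerical check of eq. [6]: the dry adiabatic lapse rate dT/dz = -g/cp.
g = 9.8        # acceleration due to gravity, m/s^2
cp = 1004.0    # specific heat of dry air at constant pressure, J/kg.K

lapse_km = -g / cp * 1000.0    # K/m converted to K/km
print(round(lapse_km, 2))      # -9.76

# Temperature of a dry parcel lifted adiabatically from a 288 K surface:
for z_km in (1, 5, 10):
    print(z_km, round(288.0 + lapse_km * z_km, 1))
```

Real tropospheric lapse rates are smaller in magnitude (~6.5 K/km on average) because condensation of water vapor releases latent heat; the dry rate is the limiting case.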

Fundamental “New Paradigm” Problems

There are two basic problems with his fundamental equations:

  • he confuses internal energy and heat added to get a sign error
  • he adds a term for gravitational potential energy when it is already implicitly included via the pressure change with height

A sign error might seem unimportant but given the claims later in the paper (with no explanation of how these claims were calculated) it is quite possible that the wrong equation was used to make these calculations.

These problems will now be explained.

Under the New Paradigm – Sign Error

Because William Gilbert mixes up internal energy and heat added, the result is a sign error. Consult a standard thermodynamics textbook and the first law of thermodynamics will be represented something like this:

dU = dQ + dW

Which in words means:

The change in internal energy equals the heat added plus the work done on the system.

And if we talk about dW as the work done by the system then the sign in front of dW will change. So, if we rewrite the above equation:

dU = dQ – pdV

By the time we get to [WG3] we have two problems.

Here is [WG3] for reference:

dU = CvdT + Ldq + gdz – PdV ….[WG3]

The first problem is that for an adiabatic process no heat is added to (or removed from) the system, so dQ = 0. The author instead sets dU = 0 and treats dQ as the change in internal energy (= CvdT + Ldq).

Here is the demonstration of the problem using his equation..

If we have no phase change then Ldq = 0. The gdz term is a mistake – for later consideration – but if we consider an example with no change in height in the atmosphere, we would have (using his equation):

CvdT – PdV = 0 ….[WG3a]

So if the parcel of air expands, doing work on its environment, what happens to temperature?

dV is positive because the volume is increasing. So to keep the equation valid, dT must be positive, which means the temperature must increase.

This means that as the parcel of air does work on its environment, using up energy, its temperature increases – adding energy. A violation of the first law of thermodynamics.

Hopefully, everyone can see that this is not correct. But it is the consequence of the incorrectly stated equation. In any case, I will use both the flawed and the fixed version to demonstrate the second problem.

Under the New Paradigm – Gravity x 2

This problem won’t appear so obvious, which is probably why William Gilbert makes the mistake himself.

In the list of 5 equations, I wrote:

dQ = CvdT + pdV ….[4a]

This is for dry atmospheres, to keep it simple (no Ldq term for water vapor condensing). If you check the Maths Section at the end, you can see that using [4a] we get the result that everyone agrees with for the lapse rate.

I didn’t write:

dQ = CvdT + Mgdz + pdV ….[should this instead be 4a?]

[Note that my equations consider 1 mole of the atmosphere rather than 1 kg which is why “M” appears in front of the gdz term].

So how come I ignored the effect of gravity in the atmosphere yet got the correct answer? Perhaps the derivation is wrong?

The effect of gravity already shows itself via the increase in pressure as we get closer to the surface of the earth.

Atmospheric physics has not been ignoring the effect of gravity and making elementary mistakes. Now for the proof.

If you consult the Maths Section, near the end we have reached the following equation and not yet inserted the equation for the first law of thermodynamics:

pdV – Mgdz = (Cp-Cv)dT ….[10]

Using [10] and “my version” of the first law I successfully derive dT/dz = -g/cp (the right result). Now we will try using William Gilbert’s equation [WG3], with Ldq = 0, to derive the dry adiabatic lapse rate.

0 = CvdT + gdz – PdV ….[WG3b]

and rewriting for one mole instead of 1 kg (and using my terms, see note 1):

pdV = CvdT + Mgdz ….[WG3c]

Inserting WG3c into [10]:

CvdT + Mgdz – Mgdz = (Cp-Cv)dT ….[11]

which becomes:

Cv = (Cp-Cv) ↠   Cp = 2Cv ….[11a]

A New Paradigm indeed!

Now let’s fix up the sign error in WG3 and see what result we get:

0 = CvdT + gdz + PdV ….[WG3d]

and again rewriting for one mole instead of 1 kg (and again using my terms, see note 1):

pdV = -CvdT – Mgdz ….[WG3e]

Inserting WG3e into [10]:

-CvdT – Mgdz – Mgdz = (Cp-Cv)dT ….[12]

which becomes:

-CvdT – 2Mgdz = CpdT – CvdT ….[12a]

and canceling the -CvdT term from each side:

-2Mgdz = CpdT ….[12b]

So:

dT/dz = -2Mg/Cp, and because specific heat capacity, cp = Cp/M

dT/dz = -2g/cp ….[12c]

The result of “correctly including gravity” is that the dry adiabatic lapse rate ≈ -19.6 K/km. 

Note the factor of 2. This is because we are now including gravity twice. The pressure in the atmosphere reduces as we go up – this is because of gravity. When a parcel of air expands due to its change in height, it does work on its surroundings and therefore reduces in temperature  – adiabatic expansion. Gravity is already taken into account with the hydrostatic equation.
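To make the factor of 2 concrete, here is a minimal numeric comparison of the correct result [6] against the "gravity counted twice" result [12c] (standard constants assumed, not taken from the paper):

```python
g = 9.81      # m/s^2
cp = 1004.0   # J/(kg.K), specific heat of dry air at constant pressure

correct = -g / cp        # equation [6]:   dT/dz = -g/cp
doubled = -2 * g / cp    # equation [12c]: dT/dz = -2g/cp

# the flawed derivation gives exactly twice the observed dry adiabatic lapse rate
print(round(correct * 1000, 1), round(doubled * 1000, 1))
```

Radiosonde measurements of dry ascents match the ~9.8 K/km magnitude, not ~19.6 K/km.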

The Physics of Hand-Waving

The author says:

As we shall see, PV work energy is very important to the understanding of this thermodynamic behavior of the atmosphere, and the thermodynamic role of water vapor condensation plays an important part in this overall energy balance. But this is unfortunately often overlooked or ignored in the more recent climate science literature. The atmosphere is a very dynamic system and cannot be adequately analyzed using static, steady state mental models that primarily focus only on thermal energy.

Emphasis added. This is an unproven assertion because it comes with no references.

In the next stage of the “physics” section, the author doesn’t bother with any equations, making it difficult to understand exactly what he is claiming.

Keeping this gravitational steady state equilibrium in mind, let’s look again at what happens when latent heat is released (condensation) during air parcel ascension.

Latent heat release immediately increases the parcel temperature. But that also results in rapid PV expansion which then results in a drop in parcel temperature. Buoyancy results and the parcel ascends and is driven by the descending pressure profile created by gravity.

The rate of ascension, and the parcel temperature, is a function of the quantity of latent heat released and the PV work needed to overcome the gravitational field to reach a dynamic equilibrium. The more latent heat that is released, the more rapid the expansion / ascension. And the more rapid the ascension, the more rapid is the adiabatic cooling of the parcel. Thus the PV/thermal energy ratio should be a function of the amount of latent heat available for phase conversion at any given altitude. The corresponding physics shows the system will try to force the convecting parcel to approach the dry adiabatic or “gravitational” lapse rate as internal latent heat is released.

For the water vapor remaining uncondensed in the parcel, saturation and subsequent condensation will occur at a more rapid rate if more latent heat is released. In fact if the cooling rate is sufficiently large, super saturation can occur, which can then cause very sudden condensation in greater quantity. Thus the thermal/PV energy ratio is critical in determining the rate of condensation occurring. The higher this ratio, the more complete is the condensation in the parcel, and the lower the specific humidity will be at higher elevations.

I tried (unsuccessfully) to write down some equations to reflect the above paragraphs. The correct approach for the author would be:

  • A. Here is what atmospheric physics states now (with references)
  • B. Here are the flaws/omissions due to theoretical consideration i), ii), etc
  • C. Here is the new derivation (with clear statement of physics principles upon which the new equations are based)

One point I think the author is claiming is that the speed of ascent is a critical factor. Yet the equation for the moist adiabatic lapse rate doesn’t allow for a function of time in the equation.

The (standard) equation has the form (note 2):

dT/dz = -g/cp {[1+Lq*/RT]/[1+βLq*/cp]} ….[13]

where q* is the saturation specific humidity and is a function of p & T (i.e. not a constant), and β = 0.067/°C. (See, for example: Atmosphere, Ocean & Climate Dynamics by Marshall & Plumb, 2008)

And this means that if the ascent is – for example – twice as fast, the amount of water vapor condensed at any given height will still be the same. It will happen in half the time, but why will this change any of the thermodynamics of the process?
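To put rough numbers on equation [13], the sketch below evaluates the moist adiabatic lapse rate for a warm, humid parcel. The values of T, q* and L are illustrative assumptions, not taken from the paper:

```python
g, cp, Rd = 9.81, 1004.0, 287.0   # gravity, specific heat, gas constant for dry air
L = 2.5e6                          # latent heat of vaporization of water, J/kg
beta = 0.067                       # 1/K, per Marshall & Plumb
T = 280.0                          # assumed parcel temperature, K
qstar = 0.010                      # assumed saturation specific humidity, kg/kg

dry = -g / cp                                                           # equation [6]
moist = dry * (1 + L * qstar / (Rd * T)) / (1 + beta * L * qstar / cp)  # equation [13]
print(round(dry * 1000, 1), round(moist * 1000, 1))   # -> -9.8 -4.8 (K/km)
```

Nothing in the calculation depends on the speed of ascent – only on temperature, pressure and q* – which is the point being made above.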

It might, but it’s not clearly stated, so who can determine the “new physics”?

I can see that something else is claimed concerning the ratio CvdT/pV, but I don’t know what it is, or what is behind the claim.

Writing the equations down is important so that other people can evaluate the claim.

And the final “result” of the hand waving is what appears to be the crux of the paper – more humidity at the surface will cause so much “faster” condensation of the moisture that the parcel of air will be drier higher up in the atmosphere. (Where “faster” could mean dT/dt, or could mean dT/dz).

Assuming I understood the claim of the paper correctly it has not been proven from any theoretical considerations. (And I’m not sure I have understood the claim correctly).

Empirical Observations

The heading is actually “Empirical Observations to Verify the Physics”. A more accurate title is “Empirical Observations”.

The author provides 3 radiosonde profiles from Miami. Here is one example:

From Gilbert (2010)


Figure 1 – “Thermal adiabat” in the legend = “moist adiabat”

With reference to the 3 profiles, a higher surface humidity apparently leads to complete condensation at a lower altitude.

This is, of course, interesting. This would mean a higher humidity at the surface leads to a drier upper troposphere.

But it’s just 3 profiles. From one location on two different days. Does this prove something or should a few more profiles be used?

A few statements that need backing up:

The lower troposphere lapse rate decreases (slower rate of cooling) with increasing system surface humidity levels, as expected. But the differences in lapse rate are far less than expected based on the relative release of latent heat occurring in the three systems.

What equation determines “than expected”? What result was calculated vs measured? What implications result?

The amount of PV work that occurs during ascension increases markedly as the system surface humidity levels increase, especially at lower altitudes..

How was this calculated? What specifically is the claim? Equation 4a, under adiabatic conditions, with the addition of latent heat, reads like this:

CvdT + Ldq + pdV = 0 ….[4a]

Was this equation solved from measured variables of pressure, temperature & specific humidity?

Latent heat release is effectively complete at 7.5 km for the highest surface humidity system (20.06 g/kg) but continues up to 11 km for the lower surface humidity systems (18.17 and 17.07 g/kg). The higher humidity system has seen complete condensation at a lower altitude, and a significantly higher temperature (−17 ºC) than the lower humidity systems (∼ −40 ºC) despite the much greater quantity of latent heat released.

How was this determined?

If it’s true, perhaps the highest humidity surface condition ascended into a colder air front and therefore lost all its water vapor due to the lower temperature?

Why is this (obvious) possibility not commented on or examined?

Textbook Stuff and Why Relative Humidity doesn’t Increase with Height

The radiosonde profiles in the paper are not necessarily following one “parcel” of air.

Consider a parcel of air near saturation at the surface. It rises, cools and soon reaches saturation. So condensation takes place, the release of latent heat causes the air to be more buoyant and so it keeps rising. As it rises water vapor is continually condensing and the air (of this parcel) will be at 100% relative humidity.

Yet relative humidity doesn’t increase with height, it reduces:

From Marshall & Plumb (2008)


Figure 2

Standard textbook stuff on typical temperature profiles vs dry and moist adiabatic profiles:

From Marshall & Plumb (2008)


Figure 3

And explaining why the atmosphere under convection doesn’t always follow a moist adiabat:

From Marshall & Plumb (2008)


Figure 4 

The atmosphere has descending dry air as well as rising moist air. Mixing of air takes place, which is why relative humidity reduces with height.

Conclusion

The “theory section” of the paper is not a theory section. It has a few equations which are incorrect, followed by some hand-waving arguments that might be interesting if they were turned into equations that could be examined.

It is elementary to prove the errors in the few equations stated in the paper. If we use the author’s equations we derive a final result which contradicts known fundamental thermodynamics.

The empirical results consist of 3 radiosonde profiles with many claims that can’t be tested because the method by which these claims were calculated is not explained.

If it turned out that – all other conditions remaining the same – higher specific humidity at the surface translated into a drier upper troposphere, this would be really interesting stuff.

But 3 radiosonde profiles in support of this claim is not sufficient evidence.

The Maths Section – Real Derivation of Dry Adiabatic Lapse Rate

There are a few ways to get to the final result – this is just one approach. Refer to the original 5 equations under the heading: The Equations for the Lapse Rate.

From [2], pV = RT, differentiate both sides with respect to T:

↠ d(pV)/dT = d(RT)/dT

The left hand side can be expanded as: V.dp/dT + p.dV/dT, and the right hand side = R (as dT/dT=1).

↠ Vdp + pdV = RdT  ….[7]

Insert [5], Cp = Cv + R, into [7]:

Vdp + pdV = (Cp-Cv)dT ….[8]

From [1] & [3]:

Vdp = -Mgdz ….[9]

Insert [9] into [8]:

pdV – Mgdz = (Cp-Cv)dT ….[10]

From 4a, under adiabatic conditions, dQ = 0, so CvdT + pdV = 0, and substituting into [10]:

-CvdT – Mgdz = CpdT – CvdT

and adding CvdT to both sides:

-Mgdz = CpdT, or dT/dz = -Mg/Cp ….[11]

and specific heat capacity, cp = Cp/M, so:

dT/dz = -g/cp ….[11a]

The correct result, stated as equation [6] earlier.

Notes

Note 1: Definitions in equations. WG2010 has:

  • P = pressure, while this article has p = pressure (lower case instead of upper case)
  • Cv = heat capacity for 1 kg, while this article has Cv = heat capacity for one mole, and cv = heat capacity for 1 kg.

Note 2: The moist adiabatic lapse rate is calculated using the same approach but with an extra term, Ldq, in equation 4a, which accounts for the latent heat released as water vapor condenses.

Read Full Post »

In Part One I made the observation:

If the atmosphere has an invariant optical thickness then surely all molecules should be included?

Meaning all ‘radiatively-active’ gases. Then I cited some results from Collins (2006) on the ‘radiative forcing’ for other gases, and added:

..So if total optical thickness from CO2 and water vapor has stayed constant over 60 years then surely total optical thickness must have increased?

In response, Miskolczi supporter Miklos Zagoni said:

Optical thickness was calculated over 60 years for CO2 and water vapor and other 9 IR-active molecular species (O3, N2O, CH4, NO, SO2, NO2, CCl4, F11 and F12), and turned out to be strictly fluctuating around a theoretically predicted equilibrium value

I asked for more details (concentrations of each of these gases over time which were used for the calculations) which weren’t forthcoming.

Later Miskolczi supporter Ken Gregory said:

Only the H2O and CO2 gases were changed. Other minor GHG were held constant.

So, working with this data I thought it would be interesting to see what changes had taken place in optical thickness due to these minor “greenhouse” gases.

I should point out that there are substantial problems identified with Miskolczi’s theory and experimental work and this is a very minor issue – it is more of an interesting aside.

A little while ago I managed to recreate the CO2 transmittance in the atmosphere – as shown in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Nine. This was done using the HITRAN database in a MATLAB model I created.

The question about changes in optical thickness over time from other gases was a good motivator to update my MATLAB model to bring in other molecules. It was something I wanted to do anyway.

Note that radiative forcing or (surface emission – OLR) is a much more useful value than total optical thickness (as explained in Part One).

Extracting the HITRAN data proved to be the most tedious and challenging part of the project. It turns out that the “minor gases” like CFC-11 and CFC-12 are stored in a totally different format from gases like CO2, N2O, CH4 etc. These minor gases have a dataset for each temperature and pressure, with different sizes of dataset at various temperature/pressures. Nothing mathematically or conceptually challenging, just very tedious.

Another challenge was working out what concentrations to use for 1948 – the start date that Miskolczi uses. From Collins (2006) it seemed that the main “greenhouse” gases to evaluate were N2O (nitrous oxide), CH4 (methane) plus CFC11 (CCl3F) and CFC12 (CCl2F2). There are other halocarbons to include but time is limited.

Here are the values used:

Gas               1948              2008

CO2           311 ppmv      386 ppmv
N2O           289 ppbv      319 ppbv
CH4         1250 ppbv    1775 ppbv
CFC11            0              267 pptv
CFC12            0              535 pptv

The later CO2 value is from 2008 from Miskolczi’s spreadsheet while the other values are from 2005.

ppmv = parts per million by volume, ppbv = parts per billion (10⁹) by volume, pptv = parts per trillion (10¹²) by volume.

Earlier values of N2O and CH4 are taken from various papers, I can provide citations if anyone is interested – but pre-1980 values are thin on the ground.

In any case, my calculations of total optical thickness are rudimentary and provided as a starting point.

The Model

I used a 5-layer model up to 200mbar, with a surface temperature of 289K. The diffusivity approximation was used to estimate total hemispherical transmittance (see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations). The wavenumber step, Δν = 1 cm-1. The calculations were done from 100 cm-1 to 2500 cm-1 (4 μm – 100 μm) and the “Planck weighted” transmittance (at 289K) was calculated. This transmittance was converted back to an optical thickness, which is the same approach that Miskolczi uses (see comment).
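As a sketch of the "Planck weighting" step: the code below weights a spectral transmittance by the Planck function at 289K and converts the result back to an optical thickness. The spectral transmittance here is a toy band shape invented for illustration, not output from the actual model:

```python
import numpy as np

def planck_wavenumber(nu, T):
    """Relative Planck spectral radiance per wavenumber (nu in cm^-1, T in K).
    Only the spectral shape matters for the weighting, so c is taken in cm/s."""
    h, c, k = 6.626e-34, 2.998e10, 1.381e-23
    return 2 * h * c**2 * nu**3 / np.expm1(h * c * nu / (k * T))

nu = np.arange(100.0, 2500.0, 1.0)   # 1 cm-1 steps across 4-100 um, as in the article
# toy spectral transmittance with a CO2-like dip centered near 667 cm-1
t_nu = np.exp(-1.6 / (1 + ((nu - 667.0) / 60.0) ** 2))

B = planck_wavenumber(nu, 289.0)
t_bar = np.sum(t_nu * B) / np.sum(B)   # Planck-weighted mean transmittance at 289K
tau = -np.log(t_bar)                   # converted back to an optical thickness
```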

Water vapor was assumed to be 10g/kg at the surface with a straight line reduction (vs pressure) to zero at 200mbar. Previously I carried out calculations where water vapor was varied from 5g/kg to 15g/kg and the effect on the transmittance change due to other gases was quite small.

Water vapor absorption lines are included from the HITRAN database but the water vapor continuum is not. This is next in my wishlist to include.

Changes in Water Vapor

The model deliberately did not try to follow Miskolczi’s water vapor values. The point of this article is to demonstrate that if (and only if) CO2 optical thickness is canceled out by water vapor changes, then significant increases in optical thickness from other gases impact negatively on his hypothesis.

If his calculations show:

optical thickness (CO2 + water vapor) = constant

then this article demonstrates that:

optical thickness (CO2 + other gases + water vapor) = increasing

Many people might not realize that there are a number of water vapor datasets. The one Miskolczi uses is not the only one. Others show different trends.

Results

Note that water vapor is included, but at unchanged concentration.

  • The change in optical thickness, Δτ, for CO2 only changing = 0.0167
  • The change in optical thickness, Δτ, for CO2+N2O+CH4+CFC11+CFC12 = 0.0238

The % increase (over CO2) due to the nominated “minor gases” is 43%.
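The 43% figure follows directly from the two Δτ values above:

```python
d_tau_co2 = 0.0167                  # change in optical thickness, CO2 only
d_tau_all = 0.0238                  # CO2 + N2O + CH4 + CFC11 + CFC12
increase = (d_tau_all - d_tau_co2) / d_tau_co2
print(f"{increase:.0%}")            # -> 43%
```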

The total optical thickness is not so important in this analysis. If the number of layers is changed, the total optical thickness changes, but percent changes due to “greenhouse” gas increases are roughly similar.

Conclusion

If (and only if) water vapor has canceled out CO2 increases, then the increase in optical thickness due to these other gases (methane, nitrous oxide plus halocarbons) has destroyed the idea that optical thickness can be considered to be constant.

Of course, my calculations are rudimentary. My model is much less exact than the HARTCODE model used by Miskolczi and it would be interesting to see his results reproduced in full with the correct concentrations of all of the GHGs from 1948 – 2008.

As I commented earlier – this is one of the least important of the criticisms of Ferenc Miskolczi’s papers.

Now that I have updated the model I can produce results like these:

 

Other articles in the series

The Mystery of Tau – Miskolczi – introduction to some of the issues around the calculation of optical thickness of the atmosphere, by Miskolczi, from his 2010 paper in E&E

Part Two – Kirchhoff – why Kirchhoff’s law is wrongly invoked, as the author himself later acknowledged, from his 2007 paper

Part Three – Kinetic Energy – why kinetic energy cannot be equated with flux (radiation in W/m²), and how equation 7 is invented out of thin air (with interesting author comment)

Part Four – a minor digression into another error that seems to have crept into the Aa=Ed relationship

Part Five – Equation Soufflé – explaining why the “theory” in the 2007 paper is a complete dog’s breakfast

 

References

The HITRAN 2008 molecular spectroscopic database, by L.S. Rothman et al, Journal of Quantitative Spectroscopy & Radiative Transfer (2009)

Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), Collins et al, JGR (2006)

Read Full Post »

One question that has intrigued me for a while – how much of the transmittance (and change in transmittance) from CO2 in the atmosphere is caused by weak lines, and how much is caused by the “far wings” of individual lines.

Take a look back at Part Nine. Here is the calculated change in transmittance from the surface to the tropopause for a doubling of CO2 from pre-industrial levels:

Figure 1

We can see that there is significant change outside of the central range of wavenumbers. (For reference, 667 cm-1 = 15 μm).

Here is another of the graphics – showing how an individual line absorbs across a range of wavenumbers:

Figure 2

There are many lines in the HITRAN database for CO2 (over 300,000 lines), many of them weak.

So back to my original question – is it the “wings” = line broadening, of the individual lines, or the weaker lines that have the biggest effect? And how do we quantify it?

The Curve of Growth

The Curve of Growth is about the change in transmittance with an increase in path length (specifically the increase in number of absorbing molecules in the path).

The reason the problem is not so simple is because each line has a line shape that absorbs across a range of wavelengths.

We know that transmittance t = exp(-τ), where τ is the optical thickness – this is the Beer-Lambert law. The spreading out of the line means that the reduction in transmittance as the path increases isn’t as strong as predicted by a simple application of the Beer-Lambert law.

Here is a calculation of this for a typical tropospheric condition (note 1) – for one isolated absorption line with its line center at 1000 cm-1:

Figure 3

Each curve represents a doubling of the number of absorbing molecules in the path. So, if this was at constant density, each curve represents a doubling of the path length.

What does this graph show?

At the line center, an increase in the path length soon causes saturation. But out in the “wings” of the absorption line the reduction in transmittance changes much more slowly.

With some maths we can show that for very small paths (note 2), the absorptance is linearly proportional to the number of absorbers in the path.

And with some more maths we can show that for very large paths (note 2), the absorptance increases as the square root of the number of absorbers in the path.

Here is another way to view the “curve of growth”. I haven’t seen it shown this way before:

Figure 4

This is the same scenario, but this time the x-axis (bottom axis) has the effective path length, while each curve represents a different distance from the line center. The legend is showing the distance from the line center compared with the “line width”. So “10” = 1 cm-1 from the line center, while “0” is the line center.

Notice the difference at a mass path × absorption coefficient = 10⁰ (= 1), where the line center has a transmittance of 0.05 while at 1 cm-1 distance from the line center the line has a transmittance of 0.97.

What happens when we look at the effect of the whole line?

Here is a curve of the equivalent line width, W as the path increases. This equivalent line width is just the total effect across enough bandwidth to take into account the effects of the far wings of the line (note 3):

Figure 5

The 2nd graph is the important one. This shows the power relationship between W and Su, as Su increases – where Su = absorption coefficient x mass of molecules in the path. S is the absorption coefficient, and u is the mass.

For a given line, S is a constant, and so an increase in Su means an increase in the number of molecules in the path.

So if we believe that W = (Su)^x, how do we determine x?

We can do it by taking the log of both sides: log(W) = x·log(Su), so by calculating the slope of this log relationship we can see how this value, x, changes.

The 2nd graph shows:

  • at very small paths, the line strength is proportional to Su – because x = 1
  • at very large paths, the line strength is proportional to √(Su) – because x = 0.5

This is simply backing up by numerical calculation the earlier claim: “With some maths we can show..

So if you want to understand “the curve of growth” you need to understand that at very small optical thickness the “equivalent line” grows in linear proportion to the mass of molecules in the path, and with very large optical thickness the “equivalent line” grows in proportion to the square root.

And probably the easiest way to see it conceptually is to take another look at Figure 3.
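This limiting behavior is easy to reproduce numerically. The sketch below uses a single isolated Lorentz line (the half-width and path values are illustrative assumptions) and checks the log-log slope of the equivalent width W against Su at the weak and strong ends:

```python
import numpy as np

gamma = 0.07                                    # line half-width, cm-1 (typical value)
nu = np.linspace(-200.0, 200.0, 2_000_001)      # offset from line center, cm-1
dnu = nu[1] - nu[0]
phi = (gamma / np.pi) / (nu ** 2 + gamma ** 2)  # Lorentz line shape, integrates to ~1

def equivalent_width(Su):
    """W = integral of absorptance (1 - transmittance) across the line."""
    return np.sum(1.0 - np.exp(-Su * phi)) * dnu

# slope of log(W) vs log(Su) for a doubling of the path, weak and strong cases
weak = np.log(equivalent_width(2e-4) / equivalent_width(1e-4)) / np.log(2)
strong = np.log(equivalent_width(2e2) / equivalent_width(1e2)) / np.log(2)
print(round(weak, 2), round(strong, 2))
```

The computed slopes come out at ~1.0 (linear regime) and ~0.5 (square-root regime), matching the claim above.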

Line Wings in Practice

So back to my original question. Is it the weak lines or the wings of the strong lines that has the effect?

The first step is to calculate the effect of the line wings. I took the MATLAB program developed for Part Nine and made a few changes. The original program had simply applied the equation for line shape out to the edge of the region under consideration. In this version I added a new factor, ca, which “chopped” the line shape. The factor ca limits the line shape for each line to the line center, v0 ± (ca * line width).

The line width is a parameter in the HITRAN database and is a measure of the shape of the line, approximately the value where the strength has fallen to half the peak value.

The original program was run for two values of atmospheric CO2: 280ppm (pre-industrial); and 560ppm (doubling of CO2) – with 15 layers and a calculation every 0.01 cm-1 across the band of 500 cm-1 – 850 cm-1. These are called the “Standard” results. And for these, and all following simulations, we are only considering the main isotopologue of CO2, which accounts for over 98% of atmospheric CO2.

Then the revised program was run for a number of values of ca: 5000, 1000, 100 & 10.

Given that the typical line width is 0.05 – 0.1 cm-1 this means that each line is considered between two extremes: across 250-500 cm-1 down to only 0.5 – 1 cm-1.
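The "cropping" operation can be sketched like this (a toy stand-in for the MATLAB change, with an illustrative line at 667 cm-1 and half-width 0.07 cm-1 – not the actual program):

```python
import numpy as np

def lorentz(nu, nu0, gamma):
    """Lorentz line shape centered at nu0 with half-width gamma (cm-1)."""
    return (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)

def cropped(nu, nu0, gamma, ca):
    """Line shape forced to zero beyond nu0 +/- ca * gamma."""
    phi = lorentz(nu, nu0, gamma)
    phi[np.abs(nu - nu0) > ca * gamma] = 0.0
    return phi

nu = np.arange(660.0, 674.0, 0.001)     # wavenumber grid around the line center
full = lorentz(nu, 667.0, 0.07)
crop10 = cropped(nu, 667.0, 0.07, 10)   # ca = 10: keep only +/- 10 line widths

# fraction of the line's integrated strength surviving the crop (on this grid)
retained = crop10.sum() / full.sum()
```

Even at ca = 10, roughly 6% of this line's integrated strength is discarded on this grid – and the article shows the cumulative effect across 300,000 lines is significant.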

As expected, with ca = 5000, the difference between that and the Standard is almost nil. And as ca is reduced the differences increase:

Figure 6

By the time ca = 100, the differences are noticeable, and at ca = 10, the differences in some parts of the band are huge. The above result (figure 6) is dominated by the ca=10 result so here is ca=100 separately:

Figure 7

So this demonstrates that even when the line shape is “limited” to 100x the line width, there is still a noticeable effect in the transmittance calculation for the troposphere.

For completeness, here is the same comparison at figure 6, but for 560ppm:

Figure 8

Across this band, the standard case at 280ppm: t = 0.4980, and at 560ppm, t = 0.437.

Here is how the mean transmittance changes as we crop the line shape. Cropping the line shape means that we make the atmosphere more transparent.

Figure 9

We can see from figure 9 that the change in transmittance from 280 – 560 ppm is of a similar magnitude at each artificial “cropping” of the line shape.

Weak Lines

So let’s take a look at the effect of weak lines. For this case I used the same MATLAB program developed for Part Nine but with a user-defined selection of lines. For example, the top 10% of lines by strength.

Here is the transmittance change @ 280ppm through the troposphere, for all lines (standard) less the transmittance of the strongest 10% of lines:

Figure 10

And here is the graph of all lines – the strongest 1%:

Figure 11

And here is the mean transmittance for both 280ppm and 560ppm as only the strongest lines are considered:

Figure 12

It’s clear that the weakest 90% of lines have virtually no effect on the transmittance of the atmosphere. There are a lot of very weak lines in the HITRAN database.

The calculated transmittance for 100% of the lines at 280ppm = 0.4974 and for the top 10% = 0.4994 – meaning that the top 10% of lines account for 99.6% of the transmittance.

At 560ppm the top 10% of lines account for 99.3%.

Conclusion

Some of this analysis is of curiosity value only.

However, it is very useful to understand the “curve of growth” – and to realize how absorptance increases as the mass in the path increases.

And it’s at least interesting to see how the “far wings” of the individual lines have such an effect on the transmittance through the atmosphere. Even “cropping” the effect at 100x the line width has a significant effect on the atmospheric transmittance.

And for the question posed at the beginning, both the weak lines and the far wings of individual lines have an effect on the total atmospheric transmittance.

Many people have appreciated the massive absorption at the peak of the CO2 band (around 15 μm). But as we have seen in earlier parts of this series (and as shown in Figure 1), it is towards the “edges” of the band where the largest changes take place as CO2 concentration increases.

Remember as well that total transmittance is not really a complete picture of radiative transfer in the atmosphere. The atmosphere also emits radiation, and so the temperature profile of the atmosphere is just as important for seeing the whole picture.

Other articles in the series:

Part One – a bit of a re-introduction to the subject.

Part Two – introducing a simple model, with molecules pH2O and pCO2 to demonstrate some basic effects in the atmosphere. This part – absorption only.

Part Three – the simple model extended to emission and absorption, showing what a difference an emitting atmosphere makes. Also very easy to see that the “IPCC logarithmic graph” is not at odds with the Beer-Lambert law.

Part Four – the effect of changing lapse rates (atmospheric temperature profile) and of overlapping the pH2O and pCO2 bands. Why surface radiation is not a mirror image of top of atmosphere radiation.

Part Five – a bit of a wrap up so far as well as an explanation of how the stratospheric temperature profile can affect “saturation”

Part Six – The Equations – the equations of radiative transfer including the plane parallel assumption and it’s nothing to do with blackbodies

Part Seven – changing the shape of the pCO2 band to see how it affects “saturation” – the wings of the band pick up the slack, in a manner of speaking

Part Eight – interesting actual absorption values of CO2 in the atmosphere from Grant Petty’s book

Part Nine – calculations of CO2 transmittance vs wavelength in the atmosphere using the 300,000 absorption lines from the HITRAN database

Part Ten – spectral measurements of radiation from the surface looking up, and from 20km up looking down, in a variety of locations, along with explanations of the characteristics

Part Eleven – Heating Rates – the heating and cooling effect of different “greenhouse” gases at different heights in the atmosphere

Part Twelve – The Curve of Growth – how absorptance increases as path length (or mass of molecules in the path) increases, and how much effect is from the “far wings” of the individual CO2 lines compared with the weaker CO2 lines

And Also –

Theory and Experiment – Atmospheric Radiation – real values of total flux and spectra compared with the theory.

References

The HITRAN 2008 molecular spectroscopic database, by L.S. Rothman et al, Journal of Quantitative Spectroscopy & Radiative Transfer (2009)

Notes

Note 1 – As you can see in Part Nine, the line shape differs as the pressure reduces.

Note 2 – Technically speaking, for “very small paths” we are really considering the case where optical thickness, τ << 1 (very much less than 1). And for “very large paths” we are considering the case where optical thickness, τ >> 1 (very much greater than 1).

Note 3 – For a line strength = W, if we want to calculate absorptance, a = 1 – t, where t= transmittance, across any given band, Δv, the calculation is very simple:

a = 1-t = W / Δv


Many people have requested an analysis of Miskolczi’s theories.

I start with his more recent paper:  The Stable Stationary Value of the Earth’s Global Average Atmospheric Planck-Weighted Greenhouse-Gas Optical Thickness, Energy & Environment (2010).

It’s an interesting paper and clearly Miskolczi has put a lot of time and effort into it. I recommend people read the paper for themselves, and the link above provides free access.

The essence of the claim is that the optical thickness of the earth’s atmosphere is a constant – at least over the last 60 years – where water vapor cancels out any change from CO2. So if more CO2 increases the optical thickness, then the optical thickness from water vapor will reduce.

In his paper he makes this statement:

Unfortunately no computational results of EU, ST, A, TA and τA can be found in the literature, and therefore our main purpose is to give realistic estimates of their global mean values, and investigate their dependence on the atmospheric CO2 concentration.

Among the terms noted in this quote, τA is the optical thickness of the atmosphere.

As we delve into the paper, hopefully the reasons why this value isn’t calculated in any papers will become clear. In fact, the first question people should be asking themselves is this:

If the result is of significant importance why has no one else calculated this parameter before?

There are thousands of papers about radiative transfer, CO2 and water vapor.

Why has no one (apparently) published their calculations of the globally averaged optical thickness of the atmosphere and how it has changed over time?

There is a reason..

What is Optical Thickness?

You can find a more complete explanation of optical thickness in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations, which I definitely recommend reading even though it has many equations. (Actually, because it has many equations..)

Because optical thickness isn’t an obvious parameter, let’s start with a simpler property called transmittance.

Transmittance is the proportion of radiation which is transmitted through a body (in this case, the atmosphere). We will use the letter “t” to refer to it.

t has a value between 0 and 1. Slightly more formally, we can write 0 ≤ t ≤ 1.

For t = 1, the body is totally transparent to incident radiation.

For t = 0, the body is totally opaque and absorbs all incident radiation.

For non-scattering atmospheres (note 1), absorptance, a = 1 − t, which means that whatever is not absorbed gets transmitted. This is simple enough, and everyone would expect this from the First Law of Thermodynamics.

Now for optical thickness. We will use τ for this parameter. τ is the Greek letter “tau”.

The Beer-Lambert law says that the transmittance of a beam of radiation:

t = exp(-τ)

The “exp” is a mathematical convention for “e to the power of”. So this can alternatively be written as:

t = e^(-τ)

Which means that when τ = 1, t = 0.36; when τ = 2, t = 0.14; and when τ = 10, t = 0.000045.
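These values follow directly from the Beer-Lambert law. A minimal Python check, using the τ values from the text:

```python
import math

def transmittance(tau):
    # Beer-Lambert law for a non-scattering path: t = exp(-tau)
    return math.exp(-tau)

for tau in (1, 2, 10):
    print(f"tau = {tau:>2}: t = {transmittance(tau):.6f}")
```

The printed values agree with the figures quoted above.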

Optical thickness is tedious to calculate because the properties of each gas vary strongly with wavelength.

In brief, for each molecule at each wavelength, the total optical thickness is equal to the total number of molecules in the path x the absorption coefficient (which is a function of wavelength).

So optical thickness is a very handy parameter. Calculating it does take some work and a pre-requisite is a database of all the spectroscopic values for each molecule – as well as knowing the total amount of each gas in the path we want to calculate.
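As a sketch of that bookkeeping – the layer amounts and absorption coefficients below are invented illustrative numbers, not values from a spectroscopic database:

```python
import math

def optical_thickness(layers):
    # Monochromatic optical thickness of a vertical path: the sum over
    # layers of (absorber amount) x (absorption coefficient at this
    # wavelength).
    return sum(amount * k for amount, k in layers)

# (molecules per unit area, absorption coefficient) per layer - made-up values
path = [(2.0e22, 1.0e-23), (1.0e22, 1.2e-23), (0.5e22, 1.5e-23)]
tau = optical_thickness(path)   # 0.395 for these numbers
t = math.exp(-tau)              # transmittance of the whole path
```

In a real calculation each wavelength gets its own set of absorption coefficients from a database like HITRAN, which is what makes the job tedious.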

Absorption and Emission

The atmosphere absorbs and also emits.

Absorption, as we have just seen, is a function of the total amount of each gas (in a path) as well as the properties of each gas.

And, in case it is not obvious, the total radiation absorbed is also a function of the intensity of radiation travelling through the body that we want to calculate. This is because absorption = incident radiation x absorptance.

What about emission?

Emission of radiation is a function of the temperature of the atmosphere, as well as its emissivity, ε. This parameter, emissivity, is equal to the absorptivity (or absorptance) of a body at any given wavelength – or across a range of wavelengths. This is known as Kirchhoff’s law.

Emission = ε·σT⁴ in W/m², where T is the temperature of the atmosphere at that point.
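A quick numerical illustration (the emissivity and temperature chosen here are arbitrary, not measured values):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2.K^4)

def emission(emissivity, T):
    # Flux emitted by a layer of the given emissivity at temperature T (K)
    return emissivity * SIGMA * T ** 4

# e.g. a layer at 250 K with emissivity 0.6
print(round(emission(0.6, 250.0), 1))  # -> 132.9 W/m^2
```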

If we want to calculate the radiative transfer through the atmosphere we need both terms.

Here is a simple example of why. Readers who followed the series Understanding Atmospheric Radiation and the “Greenhouse” Effect will remember that I introduced a simple atmosphere with two molecules, pCO2 and pH2O. These had a passing resemblance to the real molecules, but had properties that were much simpler, for the purposes of demonstrating some important aspects of how radiation interacts with the atmosphere.

This following example has three scenarios. Each scenario has the same total amount of water vapor through the atmosphere, but a different profile vs height. These are shown in the graph:

Figure 1

The bottom graph shows the top of atmosphere (TOA) flux from each of the three scenarios.

If we calculated the total transmittance through the atmosphere it would be the same in each scenario (update: correction – see Ken Gregory’s point below). Because the optical thickness is the same. The optical thickness is the same because the total number of pH2O molecules in the path is the same.

Yet the TOA flux is very different.

This is because where the atmosphere emits from is very important in calculations of flux. For example, in the case of the 3rd scenario, the TOA flux is lower because more of the water vapor is at colder temperatures, and less is at hotter temperatures.

From Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations:

dIλ/dτ = Iλ – Bλ(T)     [12]

which is also known as Schwarzschild’s Equation – and is the fundamental description of changes in radiation as it passes through an absorbing (and non-scattering) atmosphere. Bλ(T) = the Planck function, which is a function of temperature. And the subscript λ in each term identifies the wavelength dependence of this equation.

For the mathematically minded, it will be clear reviewing the above equation that total optical thickness tells you less than you need. As the location of optical thickness varies, if temperature varies (which it does in the atmosphere) then you can get different results for the same optical thickness.

That is, the simulations above demonstrate what is clear, and easily provable, from the form of the fundamental equation.
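The point can be illustrated with a toy layer-by-layer solution of Schwarzschild’s equation. Each isothermal layer attenuates the incoming intensity and adds its own emission via I_out = I_in·exp(−Δτ) + B·(1 − exp(−Δτ)); all the numbers below are invented for illustration:

```python
import math

def toa_intensity(surface_B, layers):
    # March intensity upward through isothermal layers, applying the
    # single-layer solution of Schwarzschild's equation at each step.
    I = surface_B
    for d_tau, B in layers:   # (layer optical thickness, Planck emission)
        t = math.exp(-d_tau)
        I = I * t + B * (1.0 - t)
    return I

# Two atmospheres with identical total optical thickness (tau = 1.0), but
# the absorber concentrated in the warm lower layer vs the cold upper
# layer. Values are illustrative, in arbitrary intensity units.
absorber_low  = [(0.8, 240.0), (0.2, 150.0)]
absorber_high = [(0.2, 240.0), (0.8, 150.0)]
print(toa_intensity(300.0, absorber_low))   # higher TOA intensity
print(toa_intensity(300.0, absorber_high))  # lower TOA intensity
```

Same total optical thickness, different placement of the absorber, different TOA result – which is exactly why a single global τ tells you so little.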

This is why papers on total optical thickness of the atmosphere over time are hard to come by. It is of curiosity value only.

What About Methane, Nitrous Oxide and Halocarbons?

The total optical thickness of the atmosphere is not just determined by water vapor and CO2. If the atmosphere has an invariant optical thickness then surely all molecules should be included?

According to WM Collins and his co-authors (2006):

The increased concentrations of CO2, CH4, and N2O between 1750 and 1998 have produced forcings of +1.48, +0.48, and +0.15 W m⁻², respectively [IPCC, 2001]. The introduction of halocarbons in the mid-20th century has contributed an additional +0.34 W m⁻² for a total forcing by WMGHGs of +2.45 W m⁻² with a 15% margin of uncertainty.

I’m sure someone with enough determination can find some results for the changes in the radiative forcing from CH4 and N2O between 1950 and 2010. But this at least demonstrates that other molecules have significant absorption characteristics. After all, in just half a century halocarbons have added a quarter of the radiative forcing that CO2 produced over the much longer period from 1750 to the present day.

So if total optical thickness from CO2 and water vapor has stayed constant over 60 years then surely total optical thickness must have increased?

This is not mentioned in the paper and seems to be a major blow to the not-particularly-useful result calculated.

Update, 31st May: Ken Gregory, a Miskolczi supporter armed with the spreadsheet of calculations, says that minor gases were kept constant. So Part Six demonstrates my basic calculations of optical thickness changes due to CO2 and some minor gases.

Cloudy Thinking

Miskolczi says:

In all calculations of A, TA, tA, and of the radiative flux components, the presence or absence of clouds was ignored; the calculations refer only to the greenhouse gas components of the atmosphere registered in the radiosonde data; we call this the quasi-all-sky protocol. It is assumed, however, that the atmospheric vertical thermal and water vapor structures are implicitly affected by the actual cloud cover, and that the atmosphere is at a stable steady state of cloud cover.

Assumed but not demonstrated.

Clouds have a huge impact on the radiative (and convective) heat transfers in the atmosphere. From Clouds and Water Vapor – Part One:

Clouds reflect solar radiation by 48 W/m² but reduce the outgoing longwave radiation (OLR) by 30 W/m², therefore the average net effect of clouds – over this period at least – is to cool the climate by 18 W/m².

Are they constant?

Here is a snapshot from Vardavas & Taylor (2007):

From Vardavas & Taylor (2007)

Figure 2

Another important point – given the non-linearity of the equations of radiative transfer, even if the cloud cover stayed at a constant global percentage but the geographical distribution changed, the optical thickness of the atmosphere cannot be assumed constant.

Here are some values of cloud emissivity from Hartmann (1994):

From Hartmann (1994)

Figure 3

Just for some perspective, as emissivity reaches 0.8, τ =  1.6; with emissivity = 0.9, τ = 2.3. And Miskolczi calculates the global average optical thickness of the atmosphere – without clouds – at 1.87.
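Those values come from inverting the grey, non-scattering relation ε = 1 − exp(−τ) (an assumption of this sketch, consistent with the numbers in the text):

```python
import math

def tau_from_emissivity(eps):
    # Invert eps = 1 - exp(-tau) for a grey, non-scattering layer
    return -math.log(1.0 - eps)

print(round(tau_from_emissivity(0.8), 1), round(tau_from_emissivity(0.9), 1))
# -> 1.6 2.3
```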

At the end of his paper, Miskolczi concludes:

Apparently, the global average cloud cover must not have a dramatic effect on the global average clear-sky optical thickness..

I can’t understand, from the paper, where this confidence comes from.

Conclusion

There is more in the paper, including some very suspect assumptions about radiative exchange. However, six out of the 19 references in the paper are to Miskolczi himself and the fundamental equations brought up for energy balance (where radiative exchange is referenced) rely on his more lengthy 2007 paper, Greenhouse effect in semi-transparent planetary atmospheres.

I will try to read this paper before commenting on these energy balance equations.

However, the key points are:

  • optical thickness of the total atmosphere is not a very useful number
  • the useful headline number has to be changes in TOA flux, or radiative forcing, or some value which expresses the overall radiative balance of the climate system (update: see this comment for the correct measure)
  • optical thickness calculated as constant over 60 years for CO2 and water vapor appears to prove that total optical thickness is not constant due to increases in other well-mixed “greenhouse” gases
  • clouds are not included in the calculation, but surely overwhelm the optical thickness calculations and cannot be assumed to be constant

Other Articles in the Series:

Part Two – Kirchhoff – why Kirchhoff’s law is wrongly invoked, as the author himself later acknowledged, from his 2007 paper

Part Three – Kinetic Energy – why kinetic energy cannot be equated with flux (radiation in W/m²), and how equation 7 is invented out of thin air (with interesting author comment)

Part Four – a minor digression into another error that seems to have crept into the Aa=Ed relationship

Part Five – Equation Soufflé – explaining why the “theory” in the 2007 paper is a complete dog’s breakfast

Part Six – Minor GHG’s – a less important aspect, but demonstrating the change in optical thickness due to the neglected gases N2O, CH4, CFC11 and CFC12.

Further Reading:

New Theory Proves AGW Wrong! – a guide to the steady stream of new “disproofs” of the “greenhouse” effect or of AGW. And why you can usually only be a fan of – at most – one of these theories.

References

The Stable Stationary Value of the Earth’s Global Average Atmospheric Planck-Weighted Greenhouse-Gas Optical Thickness, Miskolczi, Energy & Environment (2010)

Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), Collins et al, JGR (2006)

Radiation and Climate, Vardavas & Taylor, Oxford University Press (2007)

Global Physical Climatology, Hartmann, Academic Press (1994) – reviewed here

Notes

Note 1 – For longwave radiation (>4 μm), scattering is negligible in the atmosphere.


The subject of atmospheric heating rates is one which is worth spending time on.

What is a heating rate?

To see the usefulness of a heating rate let’s consider the per capita income of a country.

Per capita income compares the ratio of total $ to the total population. If we compare the total income of China to the total income of Laos we don’t have a useful comparison. If we compare the per capita income of China to the per capita income of Laos.. well, who knows whether we have a meaningful comparison – but at least we have something more useful. Something more relevant.

Energy absorbed in a layer of the atmosphere causes heating at a certain rate. Energy lost from a layer of the atmosphere causes cooling at a certain rate.

Heating rates tell us something different from total energy lost or gained. Suppose a 1m layer of the atmosphere gains 1,000 J/m², what will the temperature change be?

The specific heat capacity of the atmosphere at constant pressure is 1005 J/(K.kg) – which means it takes just over 1,000 J to lift the temperature of 1 kg of the atmosphere by 1K (=1°C).

However, the atmospheric density decreases with height:

Figure 1

At the surface, where pressure = 1000 mbar, the density = 1.2 kg/m³.

So 1,000 J/m² lifts the temperature of a 1m layer of the atmosphere at the surface by 0.83 K (calculated by ΔT=1,000/[1.2 x 1005]) .

At the top of the stratosphere, near 50km where the pressure = 1 mbar, the density = 0.0016 kg/m³.

Here, 1,000 J/m² lifts the temperature of a 1 m layer of the atmosphere by 620 K (calculated by ΔT=1,000/[0.0016 x 1005]).

So it’s a bit like sharing out the total income of China among the residents of Laos.

That’s why heating rates are useful – they relate the amount of energy with the amount of atmosphere.
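Both worked examples above use the same relation, ΔT = Q/(ρ·cp·Δz). A minimal sketch:

```python
CP = 1005.0  # specific heat of air at constant pressure, J/(K.kg)

def delta_T(energy_J_per_m2, density_kg_per_m3, thickness_m=1.0):
    # Temperature rise of a layer absorbing the given energy per unit area
    return energy_J_per_m2 / (density_kg_per_m3 * thickness_m * CP)

print(round(delta_T(1000.0, 1.2), 2))   # surface (1000 mbar)  -> 0.83 K
print(round(delta_T(1000.0, 0.0016)))   # ~50 km (1 mbar)      -> 622 K
```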

More CO2 equals More Absorption but More CO2 equals More Emission

One of the confusing aspects in atmospheric radiation comes as people start to consider the fact that the atmosphere emits as well as absorbs.

So more radiatively-active gases (=”greenhouse” gases) causes more absorption? Or more emission? Or doesn’t one just balance out the other and so there is no change?

There are legitimate questions to ask.

The only way to answer these questions is to solve the Schwarzschild equation – see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations.

What we will do is first of all review the heating/cooling rates vs altitude and try to understand some features qualitatively.

Here is the right way to think about the problem:

Absorption at any given wavelength depends on:

  • the quantity of gases that absorb at that wavelength
  • the effectiveness of each gas at absorbing at that wavelength
  • the “amount” of radiation travelling through that part of the atmosphere (note 1).

Emission at any wavelength depends on:

  • the quantity of gases that absorb at that wavelength (and therefore emit at the same wavelength)
  • the effectiveness of each gas at absorbing (and therefore emitting) at that wavelength
  •  the temperature of the gas

In the case of shortwave (=solar radiation) the atmosphere absorbs but does not emit. This is because the atmosphere is not hot enough to radiate significantly below 4 μm, see The Sun and Max Planck Agree – Part Two.

In the case of longwave (= terrestrial / atmospheric radiation) absorption is from radiation from above and below. But usually the radiation from below is a lot higher than from above. This is because the earth’s surface emits close to blackbody radiation (the surface has a very high emissivity over all wavelengths), and the atmosphere (which doesn’t emit as a blackbody) is hotter closer to the surface.

Doesn’t a Heating or Cooling Rate Mean that the Atmosphere is Heating up or Cooling Down?

No.

The sun heats the atmosphere (a heating rate), but the atmosphere radiates to space (a cooling rate), and also convection moves heat through the troposphere.

We can still have a heating rate, a cooling rate and convective heat transfer while the atmosphere is in approximate energy balance (=not changing in temperature). If the temperature of one part of the atmosphere is not changing then these will sum to zero.

So heating rates vs height give us insight into the strength of these effects, and we can break the effects up between the responsible gases (water vapor, CO2, ozone, etc).

Heating Rates

From the always excellent Grant Petty, A First Course in Atmospheric Radiation, the solar heating of the atmosphere, for a standardized tropical atmosphere:

From Petty (2006)

Figure 2 – Solar heating

If we showed total energy absorbed at each height in the atmosphere, then the troposphere would overwhelm the stratosphere (upper atmosphere). But because we are showing energy absorbed in proportion to the density of the atmosphere, the upper atmosphere appears more important.

We see that ozone causes the highest heating rate in the stratosphere, whereas water vapor causes the highest heating rate in the troposphere, and CO2 has a very small effect.

The water vapor heating rate is – of course – concentrated in the bottom few km of the atmosphere because water vapor is concentrated here.

Most of the absorbed solar radiation is absorbed by the earth’s surface. The surface absorption is not shown in this graph. In turn, the surface heats the atmosphere primarily through convection. The convective heat transfer is also not shown. 

Now let’s look at longwave heating (cooling) rates for a few different regions:

From Petty (2006)

Figure 3

We see that the heating rates are mostly negative, meaning that these are really cooling rates. Most of the atmosphere is cooling via longwave radiation. However, one small part of the atmosphere experiences a heating rate due to longwave radiation – the tropical tropopause.

Why?

The tropopause is the coldest part of the atmosphere – the top of the troposphere and bottom of the stratosphere. And the coldest part of the atmosphere radiates less than it absorbs.

Let’s see the breakdown of cooling rates by individual gases:

From Petty (2006)

Figure 4

We see that water vapor has a peak longwave cooling at around 3 km and another maximum at 10 km. The lower peak is caused by the “continuum” – also shown separately on the graph – which we will return to shortly.

Ozone shows a heating rate in the stratosphere below 30km. If we had the graph extend up to the top of the stratosphere, around 50km, we would see ozone with a cooling to space higher up.

We also see that CO2 has a very small cooling effect until we get into the stratosphere. Generally, each layer experiences a very small heating/cooling effect from CO2 because CO2 has such strong absorption that energy is absorbed from layers very close by – which are at very similar temperatures. The tropopause is the coldest part of the atmosphere so absorbs a little more radiation via CO2 than it emits – consequently a small heating effect.

As we get up into the stratosphere we see a progressively stronger cooling to space from CO2. In part, this is because of the reduction in pressure broadening at lower pressures = higher altitudes (see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Nine). This effect causes the absorptivity of CO2 to reduce at higher altitudes meaning that the radiation from the layer below can “get through” to space.

The Continuum

Water vapor has an unusual absorption profile. In the so-called “atmospheric window” of 8-12 μm where there are no strong absorption bands from any atmospheric molecule (apart from ozone at 9.6 μm), water vapor still absorbs. Here the absorption coefficient is proportional to the atmospheric pressure.

In general, the more we have of a particular gas, the more absorption. This is expressed in the Beer-Lambert law.

But for other gases, the absorption coefficient is a constant for a given wavelength – not proportional to pressure. In the Beer-Lambert law we multiply the mass of absorbing molecules in the path by the absorption coefficient to find the optical thickness (note 2). 

For the water vapor continuum the absorption coefficient is proportional to pressure. And absorptivity is a function of the absorption coefficient and the total mass in the path (which is also proportional to pressure). Therefore, total absorption in the continuum band is a very strong function of pressure – roughly as the square of pressure.

Water vapor itself is concentrated at lower levels in the atmosphere, so the total absorption due to the water vapor continuum falls off very quickly with height.

This is why the lower peak cooling rate occurs. The absorption by water vapor due to the continuum above 3 km is very small – so around 3km the atmosphere (in these wavelengths) can very effectively cool to space. Other bands of water vapor absorb more strongly, so effective cooling to space doesn’t really begin until the concentration of water vapor drops to very low values. Hence the second peak at 10 km.
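One way to see the height dependence: if the continuum’s layer opacity scales roughly as pressure squared (coefficient ∝ p, and path amount ∝ p), while an ordinary band scales simply with pressure, the fraction of column optical depth remaining above a height z falls off much faster for the continuum. The 8 km scale height, and the neglect of the separate water vapor profile, are simplifying assumptions of this sketch:

```python
import math

H = 8.0  # approximate pressure scale height, km (assumed round number)

def fraction_above(z_km, power):
    # Fraction of total column optical depth above height z when layer
    # opacity scales as pressure**power in an exponential atmosphere:
    # the integral of exp(-power*z/H) from z to infinity, normalized.
    return math.exp(-power * z_km / H)

for z in (0, 3, 6, 10):
    line_like = fraction_above(z, 1)   # opacity ~ p
    continuum = fraction_above(z, 2)   # opacity ~ p^2
    print(f"z = {z:>2} km: line {line_like:.2f}, continuum {continuum:.2f}")
```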

Stratospheric Cooling

A long time ago we had a look at Stratospheric Cooling. This strange phenomenon is expected from more CO2 in the atmosphere. All other things being equal, the troposphere will warm and the stratosphere will cool.

Radiative-convective models predict this. Once you’ve got to grips with basic radiation in the atmosphere, it is easy to see why the troposphere will warm.

But why will the stratosphere cool?

Some will look at Figure 4 and say “ah ha”. More CO2 will move the CO2 line over to the left and so that’s why the stratosphere will cool.

As a cautionary note, the heating rate at level z is equal to:

From Petty (2006)

Figure 5

Where among other terms, the italicized “T” is the band-averaged transmittance between z and z’, and the integrals are (obviously to the mathematicians) for each “level” (note 3) between the surface and z, or between the top of atmosphere (∞) and z..

If we went through this equation we would find that there are competing terms – terms which represent absorption of energy from other parts of the atmosphere (heating), and terms which represent emission of energy from this layer (cooling). Increasing CO2 increases absorption in the stratosphere. Increasing CO2 increases emission from the stratosphere.

Given that radiative-convective models predict stratospheric cooling we can say confidently that more CO2 will move the cooling curve in figure 4 to the left in the stratosphere (note 4). So emission will be higher than absorption.

However, we haven’t developed an intuitive understanding of why. At least, I haven’t.

To develop an intuitive understanding I would need the solution of these equations for a variety of conditions, and after playing around with changed parameters and reviewing results it would all start to make sense. That’s what I would hope.

But that’s just me. Others can perhaps just see it all dance out of the equations in a flash (think – the crazy one in The Hangover in the casino). Or out of the fundamental physics.

Conclusion

Heating rates help give insight into how the atmosphere absorbs and emits radiation from different “greenhouse” gases at different levels.

Generally the peak cooling rates for each band occur when that band “thins out” enough in the layers above to allow significant radiation to space, rather than just to the level immediately above.

Convection is the most important mechanism for moving heat in the troposphere (but not the stratosphere).

This article hasn’t considered convection at all – which just demonstrates the ongoing plot to hide the importance of convection. Once people realize how important convection is, radiative heating and radiative cooling to space will be.. the same.


Notes

Note 1 – Absorptivity is a different parameter from absorption. Absorptivity is the proportion of radiation absorbed and is dependent on the number of molecules of different radiatively-active gases. Absorption is the total amount of energy absorbed and so depends on the intensity of radiation passing through that part of the atmosphere and the absorptivity.

Note 2 – The Beer-Lambert law can be expressed in a number of different ways. Essentially the units for the amount of the gas (e.g. number of molecules, mass) in the radiation path has to match the units for the absorption coefficient. The same result is obtained.

Note 3 – There are no discrete “layers” in the atmosphere. This is a convenient term for explaining the physics in plainer English (as with many other inexact and non-formal explanations). All the properties of the atmosphere we are considering have continuous change with pressure and, therefore, with height.

Note 4 – The derivation of the equations for heating rates comes from the same equations which are used in radiative-convective models.

