
A while ago we looked at some basics in Heat Transfer Basics – Part Zero.

Equations aren’t popular but a few were included.

As a recap, there are three main mechanisms of heat transfer:

  • conduction
  • convection
  • radiation

In the climate system, conduction is generally negligible because gases and liquids like water don’t conduct heat well at all. (See note 2).

Convection is the transfer of heat by bulk motion of a fluid. Motion of fluids is very complex, which makes convection a difficult subject.

If the motion of the fluid arises from an external agent, for instance, a fan, a blower, the wind, or the motion of a heated object itself, which imparts the pressure to drive the flow, the process is termed forced convection.

If, on the other hand, no such externally induced flow exists and the flow arises “naturally” from the effect of a density difference, resulting from a temperature or concentration difference in a body force field such as gravity, the process is termed natural convection. The density difference gives rise to buoyancy forces due to which the flow is generated.

The main difference between natural and forced convection lies in the mechanism by which flow is generated.

From Heat Transfer Handbook: Volume 1, by Bejan & Kraus (2003).

The Boundary Layer

The first key to understanding heat transfer by convection is the boundary layer. A typical example is a fluid (e.g. air, water) forced over a flat plate:

From Incropera & DeWitt (2007)

This first graphic shows the velocity of the fluid. The parameter u∞ is the fluid velocity “at infinity” – or in layman’s terms, the velocity of the fluid “a long way” from the surface of the plate.

Another way to think about u∞ – it is the free-flowing velocity of the fluid before it comes into contact with the plate.

Take a look at the velocity profile:

At the plate the velocity is zero, because the fluid particles in contact with the surface are brought to rest. Particles in the “next layer” are slowed by the particles beneath them. Further and further out, this effect of the stationary plate is more and more reduced, until finally the fluid is not slowed down at all.

The thick black curve, δ, is the boundary layer thickness. In practice this is usually taken to be the point where the velocity is 99% of its free flowing value. You can see that just at the point where the fluid starts to flow over the plate – the boundary layer is zero. Then the plate starts to slow the fluid down and so progressively the boundary layer thickens.
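The growth of the boundary layer can be made concrete with a short Python sketch. It uses the standard laminar flat-plate approximation δ ≈ 5x/√Reₓ – a textbook result, not a formula quoted in this article:

```python
import math

def boundary_layer_thickness(x, u_inf, nu):
    """Laminar flat-plate boundary layer thickness via the Blasius
    approximation: delta ~ 5x / sqrt(Re_x)."""
    re_x = u_inf * x / nu              # local Reynolds number
    return 5.0 * x / math.sqrt(re_x)

# Air at 15°C: kinematic viscosity nu = mu / rho ~ 1.5e-5 m²/s
nu_air = 1.8e-5 / 1.2
for x in [0.1, 0.5, 1.0]:
    d = boundary_layer_thickness(x, u_inf=1.0, nu=nu_air)
    print(f"x = {x} m: delta ≈ {d * 1000:.1f} mm")
```

The thickening with distance along the plate falls out of the √x dependence: at 1 m/s in air the laminar boundary layer is only a couple of centimeters thick after a meter of plate.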

Here is the resulting temperature profile:

From Incropera & DeWitt (2007)

In this graphic T∞ is the temperature of the “free flowing fluid” and Ts is the temperature of the flat plate, which (in this case) is higher than the free-flowing fluid temperature. Therefore, heat will transfer from the plate to the fluid.

The thermal boundary layer, δt, is defined in a similar way to the velocity boundary layer, but using temperature instead.

How does heat transfer from the plate to the fluid? At the surface the velocity of the fluid is zero and so there is no fluid motion.

At the surface, energy transfer only takes place by conduction (note 1).

In some cases we also expect to see mass transfer – for example, air over a water surface where water evaporates and water vapor gets carried away. (But not with air over a steel plate).

From Incropera & DeWitt (2007)

So a concentration boundary layer develops.

Newton’s Law of Cooling

Many people have come across this equation:

q” = h(Ts – T∞)

where q” = heat flux in W/m², h is the convection coefficient, and the two temperatures were defined above.
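As a trivial sketch of the equation in code (the value of h here is purely illustrative – determining it is the whole problem, as discussed below):

```python
def convective_flux(h, t_surface, t_fluid):
    """Newton's law of cooling: q'' = h * (Ts - T_inf), in W/m²."""
    return h * (t_surface - t_fluid)

# A plate at 40°C in 20°C air, with an assumed h of 25 W/m²K:
print(convective_flux(h=25.0, t_surface=40.0, t_fluid=20.0))  # 500.0 W/m²
```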

The problem is determining the value of h.

It depends on a number of fluid properties:

  • density
  • viscosity
  • thermal conductivity
  • specific heat capacity

But also on:

  • surface geometry
  • flow conditions

Turbulence

The earlier examples showed laminar flow. However, turbulent flow often develops:

Flow in the turbulent region is chaotic and characterized by random, three-dimensional motion of relatively large parcels of fluid.

Check out this very short video showing the transition from laminar to turbulent flow.

What determines whether flow is laminar or turbulent and how does flow become turbulent?

The transition from laminar to turbulent flow is ultimately due to triggering mechanisms, such as the interaction of unsteady flow structures that develop naturally within the fluid or small disturbances that exist within many typical boundary layers. These disturbances may originate from fluctuations in the free stream, or they may be induced by surface roughness or minute surface vibrations.

from Incropera & DeWitt (2007).

Imagine treacle (=molasses) flowing over a plate. It’s hard to picture the flow becoming turbulent. That’s because treacle is very viscous. Viscosity is a measure of how much resistance there is to different speeds within the fluid – how much “internal resistance”.

Now picture water moving very slowly over a plate. Again it’s hard to picture the flow becoming turbulent. The reason in this case is because inertial forces are low. Inertial force is the force applied on other parts of the fluid by virtue of the fluid motion.

The higher the inertial forces the more likely fluid flow is to become turbulent. The higher the viscosity of the fluid the less likely the fluid flow is to become turbulent – because this viscosity damps out the random motion.

The ratio between the two is the important parameter. This is known as the Reynolds number.

Re = ρux / μ

where ρ = density, u = free stream velocity, x is the distance from the leading edge of the surface and μ = dynamic viscosity

Once Re goes above around 5 × 10⁵ (500,000), flow becomes turbulent.

For air at 15°C and sea level, ρ = 1.2 kg/m³ and μ = 1.8 × 10⁻⁵ kg/m·s.

Solving this equation for these conditions gives a threshold value of u∞x > 7.5 for turbulence. This means that if the wind speed (in m/s) multiplied by the length of surface over which the wind flows (in m) is greater than 7.5, we will get turbulent flow.

For example, a slow wind speed of 1 m/s (2.2 miles / hour) over 7.5 meters of surface will produce turbulent flow. When you consider the wind blowing over many miles of open ocean you can see that the air flow will almost always be turbulent.
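The arithmetic is easy to check in a few lines of Python (using the 5 × 10⁵ threshold and the air properties quoted above):

```python
def reynolds_number(rho, u, x, mu):
    """Re = rho * u * x / mu, at distance x along the surface."""
    return rho * u * x / mu

RE_CRITICAL = 5e5  # typical flat-plate transition value

rho_air, mu_air = 1.2, 1.8e-5  # air at 15°C, sea level
for u, x in [(1.0, 1.0), (1.0, 10.0)]:
    re = reynolds_number(rho_air, u, x, mu_air)
    regime = "turbulent" if re > RE_CRITICAL else "laminar"
    print(f"u = {u} m/s over {x} m: Re = {re:.2e} ({regime})")
```

At 1 m/s the first case (1 m of surface) stays laminar, while the second (10 m of surface, so u∞x = 10 > 7.5) crosses the critical Reynolds number.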

The great physicist and Nobel Laureate Richard Feynman called turbulence the most important unsolved problem of classical physics.

In a nutshell, it’s a little tricky. So how do we determine convection coefficients?

Empirical Measurements & Dimensionless Ratios

Calculation of the convection heat transfer coefficient, h, in the equation we saw earlier can only be done empirically. This means measurement.

However, there are a whole suite of similarity parameters which allow results from one situation to be used in “similar circumstances”.

It’s not an easy subject to understand “intuitively” because the demonstration of these similarity parameters (e.g., Reynolds, Prandtl, Nusselt and Sherwood numbers) relies upon first seeing the differential equations governing fluid flow and heat & mass transfer – and then the transformation of these equations into a dimensionless form.

As the simplest example, the Reynolds number tells us when flow becomes turbulent regardless of whether we are considering air, water or treacle.

And a result for one geometry can be re-used in a different scenario with similar geometries.

Therefore, many tables and standard empirical equations exist for standard geometries – e.g. fluid flow over cylinders, banks of pipes.

Here are some results for air flow over a flat isothermal plate (isothermal = all at the same temperature) – calculated using empirically-derived equations:


The 1st graph shows that the critical Reynolds number of 5 × 10⁵ is reached at 1.3 m. The 2nd graph shows how the boundary layer grows, first under laminar flow and then under turbulent flow – see how it jumps up as turbulent flow starts. The 4th graph shows the local convection coefficient as a function of distance from the leading edge – as well as the average value across the 2 m of flat plate.

Conclusion

Not much of a conclusion yet, but this article is already long enough. In the next article we will look at the experimental results of heat transfer from the ocean to the atmosphere.

Notes

Note 1 – Heat transfer by radiation might also take place depending on the materials in question.

Note 2 – Of course, as explained in the detailed section on convection, heat cannot be transferred across a boundary between a surface and a fluid by convection. Conduction is therefore important at the boundary between the earth’s surface and atmosphere.


A long time ago I wrote the article The Dull Case of Emissivity and Average Temperatures and expected that would be the end of the interest in emissivity. But it is a gift that keeps on giving, with various people concerned that no one has really been interested in measuring surface emissivity properly.

Background

All solid and liquid surfaces emit thermal radiation according to the Stefan-Boltzmann formula:

E = εσT⁴

where ε = emissivity, a material property; σ = 5.67 × 10⁻⁸ W/m²K⁴; T = temperature in kelvin (absolute temperature)

and E is the flux in W/m²
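The formula is simple to evaluate. As a sketch in Python (the 15°C water surface and ε = 0.96 are illustrative values, taken from the broadband result later in this article):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m²K⁴

def emitted_flux(emissivity, t_kelvin):
    """E = epsilon * sigma * T^4, in W/m²."""
    return emissivity * SIGMA * t_kelvin ** 4

# A water surface at 15°C (288.15 K) with broadband emissivity ~0.96:
print(round(emitted_flux(0.96, 288.15), 1))  # ≈ 375 W/m²
```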

More about this formula and background on the material properties in Planck, Stefan-Boltzmann, Kirchhoff and LTE.

The parameter called emissivity is the focus of this article. It is of special interest because to calculate the radiation from the earth’s surface we need to know only temperature and emissivity.

Emissivity is a value between 0 and 1. It also depends on the wavelength of radiation (and, for some surfaces like metals, on the direction). Because the wavelengths of radiation emitted depend on temperature, emissivity also depends on temperature.

When emissivity = 1, the body is called a “blackbody”. It’s just the theoretical maximum that can be radiated. Some surfaces are very close to a blackbody and others are a long way off.

Note: I have seen many articles by keen budding science writers who have some strange ideas about “blackbodies”. The only difference between a blackbody and a non-blackbody is that the emissivity of a blackbody = 1, and the emissivity of a non-blackbody is less than 1. That’s it. Nothing else.

The wavelength dependence of emissivity is very important. If we take snow for example, it is highly reflective to solar (shortwave) radiation with as much as 80% of solar radiation being reflected. Solar radiation is centered around a wavelength of 0.5μm.

Yet snow is highly absorbing to terrestrial (longwave) radiation, which is centered around a wavelength of 10μm. The absorptivity and emissivity around freezing point are about 0.99 – meaning that only 1% of incident longwave radiation would be reflected.

Let’s take a look at the Planck curve – the blackbody radiation curve – for surfaces at a few slightly different temperatures:

The emissivity (as a function of wavelength) simply modifies these curves.

Suppose, for example, that the emissivity of a surface was 0.99 across this entire wavelength range. In that case, a surface at 30°C would radiate like the light blue curve but at 99% of the values shown. If the emissivity varies across the wavelength range then you simply multiply the emissivity by the intensity at each wavelength to get the expected radiation.
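The “multiply the Planck curve by the emissivity” recipe can be sketched directly. The Planck function below is the standard spectral radiance formula; the wavelength and temperature are illustrative:

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann

def planck_spectral(wavelength_m, t_kelvin):
    """Blackbody spectral radiance B(lambda, T), in W per m² per m per sr."""
    a = 2 * H * C ** 2 / wavelength_m ** 5
    return a / (math.exp(H * C / (wavelength_m * KB * t_kelvin)) - 1)

def surface_spectral(wavelength_m, t_kelvin, emissivity):
    """Real-surface radiance: the Planck curve scaled by emissivity."""
    return emissivity * planck_spectral(wavelength_m, t_kelvin)

# At 10 μm and 30°C, an emissivity of 0.99 trims the curve by exactly 1%:
wl, T = 10e-6, 303.15
print(surface_spectral(wl, T, 0.99) / planck_spectral(wl, T))  # 0.99
```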

Sometimes emissivity is quoted as an average for a given temperature – this takes into account the shape of the Planck curve shown in the graphs above.

Often, when emissivity is quoted as an overall value, the total flux has been measured for a given temperature and the emissivity is simply:

ε =  actual radiation measured / blackbody theoretical radiation at that temperature


In practice the calculation is slightly more involved, see note 1.

It turns out that the emissivity of water and of the ocean surface is an involved subject.

And because of the importance of calculating the sea surface temperature from satellite measurements, the emissivity of the ocean in the “atmospheric window” (8-14 μm) has been the subject of many hundreds of papers (perhaps thousands). These somewhat overwhelm the papers on the less important subject of “general ocean emissivity”.

Measurements

Aside from climate, water itself is an obvious subject of study for spectroscopy.

For example, 29 years ago Miriam Sidran writing Broadband reflectance and emissivity of specular and rough water surfaces, begins:

The optical constants of water have been extensively studied because of their importance in science and technology. Applications include a) remote sensing of natural water surfaces, b) radiant energy transfer by atmospheric water droplets, and c) optical properties of diverse materials containing water, such as soils, leaves and aqueous solutions.

In this study, values of the complex index of refraction from six recent articles were averaged by visual inspection of the graphs, and the most representative values in the wavelength range of 0.200 μm to 5 cm were determined. These were used to find the directional polarized reflectance and emissivity of a specular surface and the Brewster or pseudo-Brewster angle as functions of wavelength.

The directional polarized reflectance and emissivity of wind-generated water waves were studied using the facet slope distribution function for a rough sea due to Cox and Munk [1954].

Applications to remote sensing of sea surface temperature and wave state are discussed, including effects of salinity.

Emphasis added. She also comments in her paper:

For any wavelength, the total emissivity, ε, is constant for all θ [angles] < 45° [from vertical]; this follows from Fig. 8 and Eq. (6a). It is important in remote sensing of thermal radiation from space, as discussed later.

The polarized emissivities are independent of surface roughness for θ < 25°, while for θ > 25°, the thermal radiation is partly depolarized by the roughness.

This means that when you look at the emission radiation from directly above (and close to directly above) the sea surface roughness doesn’t have an effect.

I thought some other comments might also be interesting:

The 8-14-μm spectral band is chosen for discussion here because (a) it is used in remote sensing and (b) the atmospheric transmittance, τ, in this band is a fairly well-known function of atmospheric moisture content. Water vapor is the chief radiation absorber in this band.

In Eqs. (2)-(4), n and k (and therefore A and B) are functions of salinity. However, the emissivity value, ε, computed for pure water differs from that of seawater by <0.5%.

When used in Eqs. (10), it causes an error of <0.20°C in retrieved Ts [surface temperature]. Since ε in this band lies between 0.96 and 0.995, approximation ε= 1 is routinely used in sea surface temperature retrieval. However, this has been shown to cause an error of -0.5 to -1.0°C for very dry atmospheres. For very moist atmospheres, the error is only ≈0.2°C.

One of the important graphs from her paper:


Emissivity = 1 – Reflectance. The graph shows Reflectance vs Wavelength vs Angle of measurement.

I took the graph (coarse as it is) and extracted the emissivity vs wavelength function (using numerical techniques). I then calculated the blackbody radiation for a 15°C surface and the radiation from a water surface using the emissivity from the graph above for the same 15°C surface. Both were calculated from 1 μm to 100 μm:

The “unofficial” result, calculating the average emissivity from the ratio: ε = 0.96.

This result is valid for 0-30°C. But I suspect the actual value will be modified slightly by the solid angle calculations. That is, the total flux from the surface (the Stefan-Boltzmann equation) is the spectral intensity integrated over all wavelengths, and integrated over all solid angles. So the reduced emissivity closer to the horizon will affect this measurement.
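The solid-angle averaging mentioned here can be illustrated with a short numerical sketch. The directional emissivity profile below is a made-up toy function, not measured data – the point is only the cos θ sin θ weighting that turns a directional emissivity into a hemispherical (flux-weighted) average:

```python
import math

def hemispherical_emissivity(eps_of_theta, n=1000):
    """Flux-weighted average of a directional emissivity eps(theta):
    eps_h = 2 * integral_0^{pi/2} eps(theta) cos(theta) sin(theta) dtheta.
    The cos*sin weight comes from projecting onto the surface normal
    and integrating over solid angle (midpoint rule used here)."""
    step = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * step
        total += eps_of_theta(theta) * math.cos(theta) * math.sin(theta)
    return 2 * total * step

# Toy directional emissivity: 0.99 near vertical, falling off past 50°
def toy_eps(theta):
    cutoff = math.radians(50)
    return 0.99 if theta < cutoff else 0.99 - 0.4 * (theta - cutoff)

print(round(hemispherical_emissivity(toy_eps), 3))
```

Because the cos θ sin θ weight peaks at 45° and vanishes at the horizon, the reduced emissivity near grazing angles pulls the average down only modestly – which is the effect suspected in the paragraph above.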

Niclòs et al – 2005

One of the most interesting recent papers is In situ angular measurements of thermal infrared sea surface emissivity—validation of models, Niclòs et al (2005). Here is the abstract:

In this paper, sea surface emissivity (SSE) measurements obtained from thermal infrared radiance data are presented. These measurements were carried out from a fixed oilrig under open sea conditions in the Mediterranean Sea during the WInd and Salinity Experiment 2000 (WISE 2000).

The SSE retrieval methodology uses quasi-simultaneous measurements of the radiance coming from the sea surface and the downwelling sky radiance, in addition to the sea surface temperature (SST). The radiometric data were acquired by a CIMEL ELECTRONIQUE CE 312 radiometer, with four channels placed in the 8–14 μm region. The sea temperature was measured with high-precision thermal probes located on oceanographic buoys, which is not exactly equal to the required SST. A study of the skin effect during the radiometric measurements used in this work showed that a constant bulk–skin temperature difference of 0.05±0.06 K was present for wind speeds larger than 5 m/s. Our study is limited to these conditions.

Thus, SST used as a reference for SSE retrieval was obtained as the temperature measured by the contact thermometers placed on the buoys at 20-cm depth minus this bulk–skin temperature difference.

SSE was obtained under several observation angles and surface wind speed conditions, allowing us to study both the angular and the sea surface roughness dependence. Our results were compared with SSE models.

The introduction explains why specifically they are studying the dependence of emissivity on the angle of measurement – for reasons of accurate calculation of sea surface temperature:

The requirement of a maximum uncertainty of ±0.3 K in sea surface temperature (SST) as input to climate models and the use of high observation angles in the current space missions, such as the 55° for the forward view of the Advanced Along Track Scanning Radiometer (AATSR) (Llewellyn-Jones et al., 2001) on board ENVISAT, need a precise and reliable determination of sea surface emissivity (SSE) in the thermal infrared region (TIR), as well as analyses of its angular and spectral dependences.

The emission of a rough sea surface has been studied over the last years due to the importance of the SSE for accurate SST retrieval. A reference work for many subsequent studies has been the paper written by Cox and Munk (1954).

The experimental setup:

From Niclos (2004)

The results (compared with one important model from Masuda et al 1988):

From Niclos (2004)

This paper also goes on to compare the results with the model of Wu & Smith (1997) and indicates the Wu & Smith’s model is a little better.

The tabulated results:

Note that the emissivities are in the 8-14μm range.

You can see that the emissivity when measured from close to vertical is 0.98 – 0.99 at two different wind speeds.

Konda et al – 1994

A slightly older paper which is not concerned with angular dependence of sea surface emissivity is by Konda, Imasato, Nishi and Toda (1994).

They comment on a few older papers:

Buettner and Kern (1965) estimated the sea surface emissivity to be 0.993 from an experiment using an emissivity box, but they disregarded the temperature difference across the cool skin.

Saunders (1967b, 1968) observed the plane sea surface irradiance from an  airplane and determined the reflectance. By determining the reflectance as the ratio of the differences in energy between the clear and the cloudy sky at different places, he calculated the emissivity to be 0.986. The process of separating the reflection from the surface irradiance, however, is not precise.

Mikhaylov and Zolotarev (1970) calculated the emissivity from the optical constant of the water and found the average in the infrared region was 0.9875.

The observation of Davies et al. (1971) was performed on Lake Ontario with a wave height less than 25 cm. They measured the surface emission isolated from sky radiation by an aluminum cone, and estimated the emissivity to be 0.972. The aluminum was assumed to act as a mirror in the infrared region. In fact, aluminum does not work as a perfect mirror.

Masuda et al. (1988) computed the surface emissivity as a function of the zenith angle of observed radiation and wind speed. They computed the emissivity from the reflectance of a model sea surface consisting of many facets, and changed their slopes according to Gaussian distribution with respect to surface wind. The computed emissivity in 11 μm was 0.992 under no wind.

Each of these studies, in trying to determine the value of emissivity, failed to distinguish surface emission from reflection and to evaluate the temperature difference across the cool skin. A summary of these studies is tabulated in Table 1.

The table summarizing some earlier work:

Konda (1994)

Konda and his co-workers took measurements over a one year period from a tower in Tanabe Bay, Japan.

They calculated from their results that the ocean emissivity was 0.984±0.004.

One of the challenges for Konda’s research and for Niclòs is the issue of sea surface temperature measurement itself. Here is a temperature profile which was shown in the comments of Does Back Radiation “Heat” the Ocean? – Part Three:

Kawai & Wada (2007)

The point is the actual surface from which the radiation is emitted will usually be at a slightly different temperature from the bulk temperature (note the logarithmic scale of depth). This is the “cool skin” effect. This surface temperature effect is also moderated by winds and is very difficult to measure accurately in field conditions.

Smith et al – 1996

Another excellent paper which measured the emissivity of the ocean is by Smith et al (1996):

An important objective in satellite remote sensing is the global determination of sea surface temperature (SST). For such measurements to be useful to global climate research, an accuracy of ±0.3K or better over a length of 100km and a timescale of days to weeks must be attained. This criterion is determined by the size of the SST anomalies (≈1K) that can cause significant disturbance to the global atmospheric circulation patterns and the anticipated size of SST perturbations resulting from global climate change. This level of uncertainty is close to the theoretical limits of the atmospheric corrections.

It is also a challenge to demonstrate that such accuracies are being achieved, and conventional approaches, which compare the SST derived from drifting or moored buoys, generally produce results with a scatter of ±0.5 to 0.7K. This scatter cannot be explained solely by uncertainties in the buoy thermometers or the noise-equivalent temperature difference of the AVHRR, as these are both on the order of 0.2K or less; it is more likely due to surface emissivity/reflectivity uncertainties, residual atmospheric effects, or the methods of comparison.

Note that the primary focus of this research was to have accurate SST measurements from satellites.

From Smith et al (1996)

The experimental work on the research vessel Pelican included a high spectral resolution Atmospheric Emitted Radiance Interferometer (AERI) which was configured to make spectral observations of the sea surface radiance at several view angles. Any measurement from the surface of course, is the sum of the emitted radiance from the surface as well as the reflected sky radiance.

Also measured:

  • ocean salinity
  • intake water temperature
  • surface air temperature
  • humidity
  • wind velocity
  • SST within the top 15cm of depth

There was also independent measurement of the radiative temperature of the sea surface at 10μm with a Heimann broadband radiation thermometer “window” radiometer. And radiosondes were launched from the ship roughly every 3 hours.

Additionally, various other instruments took measurements from a flight altitude of 20km. Satellite readings were also compared.

The AERI measured the spectral distribution of radiance from 3.3μm to 20μm at 4 angles. Upwards at 11.5° from zenith, and downwards at 36.5°, 56.5° and 73.5°.

There’s a lot of interesting discussion of the calculations in their paper. Remember that the primary aim is to enable satellite measurements to have the most accurate measurements of SST and satellites can only really “see” the surface through the “atmospheric window” from 8-12μm.

Here are the wavelength dependent emissivity results shown for the 3 viewing angles. You can see that at the lowest viewing angle of 36.5° the emissivity is 0.98 – 0.99 in the 8-12μm range.

From Smith et al (1996)

Note that the wind speed doesn’t have any effect on emissivity at the more direct angle, but as the viewing angle moves to 73.5° the emissivity has dropped and high wind speeds change the emissivity considerably.

Henderson et al – 2003

Henderson et al (2003) is one of the many papers which consider the theoretical basis of how viewing angles change the emissivity and derive a model.

Just as an introduction, here is the theoretical variation in emissivity with measurement angle, versus “refractive index” as computed by the Fresnel equations:

The legend is refractive index from 1.20 to 1.35. Water, at visible wavelengths, has a refractive index of 1.33. This shows how the emissivity reduces once the viewing angle increases above 50° from the vertical.

The essence of the problem of sea surface roughness for large viewing angles is shown in the diagram below, where multiple reflections take place:

Henderson (2003)

Henderson and his co-workers compare their results with the measured results of Smith et al (1996) and also comment that at zenith viewing angles the emissivity does not depend on the wind speed, but at larger angles from vertical it does.

A quick summary of their model:

We have developed a Monte Carlo ray-tracing model to compute the emissivity of computer-rendered, wind-roughened sea surfaces. The use of a ray-tracing method allows us to include both the reflected emission and shadowing and, furthermore, permits us to examine more closely how these processes control the radiative properties of the surface. The intensity of the radiation along a given ray path is quantified using Stokes vectors, and thus, polarization is explicitly included in the calculations as well.

Their model results compare well with the experimental results. Note that the approach of generating a mathematical model to calculate how emissivity changes with wind speed and, therefore, wave shape is not at all new.

Water retains its inherent properties of emissivity regardless of how it is moving or what shape it is. The theoretical challenge is handling the multiple reflections, absorptions, re-emissions that take place when the radiance from the water is measured at some angle from the vertical.

Conclusion

The best up-to-date measurements of ocean emissivity in the 8-14 μm range are 0.98 – 0.99. The 8-14 μm range is particularly well characterized because of the intense focus on sea surface temperature measurements from satellite.

From quite ancient data, the average emissivity of water across a very wide broadband range (1-100 μm) is 0.96 for water temperatures from 0-30°C.

The values from the ocean when measured close to the vertical are independent of wind speed and sea surface roughness. As the angle of measurement moves from the vertical around to the horizon the measured emissivity drops and the wind speed affects the measurement significantly.

These values have been extensively researched because the calculation of sea surface temperature from satellite measurements in the 8-14μm “atmospheric window” relies on the accurate knowledge of emissivity and any factors which affect it.

For climate models – I haven’t checked what values they use. I assume they use the best experimental values from the field. That’s an assumption. I’ve already read enough on ocean emissivity.

For energy balance models, like the Trenberth and Kiehl update, assuming an emissivity of 1 doesn’t really affect their calculations. The reason, stated simply, is that the upwards surface radiation and the downward atmospheric radiation are quite close in magnitude. For example, the globally and annually averaged values are 396 W/m² (upward surface) vs 340 W/m² (downward atmospheric).

Suppose the emissivity drops from 0.98 to 0.97 – what is the effect on upwards radiation through the atmosphere?

The upwards radiation has dropped by 4W/m², but the reflected atmospheric radiation has increased by 3.4W/m². The net upwards radiation through the atmosphere has reduced by only 0.6 W/m².
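That arithmetic can be written out explicitly. The sketch below uses the article’s round numbers (396 W/m² upward at ε = 0.98, 340 W/m² downward) and the surface balance εσT⁴ + (1 − ε)·DLR − DLR:

```python
SIGMA_T4 = 396.0 / 0.98   # blackbody-equivalent surface flux ≈ 404 W/m²
DLR = 340.0               # downward atmospheric radiation, W/m²

def net_upward(emissivity):
    """Net upward longwave at the surface:
    emitted + reflected sky radiation - incoming sky radiation."""
    emitted = emissivity * SIGMA_T4
    reflected = (1.0 - emissivity) * DLR
    return emitted + reflected - DLR

delta = net_upward(0.98) - net_upward(0.97)
print(round(delta, 1))  # 0.6 W/m²
```

The 0.01 change in ε scales both the emitted term (down ~4 W/m²) and the reflected term (up ~3.4 W/m²), so the net change is only their difference.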

One of our commenters asked what value the IPCC uses. The answer is they don’t use a value at all because they summarize research from papers in the field.

Whether they do it well or badly is a subject of much controversy, but what is most important to understand is that the IPCC does not write papers, or perform GCM model runs, or do experiments – and that is why you see almost no equations in their many thousands of pages of discussion on climate science.

For those who don’t believe the “greenhouse” effect exists, take a look at Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part One in the light of all the measured results for ocean emissivity.

On Another Note

It’s common to find claims on various blogs and in comments on blogs that climate science doesn’t do much actual research.

I haven’t found that to be true. I have found the opposite.

Whenever I have gone digging for a particular subject, whether it is the diurnal temperature variation in the sea surface, diapycnal & isopycnal eddy diffusivity, ocean emissivity, or the possible direction and magnitude of water vapor feedback, I have found a huge swathe of original research, of research building on other research, of research challenging other research, and detailed accounts of experimental methods, results and comparison with theory and models.

Just as an example, in the case of emissivity of sea surface, at the end of the article you can see the first 30 or so results pulled up from one journal – Remote Sensing of the Environment for the search phrase “emissivity sea surface”. The journal search engine found 348 articles (of course, not every one of them is actually about ocean emissivity measurements).

Perhaps it might turn out to be the best journal for this subject, but it’s still just one journal.

References

Broadband reflectance and emissivity of specular and rough water surfaces, Sidran, Applied Optics (1981)

In situ angular measurements of thermal infrared sea surface emissivity—validation of models, Niclòs, Valor, Caselles, Coll & Sànchez, Remote Sensing of Environment (2005)

Measurement of the Sea Surface Emissivity, Konda, Imasato, Nishi and Toda, Journal of Oceanography (1994)

Observations of the Infrared Radiative Properties of the Ocean—Implications for the Measurement of Sea Surface Temperature via Satellite Remote Sensing, Smith, Knuteson, Revercomb, Feltz, Nalli, Howell, Menzel, Brown, Brown, Minnett & McKeown, Bulletin of the American Meteorological Society (1996)

The polarized emissivity of a wind-roughened sea surface: A Monte Carlo model, Henderson, Theiler & Villeneuve, Remote Sensing of Environment (2003)

Notes

Note 1: The upward radiation from the surface is the sum of three contributions: (i) direct emission of the sea surface, which is attenuated by the absorption of the atmospheric layer between the sea surface and the instrument; (ii) reflection of the downwelling sky radiance on the sea, attenuated by the atmosphere; and (iii) the upwelling atmospheric radiance emitted in the observing direction.

So the measured radiance can be expressed as:

L = ετB(Ts) + (1 − ε)τL↓ + L↑

where τ is the transmittance of the atmosphere between the sea surface and the instrument, B(Ts) is the Planck radiance at the sea surface temperature, L↓ is the downwelling sky radiance, L↑ is the upwelling atmospheric radiance, and the three terms on the right are each of the three contributions noted, in the same order.

Note 2: 1/10th of the search results returned from one journal for the search term “emissivity sea surface”:

Remote Sensing of Environment - search results



This first part considers some elementary points. In the next part we will consider more advanced aspects of this subject.

Since 1978 we have had satellites continuously measuring:

  • incoming solar radiation
  • reflected solar radiation
  • outgoing terrestrial radiation

To see how we can differentiate the solar and terrestrial radiation, take a look at The Sun and Max Planck Agree – Part Two.

Top of Atmosphere Satellite Measurements

The top of atmosphere (TOA) radiation from the climate system is usually known as outgoing longwave radiation, or OLR. “Longwave” is a climate convention for wavelength >4μm.

Here’s what the OLR looks like to the satellites. I thought it might be interesting for some people to see how the values change each month:

CERES OLR

All of this data comes from CERES – Clouds and the Earth’s Radiant Energy System. You can review this data for yourself here. How accurate is the data?

The uncertainty of an individual top-of-atmosphere OLR measurement is 5 W/m², while the uncertainty of average OLR over a 1°-latitude x 1°-longitude box, which contains many viewing angles, is ≈1.5 W/m²

from Dessler et al (2008) writing about the CERES data.

If we summarize this data into monthly global averages:

The average for 2009 is 239 W/m². This average includes days, nights and weekends. The average can be converted to the total energy emitted from the climate system over a year like this:

Total energy radiated by the climate system into space in one year = 239 x number of seconds in a year x surface area of the earth in square meters

= 239 x (60 x 60 x 24 x 365) x (4 x 3.14 x (6.37 x 10⁶)²)

= 239 x 3.15 x 10⁷ x 5.10 x 10¹⁴

E_TOA = 3.8 x 10²⁴ J

The reason for calculating the total energy in 2009 is because many people have realized that there is a problem with average temperatures and imagine that this problem is carried over to average radiation. Not true. We can take average radiation and convert it into total energy with no problem.
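The arithmetic above is easy to check for yourself; here's a minimal sketch using the article's rounded values for the average OLR and the earth's radius:

```python
import math

# Convert a global-average flux (W/m²) into total energy radiated in a year.
def total_energy_per_year(avg_flux, radius_m=6.37e6):
    seconds_per_year = 60 * 60 * 24 * 365
    surface_area = 4 * math.pi * radius_m ** 2  # sphere area in m²
    return avg_flux * seconds_per_year * surface_area

print(f"{total_energy_per_year(239):.2e} J")  # ≈ 3.8e24 J
```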

What about the radiation from the surface?

Surface Radiation

What do the satellite measurements say about surface radiation?

Nothing.

Well strictly speaking – they say a lot, but only once certain theories of radiative transfer are embraced.

To be more accurate, what satellite measurements OF surface radiation do we have?

None.

That’s because the atmosphere interacts with the radiation emitted from the surface. So any top-of-atmosphere measurements by satellite are not “unsullied surface measurements”.

There are temperature stations all around the world – not enough for some people, and not as well-located as they could be – but what about stations for measuring radiation upwards from the earth (and ocean) surface?

Thin on the ground, extremely thin.

Luckily, there is a very simple formula for radiation emitted from the surface of the earth:

E = εσT⁴

where σ is a constant = 5.67 x 10⁻⁸ W/m²K⁴, ε = emissivity, a property of the surface material, and T = temperature in K (absolute temperature)

This equation is called the Stefan-Boltzmann equation. More about it in Planck, Stefan-Boltzmann, Kirchhoff and LTE. It is a well-proven equation with 150 years of evidence behind it – and from all areas of engineering and physics. It is used in calculations for heat-exchangers and boilers, for example.

Still, many people when they find out that the radiation from the surface of the earth is calculated not measured are very suspicious. It’s good to be skeptical. Ask questions. But don’t assume it’s made up just because it’s calculated. Why trust thermometers? They actually rely on material properties as well..
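For readers who want to check the Stefan-Boltzmann equation themselves, here is a minimal sketch; the 288K and ε = 0.98 values are simply the round numbers used elsewhere in the article:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m²K⁴

def emitted_flux(temp_k, emissivity=1.0):
    """Radiated flux E = ε·σ·T⁴ in W/m²."""
    return emissivity * SIGMA * temp_k ** 4

# A surface at 15°C (288K):
print(round(emitted_flux(288), 1))        # blackbody: ≈ 390.1 W/m²
print(round(emitted_flux(288, 0.98), 1))  # ocean-like emissivity: ≈ 382.3 W/m²
```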

Anyway, back to the emission of radiation from the surface. What about this parameter emissivity, ε?

Emissivity is a function of wavelength. This means it varies as the wavelength of radiation varies. Some examples, not all of them materials from the surface of the earth:

Reflectivity vs wavelength for various surfaces, Incropera (2007)


Note that reflectivity = 1 – emissivity in the graph above.

Without going into a lot of detail, all it means is that the measurement of emissivity needs to be for the appropriate temperature. See note 1.

If we measure emissivity of water one day, we find it is the same the next day and also in 589 days time. It is a material property which means that once measured, the only questions we have are:

a) what is the temperature of the surface
b) what is the material of the surface (so we can look up the measured emissivity for this temperature)

Generally the emissivity of the earth’s surface is very close to 1 (for “longwave” measurements).

Oceans, which cover 71% of the earth’s surface, have an emissivity of about 0.98 – 0.99.

The average temperature of the earth’s surface (including days, nights and all locations) is around 15°C (288K). Average temperature is a problematic value because radiation is not linearly dependent on temperature – it is dependent on the 4th power of temperature. See The Dull Case of Emissivity and Average Temperatures for an example of the problems in using “average temperature”.
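A quick sketch of why averaging temperature first gives the wrong answer; the two temperatures here are made-up extremes for illustration, not climate data:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m²K⁴

def flux(temp_k):
    return SIGMA * temp_k ** 4  # blackbody flux, W/m²

temps = [250.0, 320.0]  # hypothetical cold and hot regions
avg_of_flux = sum(flux(t) for t in temps) / len(temps)
flux_of_avg = flux(sum(temps) / len(temps))  # flux at the 285K average

# Because of the T⁴ dependence, the two results differ:
print(round(avg_of_flux), round(flux_of_avg))  # 408 374
```

Averaging the fluxes gives a higher value than taking the flux of the average temperature – the hotter region contributes disproportionately.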

Here is an example of measurement of upward surface radiation:

Upward and downward radiation measurements, EBEX 2000, Kohsiek (2007)


The line with the x’s is the measured surface upward radiation.

Here is the actual temperature:

Temperature for 14 August 2000, from Wim Kohsiek, private communication


And calculated emitted radiation:

Calculated (theoretical) upward radiation, 14 August 2000


Note how it matches the measured value. You can see this in more detail in The Amazing Case of “Back Radiation” – Part Three.

The theory about emitted radiation

E = εσT⁴

– is a solid theory, backed up over the last 150 years.

If we calculate the average radiation from the surface, globally annually averaged, we get a value around 390 W/m².

If we calculate the total surface radiation over one year, we get E_surf = 6.2 x 10²⁴ J.

The Inappropriately-Named “Greenhouse” Effect

The surface radiates around 390 W/m². The climate system radiates around 239 W/m² to space:

How does this happen?

As I found with previous articles, many people’s instinctive response is “you’ve made a mistake”.

Usually those that just aren’t happy with this diagram solve the “dissonance” by concluding that there is something wrong with the averaging, or Stefan-Boltzmann’s law, or the measurement of emissivity around the planet.

Here’s the total energy for one year radiated from top of atmosphere and from the surface:

Remember that the top of atmosphere number is measured. Remember that the surface radiation is calculated, and relies on measurements of temperature, the material property called emissivity and an equation backed up by 150 years of experimental work across many fields.

This effect which we see has come, inappropriately, to be called the “greenhouse” effect. We could convert the effect to a temperature but there are more important things to move onto.

Before examining how this amazing effect takes place and what happens to all this energy – “Does it just pile up and eventually explode, no – so obviously you made a mistake”, and so on – I’ll leave one thought for interested students..

We have looked at the average radiation from surface and top of atmosphere (and also totaled that up).

Instead, we could take a look at some individual ocean locations where the temperature is well known. We have the CERES monthly averages on a 1° x 1° grid above.

Take a few ocean locations and find the average temperatures for each month.

Then calculate the surface radiation using the known emissivity of 0.99. Compare that to the top of atmosphere radiation from the CERES charts at the start of the article. Also calculate what value of ocean emissivity would actually be needed for surface radiation to equal the top of atmosphere radiation (so as to make the “greenhouse” effect disappear). Please report back in the comments.

The reason I chose the ocean for this exercise is because the emissivity is well known and measured so many times, because ocean surfaces don’t change temperature very much from day to night (because of the high heat capacity of water) and because oceans cover 71% of the earth’s surface. If ocean data verifies the “greenhouse” effect to you, then it’s pretty hard to find emissivity values of other surface types that would make the “greenhouse” effect disappear.
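As a worked illustration of the exercise, here is a sketch with a made-up tropical ocean temperature and TOA value (not real CERES data – use the charts above for the real thing):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m²K⁴

def surface_flux(temp_k, emissivity=0.99):
    return emissivity * SIGMA * temp_k ** 4

# Hypothetical tropical ocean point: surface at 300K, TOA OLR of 280 W/m²
t_surface, toa_olr = 300.0, 280.0
print(round(surface_flux(t_surface)))  # surface emission ≈ 455 W/m²

# Emissivity that would be needed for surface emission to equal the TOA value:
print(round(toa_olr / (SIGMA * t_surface ** 4), 2))  # ≈ 0.61
```

With these made-up numbers the surface emits far more than leaves at the top of atmosphere, and the emissivity needed to erase the difference is far below anything ever measured for water.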

Interaction of Matter with a Radiation Field

Huh? Let’s choose a different heading..

What Happens to Radiation as it Travels Through the Atmosphere

If the various gases in the atmosphere were transparent to longwave radiation (remember this is the radiation emitted by the earth and climate system), the surface radiation would not change on its journey to the top of atmosphere. See The Hoover Incident for more on this and the consequences.

Instead at each height in the atmosphere there is absorption of some radiation. The detail gets pretty complicated because each gas absorbs at very selective wavelengths (see note 2).

The very fact that radiation can be absorbed by gases shows that you shouldn’t expect the radiation going into a layer of atmosphere to be the same value when it emerges the other side. Here’s a simple diagram (which also can be found in Theory and Experiment – Atmospheric Radiation):

If a proportion of the upward radiation is absorbed by the atmosphere so that less radiation emerged than entered (the red text and arrows) then isn’t this a first law of thermodynamics problem?

Well, being specific:

Energy In – Energy Out = Energy Retained in Heating the Layer

So if the temperature of that layer was not increasing or decreasing then:

Energy in = Energy out.

So surely, absorption of radiation with no continuous heating is a problem for the first law of thermodynamics?

Of course, energy transfer can also take place via convection. So it is theoretically possible that energy could be absorbed as radiation and leave via convection. But that isn’t really possible all through the climate as convection would need to transfer energy from high up in the atmosphere to the surface, whereas in general, convection transfers energy in the other direction – from the surface to higher up in the atmosphere.

So what happens and how does the first law of thermodynamics stay intact?

Very simple – every layer of atmosphere also radiates energy. This is shown as blue text and arrows in the diagram.

Each layer in the atmosphere does obey the first law of thermodynamics. But by the time we reach the top of atmosphere the upwards radiation has been significantly reduced – on average from 390 W/m² to 239 W/m².
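The bookkeeping for a single layer can be sketched like this – a toy model with an arbitrary absorptivity, not a radiative transfer calculation:

```python
def layer_balance(incoming_up, absorptivity):
    """One layer in equilibrium: energy in = energy out, with the
    absorbed energy re-emitted half upward, half downward."""
    absorbed = absorptivity * incoming_up
    transmitted = incoming_up - absorbed
    emitted_up = emitted_down = absorbed / 2
    return transmitted + emitted_up, emitted_down

up, down = layer_balance(390.0, 0.4)  # 390 W/m² entering, 40% absorbed
print(round(up), round(down))  # 312 78: less emerges upward, the rest goes down
```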

Each layer in the atmosphere absorbs radiation from below (and above). The gases that absorb the energy share this energy via collisions with other gases (thermalization), so that all of the different gases are at the same temperature.

And the radiatively-active gases (like water vapor and CO2) then radiate energy in all directions.

This last point is the key point. If the radiation was (somehow magically) only upwards then the “greenhouse” effect would not occur.

Digression – Up & Down or All Around?

You will often see explanations with “the layer then radiates both up and down” – and I think I have used this expression myself. Some people then respond:

Doesn’t it radiate in all directions? Looks like another climate science over-simplification..

This is a good point. Radiation from the atmosphere does go in all directions not just up and down.

In the radiative transfer equations this is taken into account. The simplified explanation just makes for an easier to understand point for beginners. See Vanishing Nets under Diffusivity Approximation for more about the calculation.

End of digression.

Radiation Through the Atmosphere

Solar radiation is mostly absorbed by the earth’s surface (because the atmosphere is mostly transparent to solar radiation). This heats the surface, which radiates upward. The typical radiation from the earth’s surface at 15°C measured just above it looks something like this:

The atmosphere absorbs this longwave radiation and consequently radiates in all directions. This is why, when we view the spectrum of the upward radiation at the top of atmosphere we see something like this:

From Atmospheric Radiation, Goody (1989)


Note the reversal of the x-axis direction.

The “missing bits” in the curve are the wavelengths where the radiatively active gases have absorbed and re-radiated. Some of the radiation is downward, which explains where the “missing radiation” goes.

At the surface we can measure this downward radiation from the atmosphere. See The Amazing Case of “Back Radiation” -Part One and the following two parts for more discussion of this.

But – as already stated – at each height in the atmosphere, energy fluxes are balanced:

Energy in = Energy out

Or – the difference between energy in and energy out results in increasing or decreasing temperature.

If you like, think of the atmosphere as a partial mirror reflecting a proportion of the radiation at a number of layers up through the atmosphere. It’s a mental picture that might help even though what actually happens is somewhat different.
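The "partial mirror" picture can be sketched as a toy model: each layer absorbs a fraction of the upward radiation and re-emits half of what it absorbed back upward. The absorptivities below are arbitrary illustration values – the real calculation is the radiative transfer equations:

```python
def through_layers(surface_up, absorptivities):
    """Pass upward radiation through successive layers, each absorbing a
    fraction and sending half of the absorbed energy back upward."""
    up = surface_up
    for a in absorptivities:
        absorbed = a * up
        up = (up - absorbed) + absorbed / 2
    return up

# Three toy layers, each absorbing 30% of what reaches it:
print(round(through_layers(390.0, [0.3, 0.3, 0.3]), 1))  # ≈ 239.5 W/m²
```

Each layer obeys energy in = energy out, yet the upward flux emerging at the top is well below the surface value.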

See also Do Trenberth and Kiehl understand the First Law of Thermodynamics? Part Three – The Creation of Energy?

Convection

No explanation of radiation would be complete without people saying that this argument is falsified by the fact that convection hasn’t been discussed. Just to forestall that: Convection moves heat from the surface up into the atmosphere very effectively and cools the surface compared with the case if convection didn’t occur.

But – emission and absorption of radiation still takes place. Convection doesn’t change the absorption of radiation (unless it changes the concentration of various gases). But convection, by changing the temperature profile, does change emission.

As we will see in Part Two, absorption is a function of concentration of each gas; while emission is a function of concentration of each gas plus the temperature of that portion of the atmosphere.

Conclusion

The atmosphere interacts with the radiation from the surface and that’s why the surface radiation has been reduced by the time it leaves the climate system.

The satellites measure the value at the top of atmosphere very comprehensively.

For those convinced that there is no “greenhouse” effect, I recommend focusing on the emissivity measurements used in the calculation of emission from the surface.

The ocean has been measured at 0.98-0.99 and covers 71% of the surface of the earth but perhaps the average surface emissivity at terrestrial temperatures is only 0.61.. A measurement snafu..

In the next part we will consider in more detail how the different effects cause changes in the OLR.

Other articles:

Part Two – introducing a simple model, with molecules pH2O and pCO2 to demonstrate some basic effects in the atmosphere. This part – absorption only

Part Three – the simple model extended to emission and absorption, showing what a difference an emitting atmosphere makes. Also very easy to see that the “IPCC logarithmic graph” is not at odds with the Beer-Lambert law.

Part Four – the effect of changing lapse rates (atmospheric temperature profile) and of overlapping the pH2O and pCO2 bands. Why surface radiation is not a mirror image of top of atmosphere radiation.

Part Five – a bit of a wrap up so far as well as an explanation of how the stratospheric temperature profile can affect “saturation”

Part Six – The Equations – the equations of radiative transfer including the plane parallel assumption and it’s nothing to do with blackbodies

Part Seven – changing the shape of the pCO2 band to see how it affects “saturation” – the wings of the band pick up the slack, in a manner of speaking

And Also –

Theory and Experiment – Atmospheric Radiation – real values of total flux and spectra compared with the theory.


References

An analysis of the dependence of clear-sky top-of-atmosphere outgoing longwave radiation on atmospheric temperature and water vapor, Dessler et al, Journal of Geophysical Research (2008).

Notes

Note 1: Radiation from a surface at 15°C (288K) will have a peak radiation at 10μm with radiation following the Planck curve. The average emissivity for 288K needs to be the wavelength-dependent emissivity weighted appropriately for the corresponding Planck curve. This will be very similar for the emissivity for the same surface type at 300K or 270K but is likely be totally different for the emissivity for the surface at 3000K – not a situation we find on earth.

Note 2: The most common gases in the atmosphere, nitrogen and oxygen, don’t interact with longwave radiation. They don’t absorb or emit – at least, any interaction is many orders of magnitude lower than the various trace gases like water vapor, CO2, methane, N2O, etc. This is after taking into account their much higher concentration. See CO2 – An Insignificant Trace Gas? Part Two


In New Theory Proves AGW Wrong! I said:

So, if New Theory Proves AGW Wrong is an exciting subject, you will continue to enjoy the subject for many years, because I’m sure there will be many more papers from physicists “proving” the theory wrong.

However, it’s likely that if they are papers “falsifying” the foundational “greenhouse” gas effect – or radiative-convective model of the atmosphere – then probably each paper will also contradict the ones that came before and the ones that follow after.

I noticed on another blog an article lauding the work of a physicist who reaches some different conclusions about the role of CO2 and other trace gases in the atmosphere.

This has clearly made a lot of people happy which is wonderful. However, if you want to understand the science of the subject, read on.

One of the areas that many people are confused by is the distinction between GCMs and the radiative transfer equations. Well, strictly speaking almost everyone who is confused about the distinction doesn’t know what the radiative transfer equations are.

So I should say:

Many people are confused about the distinction between GCMs and the effect of CO2 in the atmosphere

They are quite different. The role of CO2 and other trace gases is a component of GCMs.

Digression – As an analogy with less emotive power we could consider the subject of ocean circulation. Now it’s easy to prove theoretically that more dense water sinks and less dense water rises. We can do 100’s of experiments in tanks that prove this. Now if the models that calculate the whole ocean circulation don’t quite get the right answers one reason might be that the theory of buoyancy is a huge mistake.

But there could be other reasons as well. For example, flaws in equations for the amount of momentum transferred from the winds to the ocean, knowledge of the salinity throughout the ocean, knowledge of the variation in eddy diffusivity and tens – or hundreds – of other reasons. All we need to do to confirm buoyancy is to go back to our tank experiments.. End of digression.

Happily there is plenty of detailed experimental work to back up “standard theory” about CO2 and therefore prove “new theories” wrong.

Richard M. Goody

RM Goody was the doctoral advisor to Richard Lindzen. He wrote the classic work Atmospheric Radiation: Theoretical Basis (1964). I have the 2nd edition, co-authored with Y.L. Yung, from 1989.

Here are measured vs theoretical spectra at the top of atmosphere. Note that the spectra are displaced for easier comparison:

From Atmospheric Radiation, Goody (1989)


This extract makes it easier to see the magnitude of any differences:

From Atmospheric Radiation, Goody (1989)


Goody & Yung comment:

The agreement between theory and observation in Figs 6.1 and 6.2 is generally within about 10%. It is surprising, at first sight, that it is not better. Uncertainties in the spectroscopic data are partially responsible, but it is difficult to assign all the errors to this source. Local variations in temperature and departures from a strictly stratified atmosphere must also contribute.

The radiosonde data used may not correctly apply to the path of the radiation. The atmospheric temperatures could be adjusted slightly to give better agreement..

How was the theoretical calculation done? By solving this equation, which looks a little daunting, but I will explain it in simple terms:

Before we look in a little detail about the radiative transfer equations, it is important to understand that to calculate the interaction of the atmosphere and radiation, there are two parameters which are required:

  • the quantity of radiatively-active gases (like CO2 and water vapor) vertically through the atmosphere (affects absorption)
  • the temperature profile vertically through the atmosphere (affects emission)

If we have that data, the equation above can be solved to produce a spectrum like the one shown. The uncertainty in the data generates uncertainty in the results.

Given the closeness of the match, if a “new theory” comes along and produces very different results then there are two things that we would expect:

  • demonstrating the improvement in experimental/theoretical match
  • explaining why the existing theory is wrong OR under what specific circumstances the new theory does a better job

When you don’t see either of these you can be reasonably sure that the “new theory” isn’t worth spending too much time on.

Of course, the result from the great RM Goody could be a fluke, or he could have just made the whole thing up. Better to consider this possibility – after all, if a random person has produced a 27-page document with lots of equations it is very likely that this new person is correct, so long as they support your point of view..

Dessler, Yang, Lee, Solbrig, Zhang and Minschwaner

In their paper, An analysis of the dependence of clear-sky top-of-atmosphere outgoing longwave radiation on atmospheric temperature and water vapor, the authors provide a comparison of the measured results from CERES with the solution of the radiative transfer equations (using a particular band model, see note 1):

 

From Dessler et al (2008)


 

The authors say:

First, we compare the OLR measurements to OLR calculated from two radiative transfer models. The models use as input simultaneous and collocated measurements of atmospheric temperature and atmospheric water vapor made by the Atmospheric Infrared Sounder (AIRS). We find excellent agreement between the models’ predictions of OLR and observations, well within the uncertainty of the measurements.

Notice the important point that to calculate the OLR (outgoing longwave radiation) measurements at the top of atmosphere we need atmospheric temperature and water vapor concentration (CO2 is well-mixed in the atmosphere so we can assume the values of CO2).

For interest:

The uncertainty of an individual top-of-atmosphere OLR measurement is 5 W/m², while the uncertainty of average OLR over a 1°-latitude x 1°-longitude box, which contains many viewing angles, is ≈1.5 W/m²

The primary purpose of this paper wasn’t to demonstrate the correctness of the radiative transfer equations – these are beyond dispute – but was first to demonstrate the accuracy of a particular band model, and second, to use that result to demonstrate the relationship between the surface temperature, humidity and OLR measurement.

So we have detailed spectral calculations matching standard theory as well as 100,000 flux measurements matching theory – at the top of atmosphere.

What about at the ground?

Walden, Warren and Murcray

In Measurements of the downward longwave radiation spectrum over the Antarctic plateau and comparisons with a line-by-line radiative transfer model for clear skies, the authors compare measured spectra at the ground with the theoretical results:

 

Antarctica - Walden (1998)


 

As you can see, a close match across all measured wavelengths.

I don’t remember seeing a paper which compares large numbers of DLR (downward longwave radiation) measurements vs theory (there probably are some), but I hope I have done enough to demonstrate that people with new theories have a mountain to climb if they want to prove the standard theory wrong.

Whether or not GCMs can predict the future or even model the past is a totally different question from Do we understand the physics of radiation transfer through the atmosphere? The answer to this last question is “yes”.

The Standard Approach – Theory

Understanding the theory of radiative transfer is quite daunting without a maths background, and as many readers don’t want to see lots of equations I will try and describe the approach non-mathematically. There is some simple maths for this subject in CO2 – An Insignificant Trace Gas? Part Three.

Consider a “monochromatic” beam of radiation travelling up through a thin layer of atmosphere:

Monochromatic means “at one wavelength”.

The light entering the layer at the bottom will be partly absorbed by the gas, dependent on the presence of any absorbers at that wavelength. The actual calculation of the amount of absorption is simple. The attenuation that results is in proportion to the intensity of radiation and in proportion to the amount of absorbers and a parameter called “capture cross section”. This last parameter relates to the effectiveness of the particular gas in absorbing that wavelength of radiation – and is measured in a spectroscopy lab.

There are complications in that the capture cross section of a gas is also dependent on pressure and temperature – and pressure varies by a factor of five from the surface to the tropopause. This just makes the calculation more tedious, it doesn’t present any major obstacles to carrying out the calculation.

That means we can calculate the intensity of radiation at that wavelength emerging from the other side of the slab of atmosphere. Or does it?

No, the problem is not complete. If a gas can absorb at a wavelength it will also radiate at that same wavelength.

Energy from radiation absorbed by the gas is shared thermally with all other gas molecules (except high up in the atmosphere where the pressure is very low) and so all radiatively-active gases will emit radiation. However, at the wavelength we are considering, only specific gases will radiate.

So the calculation for the radiation leaving the slab of atmosphere is also dependent on the temperature of the gas and its ability to radiate at that wavelength.

To complete the calculation we need to carry it out across all wavelengths (“integrate” across all wavelengths).

That calculation is then complete for the thin slab of atmosphere. So finally we need to “integrate” this calculation vertically through the atmosphere.

If you read back through the explanation, as it becomes clearer you will see that you need to know the quantity of CO2, water vapor and other trace gases at each height. And that you need to know the temperature at each height in the atmosphere.
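The layer-by-layer march described above can be sketched numerically. This is a minimal monochromatic version of the calculation, with made-up Planck intensities and optical depths standing in for real atmospheric data:

```python
import math

def intensity_upward(i_surface, planck_by_layer, dtau_by_layer):
    """March a monochromatic intensity up through thin layers:
    each layer transmits exp(-dτ) of what enters, and adds its own
    thermal emission B·(1 - exp(-dτ))."""
    i = i_surface
    for b, dtau in zip(planck_by_layer, dtau_by_layer):
        transmission = math.exp(-dtau)
        i = i * transmission + b * (1 - transmission)
    return i

# Hypothetical: warm surface below three successively cooler layers
result = intensity_upward(10.0, [8.0, 6.0, 4.0], [0.5, 0.5, 0.5])
print(round(result, 2))  # emerges lower than the surface value of 10
```

Because the layers are cooler than the surface, they emit less than they absorb at this wavelength, and the intensity emerging at the top is below the surface value – the essence of the top-of-atmosphere spectra shown earlier.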

Now it’s not a calculation you can do in your head, or on a pocket calculator. Which is why the many people writing poetry on this subject are usually wrong. If someone reaches a conclusion and it isn’t based on solving the equations shown above in the RM Goody section then it’s not reliable. And, therefore, poetry.

The Standard Approach – Doubling CO2

Armed with the knowledge of how to calculate the interaction of the atmosphere with radiation, how do we approach the question of the effect of doubling CO2?

In the past many people had slightly different approaches, so usually it is prepared in a standard way – explained further in CO2 – An Insignificant Trace Gas? Part Seven – The Boring Numbers.

The most important point to understand is that the atmosphere and surface are heated by the sun via radiation, and they cool to space via radiation. While all of the components of the climate are inter-related, the fundamental consideration is that if cooling to space reduces then the climate will heat up (assuming constant solar radiation). Which part of the climate, at what speed, in what order? These are all important questions but first understand that if the climate system radiates less energy to space then the climate system will heat up. See The Earth’s Energy Budget – Part Two.

Therefore, the usual calculation of the effect of doubling CO2 – prior to any feedbacks – assumes that the same temperature profile exists vertically through the atmosphere, along with the same concentration of water vapor. The question is then:

How much does the surface temperature have to increase to allow the same amount of radiation to be emitted to space?

See The Earth’s Energy Budget – Part Three for an explanation about why more CO2 means less radiation emitted to space initially.

The end result is that – without feedbacks – the surface will increase in temperature about 1°C to allow the same amount of radiation to space (compared with the case before CO2 was doubled).

The calculation relies on solving the radiative transfer equations as explained in words above, and shown mathematically in the extract from Goody’s book.

The “New Theory”

For reasons already explained, if someone has a new theory that gets a completely different result for the effect of more CO2, then we would expect them to explain where everyone else went wrong.

There is no sign of that in this paper.

For interested readers, I provide a few comments on the paper. The author is described as “John Nicol, Professor Emeritus of Physics, James Cook University, Australia”. Perhaps modesty prevents him mentioning the professorship in his own bio – in any case, he probably knows a lot of physics – as do the many professors of physics who have studied radiation in the atmosphere for many decades and written the books and papers on the subject..

In any case, on this blog, we weigh up ideas and evidence rather than resumés..

Here is his conclusion:

The findings clearly show that any gas with an absorption line or band lying within the spectral range of the radiation field from the warmed earth, will be capable of contributing towards raising the temperature of the earth. However, it is equally clear that after reaching a fixed threshold of so-called Greenhouse gas density, which is much lower than that currently found in the atmosphere, there will be no further increase in temperature from this source, no matter how large the increase in the atmospheric density of such gases.

So he understands the inappropriately-named “greenhouse” effect in basic terms but effectively claims that the effect of CO2 is “saturated”.

The paper’s advocate claimed:

..closely argued, mathematical and physical analysis of how energy is transmitted from the surface through the atmosphere, answers all questions..

– however, the paper is anything but.

There are some equations:

  • Planck’s law of blackbody radiation (p3)
  • Stefan-Boltzmann’s law of total radiation (p2)
  • Wien’s law of peak radiation (p3)
  • spectral line width due to natural broadening, doppler broadening and collision broadening (p7 & p8)
  • density changes vs height in the atmosphere (p6)

These are all standard equations and it is not at all clear what equations are solved to demonstrate his conclusion.

He derives the expression for absorption of radiation (often known as Beer’s law – see CO2 – An Insignificant Trace Gas? Part Three). But most importantly, there is no equation for emission of radiation by the atmosphere. Emission of radiation is discussed, but whether or not it is included in his calculation is hard to determine.
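To show what an absorption-only picture looks like, here is a minimal sketch of Beer's law; the absorption coefficient, density and path length are hypothetical, chosen only to illustrate the exponential decay:

```python
import math

def transmitted_fraction(k, rho, s):
    """Beer's law for absorption only: I/I0 = exp(-k.rho.s).
    There is no emission term - which is exactly what is missing
    if an atmosphere is treated as a pure absorber."""
    return math.exp(-k * rho * s)

# Hypothetical values for illustration only
tau1 = transmitted_fraction(0.1, 1.0, 10.0)  # optical depth 1
tau2 = transmitted_fraction(0.1, 2.0, 10.0)  # absorber density doubled
print(tau1, tau2)  # ~0.368 and ~0.135
```

Doubling the absorber reduces transmission, but a real calculation must also add in the emission from each layer of the atmosphere.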

Many of the sections in his paper are what you would find in a basic textbook (although line width equations would be in a more advanced textbook).

There are typos like the distance from the earth to the sun – which is not 1.5M km (p3). This doesn’t affect any conclusion, but shows that basic checking has not been done.

There are confusing elements. For example, the blackbody radiation curve (fig 1) for a 289K body, expressed against frequency. The frequency of peak radiation actually matches a wavelength of 17.6 μm, not 10 μm. (Peak frequency, ν = 1.7×10¹³ Hz, λ = c/ν = 3×10⁸/1.7×10¹³ = 17.6 μm. This corresponds to a temperature of 2.898×10⁻³/λ = 165 K).
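The arithmetic behind that check (the 1.7×10¹³ Hz peak is read from the paper's figure):

```python
c = 2.998e8    # speed of light, m/s
b = 2.898e-3   # Wien's displacement constant (wavelength form), m.K

nu_peak = 1.7e13      # Hz - peak of a 289K curve plotted against frequency
lam = c / nu_peak     # wavelength corresponding to that frequency
T_implied = b / lam   # temperature whose wavelength-peak would sit there
print(lam * 1e6, T_implied)  # ~17.6 um and ~165 K
```

Note the subtlety: the peak of the Planck curve plotted against frequency does not sit at the same place as the peak plotted against wavelength, which is the likely source of the confusion.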

And comments like this suggest some flawed thinking about the subject of radiative transfer:

The black inverted curve shows the fraction of radiation emitted at each frequency which escapes from the top of the troposphere at a height of 10 km and thus represents the proportion of the energy which could be additionally captured by an increase of CO2 and so contribute to the further warming of air in the various layers of the troposphere. It thus represents the effective absorption spectrum of CO2 within the range of frequencies shown after accounting for collisional line broadening which provides a reduced but significant level of absorption even in the very far wings of the line which is represented in Figure 3 on page 6.

Why flawed? Because the radiation emitted from the top of the troposphere is made up of two components:

  • surface radiation which is transmitted through the atmosphere
  • radiation emitted by the atmosphere at different heights which is transmitted through the atmosphere

Because other parts of the paper discuss emission by the atmosphere it is hard to determine whether or not it is ignored in his calculations, or whether the paper fails to convey the author’s approach.

One interesting comment is made towards the end of the paper:

The calculations show that doubling the level of CO2 leads to an escape of only 0.75 %, a difference of 1.8 %.  Thus, in this example where the chosen value of the broadening used is significantly less than the actual case in the atmosphere, an additional 6 Watts, from the original 396 Watts, would be retained in the 10 km column within the troposphere, when the density of carbon dioxide is doubled.

Now when we consider the effect of doubling CO2 the question is what is the “radiative forcing” – the change in top of atmosphere flux. The standard result is 3.7 W/m². (This is what leads to the calculation of 1°C surface temperature change prior to feedback).
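The often-quoted 1°C comes from linearizing the Stefan-Boltzmann law around the effective emission temperature of the planet. A back-of-envelope sketch:

```python
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2.K^4
Te = 255.0        # effective emission temperature of the planet, K
dF = 3.7          # radiative forcing for doubled CO2, W/m^2

# F = sigma*T^4, so for a small change: dF = 4*sigma*Te^3 * dT
dT = dF / (4 * sigma * Te**3)
print(round(dT, 2))  # ~1.0 K before any feedbacks
```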

It appears (but I can’t be certain) that Dr. Nicol thinks the radiative forcing for doubling CO2 is even higher than the values calculated in the many papers used in the IPCC report. From his calculations he reports that 6 W/m² would be retained.

On a technical note, although radiative forcing has a precise definition, it isn’t clear what exactly Dr. Nicol means by his value of “retained radiation”.

However, it does appear to conflict with his conclusion (the extract quoted at the beginning of this section).

There are many other areas of confusion in his paper. The focus appears to be on the surface forcing from changes in CO2 rather than changes in the energy balance for the whole climate system. There is a section (fig 6, page 21) which examines how much terrestrial radiation is absorbed in the first 50m of the atmosphere by the CO2 band at current and higher concentrations.

What would be more interesting is to see what changes occur in the top of atmosphere forcing from these changes, for example:

 

Longwave radiative forcing from increases in various "greenhouse" gases


 

This graph is from W.D. Collins (2006) – see CO2 – An Insignificant Trace Gas? – Part Eight – Saturation.

Note the blue curve. This graph makes clear the calculated forcing vs wavelength. By contrast Dr. Nicol’s paper doesn’t really make clear what surface forcing is considered – how far out into the “wings” of the CO2 band is considered, or what result will occur at the surface for any top of atmosphere changes.

It is almost as if he is totally unaware of the work done on this problem since the 1960s.

It is also possible that I have misunderstood what he is trying to demonstrate or what he has demonstrated. Hopefully someone, perhaps even Dr. Nicol, can explain if that is the case.

Conclusion

Calculations of radiation through the atmosphere do require consideration of absorption AND emission. The formal radiative transfer equations for the atmosphere are not innovative or in question – they are in all the textbooks and well-known to scientists in the field.
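To make the absorption-AND-emission point concrete, here is a minimal grey-atmosphere sketch of Schwarzschild's equation, stepping radiance through layers of assumed optical depth and temperature. All values are hypothetical, and a grey source function stands in for a real line-by-line calculation:

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2.K^4

def grey_source(T):
    """Grey-body source function: total emission sigma*T^4 spread over pi steradians."""
    return SIGMA * T**4 / math.pi

def radiance_through_layers(I0, layers):
    """Schwarzschild's equation integrated layer by layer:
    I_out = I_in * exp(-dtau) + B(T) * (1 - exp(-dtau))
    i.e. each layer absorbs AND emits."""
    I = I0
    for dtau, T in layers:
        t = math.exp(-dtau)
        I = I * t + grey_source(T) * (1.0 - t)
    return I

I_surface = grey_source(288.0)                  # radiance from a 288K surface
layers = [(0.5, 270.0), (0.5, 240.0)]           # hypothetical optical depths & temperatures
I_toa = radiance_through_layers(I_surface, layers)
I_absorption_only = I_surface * math.exp(-1.0)  # what you get if emission is ignored
print(I_toa, I_absorption_only)
```

The absorption-only value is far too low: emission from the (cooler) atmospheric layers partially offsets the absorption, which is why outgoing radiation at the top of atmosphere cannot be calculated from Beer's law alone.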

Experimental results closely match theory – both in total flux values and in spectral analysis. This demonstrates that radiative transfer is correctly explained by the standard theory.

New and innovative approaches to the subject are to be welcomed. However, just because someone with a physics degree, or a doctorate in physics, produces lots of equations and writes a conclusion doesn’t mean they have overturned standard theory.

New approaches need to demonstrate exactly what is wrong with the standard approach as found in all the textbooks and formative papers on this subject. They also need to explain, if they reach different conclusions, why the existing solutions match the results so closely.

Dr. Nicol’s paper doesn’t explain what’s wrong with existing theory and it is almost as if he is unaware of it.

References

Atmospheric Radiation: Theoretical Basis, Goody & Yung, Oxford University Press (2nd ed. 1989)

An analysis of the dependence of clear-sky top-of-atmosphere outgoing longwave radiation on atmospheric temperature and water vapor, by Dessler et al, Journal of Geophysical Research (2008)

Measurements of the downward longwave radiation spectrum over the Antarctic plateau and comparisons with a line-by-line radiative transfer model for clear skies, Walden et al, Journal of Geophysical Research (1998)

Notes

Note 1 – A “band model” is a mathematical expression which simplifies the complexity of the line by line (LBL) solution of the radiative transfer equations. Instead of having to look up a value at every wavelength, the band model uses an expression which is computationally much quicker.

Read Full Post »

This post covers some foundations which are often misunderstood.

Radiation emitted from a surface (or a gas) can go in all directions and also varies with wavelength, and so we start with a concept called spectral intensity.

This value has units of W/m².sr.μm, which in plainer language means Watts (energy per unit time) per square meter per solid angle per unit of wavelength. (“sr” in the units stands for “steradian”).

Most people are familiar with W/m² – and spectral intensity simply “narrows it down” further to the amount of energy in a direction and in a small bandwidth.

We’ll consider a planar opaque surface emitting radiation, as in the diagram below.

 

Hemispherical Radiation, Incropera and DeWitt (2007)


 

The total hemispherical emissive power, E, is the rate at which radiation is emitted per unit area at all possible wavelengths and in all possible directions. E has the more familiar units of W/m².

Most non-metals are “diffuse emitters” which means that the intensity doesn’t vary with the direction.

For a planar diffuse surface – if we integrate the spectral intensity over all directions we find that emissive power per μm is equal to π (pi) times the spectral intensity.

This result relies only on simple geometry, but doesn’t seem very useful until we can find out the value of spectral intensity. For that, we need Max Planck..
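The geometry can be checked numerically: integrate a constant (diffuse) intensity weighted by cos θ over the hemisphere and the result is π times the intensity. A quick sketch:

```python
import math

def hemispherical_emissive_power(I, n=2000):
    """E = integral over the hemisphere of I*cos(theta) dOmega,
    with dOmega = sin(theta) dtheta dphi. For diffuse (constant) I
    the phi integral is just 2*pi; theta is done by midpoint rule."""
    dtheta = (math.pi / 2) / n
    theta_integral = sum(
        math.cos((i + 0.5) * dtheta) * math.sin((i + 0.5) * dtheta) * dtheta
        for i in range(n)
    )
    return I * 2 * math.pi * theta_integral

E = hemispherical_emissive_power(1.0)
print(E)  # ~pi, i.e. E = pi * I for a diffuse emitter
```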

Planck

Most people have heard of Max Planck, Nobel prize winner in 1918. He derived the following equation (which looks a little daunting) for the spectral intensity of a blackbody:

Iλ(λ,T) = 2hc0² / { λ⁵ [ exp(hc0/λkT) − 1 ] }

Spectral Intensity, Max Planck

where T = absolute temperature (K); λ = wavelength; h = Planck’s constant = 6.626 × 10⁻³⁴ J.s; k = Boltzmann’s constant = 1.381 × 10⁻²³ J/K; c0 = the speed of light in a vacuum = 2.998 × 10⁸ m/s.

What this means is that radiation emitted is a function only of the temperature of the body and varies with wavelength. For example:

Note the rapid increase in radiation as temperature increases.
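A direct implementation of the Planck equation (using the constants given above) makes the temperature dependence easy to explore:

```python
import math

h = 6.626e-34   # Planck's constant, J.s
k = 1.381e-23   # Boltzmann's constant, J/K
c0 = 2.998e8    # speed of light in a vacuum, m/s

def planck_spectral_intensity(lam, T):
    """Blackbody spectral intensity in W/m^2.sr.m (wavelength lam in meters)."""
    return (2 * h * c0**2 / lam**5) / (math.exp(h * c0 / (lam * k * T)) - 1.0)

# Intensity at 10 um for a 288K vs a 300K body
I288 = planck_spectral_intensity(10e-6, 288.0)
I300 = planck_spectral_intensity(10e-6, 300.0)
print(I300 / I288)  # the warmer body emits more at every wavelength
```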

What is a blackbody?

A blackbody:

  • absorbs all incident radiation, regardless of wavelength and direction
  • emits the maximum energy for any wavelength and temperature (i.e., a perfect emitter)
  • emits independently of direction

Think of the blackbody as simply “the reference point” with which other emitters/absorbers can be compared.

Stefan-Boltzmann

The Stefan-Boltzmann equation (for total emissive power) is “easily” derived by integrating the Planck equation across all wavelengths and using the geometrical relationship explained at the start (E=πI). The result is quite well known:

E = σT⁴

where σ = 5.67 × 10⁻⁸ and T is absolute temperature of the body.

The result above is for a blackbody. The material properties of a given body can be measured to calculate its emissivity, which is a value between 0 and 1, where 1 is a blackbody.

So a real body emits radiation according to the following formula:

E = εσT⁴

where ε is the emissivity. (See later section on emissivity and note 1).

Note that so long as the Planck equation is true, the Stefan-Boltzmann relationship inevitably follows. It is simply a calculation of the total energy radiated, as implied by the Planck equation.
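A quick numerical check of the two formulas for a typical surface temperature (the 0.98 emissivity is a hypothetical value for a real surface):

```python
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2.K^4

T = 288.0                      # 15C surface
E_blackbody = sigma * T**4
E_real = 0.98 * E_blackbody    # hypothetical emissivity of 0.98
print(E_blackbody, E_real)     # ~390 W/m^2 and ~382 W/m^2
```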

The Smallprint

The Planck law is true for radiant intensity into a vacuum and for a body in Local Thermodynamic Equilibrium (LTE).

So that means it can never be used in the real world

Or so many people who comment on blogs seem to think. Let’s take a closer look.

The Vacuum

The speed of light in a vacuum, c0 = 2.998 × 10⁸ m/s. This value appears in the Planck equation and so we need to cater for it when the emission of radiation is into air. The speed of light in air, cair = c0/n, where n is the refractive index of air = 1.0008.

Here’s a comparison of the Planck curves at 300K into air and a vacuum:

Not easy to separate. If we expand one part of the graph:

We can see that at the peak intensity the difference is around 0.3%.

The total emissive power into air:

E = n²σT⁴, where n is the refractive index of air

So the total energy radiated from a blackbody into air = 1.0016 x the total energy into a vacuum.

This is why it’s a perfectly valid assumption not to bother with this adjustment for radiation into air. In glass it’s a different proposition..
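The size of the correction is just n² (refractive index squared):

```python
n_air = 1.0008   # refractive index of air
factor = n_air**2
excess_percent = (factor - 1.0) * 100
print(factor, excess_percent)  # ~1.0016, i.e. about 0.16% more than into a vacuum
```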

Local Thermodynamic Equilibrium

The meaning, and requirement, of LTE (local thermodynamic equilibrium) is often misunderstood.

It does not mean that a body is at the same temperature as its surroundings. Or that a body is all at the same temperature (isothermal).

An explanation which might help illuminate the subject – from Thermal Radiation Heat Transfer, by Siegel & Howell, McGraw Hill (1981):

In a gas, the redistribution of absorbed energy occurs by various types of collisions between the atoms, molecules, electrons and ions that comprise the gas. Under most engineering conditions, this redistribution occurs quite rapidly, and the energy states of the gas will be populated in equilibrium distributions at any given locality. When this is true, the Planck spectral distribution correctly describes the emission from a blackbody..

Another definition, which might help some (and be obscure to others) is from Radiation and Climate, by Vardavas and Taylor, Oxford University Press (2007):

When collisions control the populations of the energy levels in a particular part of an atmosphere we have only local thermodynamic equilibrium, LTE, as the system is open to radiation loss. When collisions become infrequent then there is a decoupling between the radiation field and the thermodynamic state of the atmosphere and emission is determined by the radiation field itself, and we have no local thermodynamic equilibrium.

And an explanation about where LTE does not apply might help illuminate the subject, from Siegel & Howell:

Cases in which the LTE assumption breaks down are occasionally encountered.

Examples are in very rarefied gases, where the rate and/or effectiveness of interparticle collisions in redistributing absorbed radiant energy is low; when rapid transients exist so that the populations of energy states of the particles cannot adjust to new conditions during the transient; where very sharp gradients occur so that local conditions depend on particles that arrive from adjacent localities at widely different conditions and may emit before reaching equilibrium; and where extremely large radiative fluxes exist, so that absorption of energy and therefore populations of higher energy states occur so strongly that collisional processes cannot repopulate the lower states to an equilibrium density.

Now these LTE explanations are far removed from most people’s perceptions of what equilibrium means.

LTE is all about, in the vernacular:

Molecules banging into each other a lot so that normal energy states apply

And once this condition is met – which is almost always the case in the lower atmosphere – the Planck equation holds true. In the upper atmosphere this doesn’t hold true, because the density is so low. A subject for another time..

So much for Planck and Stefan-Boltzmann. But for real world surfaces (and gases) we need to know something about emissivity and absorptivity.

Emissivity, Absorptivity and Kirchhoff

There is an important relationship which is often derived. This relationship, Kirchhoff’s law, is that emissivity is equal to absorptivity, but comes with important provisos.

First, let’s explain what these two terms mean:

  • absorptivity is the proportion of incident radiation absorbed, and is a function of wavelength and direction; a blackbody has an absorptivity of 1 across all wavelengths and directions
  • emissivity is the proportion of radiation emitted compared with a blackbody, and is also a function of wavelength and direction

The provisos for Kirchhoff’s law are that the emissivity and absorptivity are equal only for a given wavelength and direction. Or in the case of diffuse surfaces, are true for wavelength only.

Now Kirchhoff’s law is easy to prove under very restrictive conditions. These conditions are:

  • thermodynamic equilibrium
  • isothermal enclosure

That is, the “thought experiment” which demonstrates the truth of Kirchhoff’s law is only true when there is a closed system with a body in equilibrium with its surroundings. Everything is at the same temperature and there is no heat exchanged with the outside world.

That’s quite a restrictive law! After all, it corresponds to no real world problem..

Here is how to think about Kirchhoff’s law.

The simple thought experiment demonstrates completely and absolutely that (under these restrictive conditions) emissivity = absorptivity (at a given wavelength and direction).

However, from experimental evidence we know that emissivity of a body is not affected by the incident radiation, or by any conditions of imbalance that occur between the body and its environment.

From experimental evidence we know that the absorptivity of a body is not affected by the amount of incident radiation, or by any imbalance between the body and its environment.

These results have been confirmed over 150 years.

As Siegel and Howell explain:

Thus the extension of Kirchhoff’s law to non-equilibrium systems is not a result of simple thermodynamic considerations. Rather it results from the physics of materials which allows them in most instances to maintain themselves in LTE and thus have their properties not depend on the surrounding radiation field.

The important point is that thermodynamics considerations allow us to see that absorptivity = emissivity (both as a function of wavelength), and experimental considerations allow us to extend the results to non-equilibrium conditions.

This is why Kirchhoff’s law is accepted in thermodynamics.

Operatic Considerations

The hilarious paper by Gerlich and Tscheuschner poured fuel on the confused world of the blogosphere by pointing out just a few pieces of the puzzle (and not the rest) to the uninformed.

They explained some restrictive considerations for Planck’s law, the Stefan-Boltzmann equation, and for Kirchhoff’s law, and implied that as a result – well, who knows? Nothing is true? Not much is true? Nothing can be true? I had another look at the paper today but really can’t disentangle their various claims.

For example, they claim that because the Stefan-Boltzmann equation is the integral of the Planck equation over all wavelengths and directions:

Many pseudo-explanations in the context of global climatology are already falsified by these three fundamental observations of mathematical physics.

Except they don’t explain which ones. So no one can falsify their claim. And also, people without the necessary background who read their paper would easily reach the conclusion that the Stefan-Boltzmann equation had some serious flaws.

All part of their entertaining approach to physics.

I mention their papertainment because many claims in the blog world have probably arisen through uninformed people reading bits of their paper and reproducing them.

Conclusion

The fundamentals of radiation are well-known and backed up by a century and a half of experiments. There is nothing controversial about Planck’s law, Stefan-Boltzmann’s law or Kirchhoff’s law.

Everyone working in the field of atmospheric physics understands the applicability and limits of their use (e.g., the upper atmosphere).

This is not cutting edge stuff, instead it is the staple of every textbook in the field of radiation and radiant heat transfer.

Notes

Note 1 – Because emissivity is a function of wavelength, and because emission of radiation at any given wavelength varies with temperature, average emissivity is only valid for a given temperature.

For example, at 6000K most of the radiation from a blackbody has a wavelength of less than 4μm; while at 200K most of the radiation from a blackbody has a wavelength greater than 4μm.

Clearly the emissivity for 6000K will not be valid for the emissivity of the same material at a temperature of 200K.
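This split can be checked by numerically integrating the Planck function over wavelength. A crude midpoint-rule sketch (the 0.05–200 μm integration range is arbitrary but wide enough for the fractions):

```python
import math

h, k, c = 6.626e-34, 1.381e-23, 2.998e8

def planck(lam, T):
    x = h * c / (lam * k * T)
    if x > 700:        # avoid overflow; intensity is effectively zero here
        return 0.0
    return (2 * h * c**5 / lam**5) / (math.exp(x) - 1.0) if False else \
           (2 * h * c**2 / lam**5) / (math.exp(x) - 1.0)

def fraction_below(lam_cut, T, lam_min=0.05e-6, lam_max=200e-6, n=20000):
    """Fraction of total blackbody emission at wavelengths shorter than lam_cut."""
    dlam = (lam_max - lam_min) / n
    below = total = 0.0
    for i in range(n):
        lam = lam_min + (i + 0.5) * dlam
        p = planck(lam, T) * dlam
        total += p
        if lam < lam_cut:
            below += p
    return below / total

frac_hot = fraction_below(4e-6, 6000.0)
frac_cold = fraction_below(4e-6, 200.0)
print(frac_hot, frac_cold)  # almost all of the 6000K emission, almost none at 200K
```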

Read Full Post »

In Part One we had a look at Ramanathan’s work (actually Raval and Ramanathan) attempting to measure the changes in outgoing longwave radiation vs surface temperature.

In Part Two (Part Zero perhaps) we looked at some basics on water vapor as well as some measurements. The subject of the non-linear effects of water vapor was raised.

Part One Responses attempted a fuller answer to various questions and objections about Part One

Water vapor feedback isn’t a simple subject.

First, a little more background.

Effectiveness of Water Vapor at Different Heights

Here are some model results of change in surface temperature for changes in specific humidity at different heights:

From Shine & Sinha (1991)


For newcomers, 200mbar is the top of the troposphere (lower atmosphere), and 1000mbar is the surface.

You can see that for a given increase in the mixing ratio of water vapor the most significant effect comes at the top of the troposphere.

The three temperatures: cool = 277K (4°C); average = 287K (14°C); and warm = 298K (23°C).

Now a similar calculation using changes in relative humidity:

From Shine & Sinha (1991)


The “average no continuum” curve shows the effect without the continuum portion of the water vapor absorption. This is the frequency range between 800-1200 cm⁻¹ (wavelength range 8-12 μm) – often known as the “atmospheric window”. This portion of the spectral range is important in studies of increasing water vapor, something we will return to in later articles.

Here we can see that in warmer climates the lower troposphere has more effect for changes in relative humidity. And for average and cooler climates, changes in relative humidity are still more important in the lower troposphere, but the upper troposphere does become more significant.

(This paper, by Shine & Sinha, appears to have been inspired by Lindzen’s 1990 paper where he talked about the importance of upper tropospheric water vapor among other subjects).

So clearly the total water vapor in a vertical section through the atmosphere isn’t going to tell us enough (see note 1). We also need to know the vertical distribution of water vapor.

Here is a slightly different perspective from Spencer and Braswell (1997):

Spencer and Braswell (1997)


This paper took a slightly different approach.

  • Shine & Sinha looked at a 10% change in relative humidity – so for example, from 20% to 22% (20% x 110%)
  • Spencer & Braswell said, let’s take a 10% change as 20% to 30% (20% + 10%)

This isn’t an argument about how to evaluate the effect of water vapor – just how to illustrate a point. Spencer & Braswell are highlighting the solid line in the right hand graph, and showing Shine & Sinha’s approach as the dashed line.

In the end, both will get the same result if the water vapor changes from 20% to 30% (for example).

Boundary Layers and Deep Convection

Here’s a conceptual schematic from Sun and Lindzen 1993:

The bottom layer is the boundary layer. Over the ocean the source of water vapor in this boundary layer is the ocean itself. Therefore, we would assume that the relative humidity would be high and the specific humidity (the amount of water vapor) would be strongly dependent on temperature (see Part Two).

Higher temperatures drive stronger convection which creates high cloud levels. This is often called “deep convection” in the literature. These convective towers are generally only a small percentage of the surface area. So over most of the tropics, air is subsiding.

Here is a handy visualization from Held & Soden (2000):

Held and Soden (2000)


The concept to be clear about is within the well-mixed boundary layer there is a strong connection between the surface temperature and the water vapor content. But above the boundary layer there is a disconnect. Why?

Because most of the air (by area) is subsiding (see note 2). This air has at one stage been convected high up in the atmosphere, has dried out and now is returning back to the surface.

Subsiding air in some parts of the tropics is extremely dry with a very low relative humidity. Remember the graphs in Part Two – air high up in the atmosphere can only hold 1/1,000th of the water vapor that can be held close to the surface. So air which is saturated when it is at the tropopause is – in relative terms – very dry when it returns to the surface.
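That “factor of 1,000” comes straight out of the Clausius-Clapeyron relation. A rough sketch, assuming constant latent heat and illustrative temperatures (the saturation mixing ratio also depends on pressure, which is ignored here):

```python
import math

def saturation_vapor_pressure(T):
    """Clausius-Clapeyron with constant latent heat: e0 = 611 Pa at 273.15 K,
    L/Rv ~ 5423 K for water vapor. Returns Pa."""
    return 611.0 * math.exp(5423.0 * (1.0 / 273.15 - 1.0 / T))

e_surface = saturation_vapor_pressure(288.0)  # warm near-surface air
e_upper = saturation_vapor_pressure(210.0)    # cold upper-troposphere air
print(e_surface / e_upper)  # roughly three orders of magnitude
```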

Therefore, the theoretical connection between surface temperature and specific humidity becomes a challenging one above the boundary layer.

And the idea that relative humidity is conserved is also challenged.

Relationship between Specific Humidity and Local Temperature

Sun and Oort (1995) analyzed the humidity and temperature in the tropics (30°S to 30°N) at a number of heights over a long time period:

Sun and Oort (1995)


Note that the four graphs represent four different heights (pressures) in the atmosphere. And note as well that the temperatures plotted are the temperatures at that relevant height.

Their approach was to average the complete tropical domain (but not the complete globe) and, therefore, average out the ascending and descending portions of the atmosphere:

Through horizontal averaging, variations of water vapor and temperature that are related to the horizontal transport by the large-scale circulation will be largely removed, and thus the water vapor and temperature relationship obtained is more indicative of the property of moist convection, and is thus more relevant to the issue of water vapor feedback in global warming.

In analyzing the results, they said:

Overall, the variations of specific humidity correlate positively at all levels with the temperature variations at the same level. However, the strength of the correlation between specific humidity variations and the temperature variations at the same level appears to be strongly height dependent.

Sun & Oort (1995)


Early in the paper they explained that pre-1973 values of water vapor were more problematic than post-1973 and therefore much of the analysis would be presented with and without the earlier period. Hence, the two plots in the graph above.

Now they do something even more interesting and plot the results of changes in specific humidity (q) with temperature and compare with the curve for constant relative humidity:

Sun & Oort (1995)


The dashed line to the right is the curve of constant relative humidity. (For those still trying to keep up: if specific humidity were constant, the measured values would be a straight vertical line through zero).

The largest changes of water vapor with temperature occur in the boundary layer and the upper troposphere.

They note:

The water vapor in the region right above the tropical convective boundary layer has the weakest dependence on the local temperature.

And also that the results are consistent with the conceptual picture put forward by Sun and Lindzen (1993). Well, it is the same De-Zheng Sun..

Vertical Structure of Water Vapor Variations

How well can we correlate what happens at the surface with what happens in the “free troposphere” (the atmosphere above the boundary layer)?

If we want to understand temperature vertically through the atmosphere it correlates very well with the surface temperature. Probably not a surprise to anyone.

If we want to understand variations of specific humidity in the upper troposphere, we find (Sun & Oort find) that it doesn’t correlate very well with specific humidity in the boundary layer.

Sun & Oort (1995)


Take a look at (b) – this is the correlation of local temperature at any height with the surface temperature below. There is a strong correlation and no surprise.

Then look at (a) – this is the correlation of specific humidity at any height with the surface specific humidity. We can see that the correlation reduces the higher up we go.

This demonstrates that the vertical movement of water vapor is not an easy subject to understand.

Sun and Oort also comment on Raval and Ramanathan (1989), the source of the bulk of Clouds and Water Vapor – Part One:

Raval and Ramanathan (1989) were probably the first to use observational data to determine the nature of water vapor feedback in global warming. They examined the relationship between sea surface temperature and the infrared flux at the top of the atmosphere for clear sky conditions. They derived the relationship from the geographical variations..

However, whether the tropospheric water vapor content at all levels is positively correlated with the sea surface temperature is not clear. More importantly, the air must be subsiding in clear-sky regions. When there is a large-scale subsidence, the influence from the sea is restricted to a shallow boundary layer and the free tropospheric water vapor content and temperature are physically decoupled from the sea surface temperature underneath.

Thus, it may be questionable to attribute the relationships obtained in such a way to the properties of moist convection.

Conclusion

The subject of water vapor feedback is not a simple one.

In their analysis of long-term data, Sun and Oort found that water vapor variations with temperature in the tropical domain did not match constant relative humidity.

They also, like most papers, caution drawing too much from their results. They note problems in radiosonde data, and also that statistical relationships observed from inter-annual variability may not be the same as those due to global warming from increased “greenhouse” gases.

Articles in this Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses – answering some questions about Part One

Part Two – some introductory ideas about water vapor including measurements

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

References

How Dry is the Tropical Free Troposphere? Implications for Global Warming Theory,
Spencer & Braswell, Bulletin of the American Meteorological Society (1997)

Humidity-Temperature Relationships in the Tropical Troposphere, Sun & Oort, Journal of Climate (1995)

Distribution of Tropical Tropospheric Water Vapor, Sun & Lindzen, Journal of Atmospheric Sciences (1993)

Sensitivity of the Earth’s Climate to height-dependent changes in the water vapor mixing ratio, Shine & Sinha, Nature (1991)

Some Coolness concerning Global Warming, Lindzen, Bulletin of the American Meteorological Society (1990)

Notes

Note 1 – The total amount of water vapor, TPW ( total precipitable water), is obviously something we want to know, but we don’t have enough information if we don’t know the distribution of this water vapor with height. It’s a shame, because TPW is the easiest value to measure via satellite.

Note 2 – Obviously the total mass of air is conserved. If small areas have rapidly rising air, larger areas will have slower subsiding air.

Read Full Post »

After posting Part Two on water vapor, some people were unhappy that questions from Part One were not addressed.

I have re-read through the many comments and questions and attempt to answer them here. I ignore the questions unrelated to the feedbacks of water vapor and clouds – like the many questions about the moon, answered in Lunar Madness and Physics Basics. I also ignore the personal attacks from a commenter that my article(s) was/were deceptive.

The Definition

The major point from the perspective of a few commenters (including critics of Part Two) was about the radiometric definition of the “greenhouse” effect.

Ramanathan analyzed the following equation:

F = σT⁴ – G

where F is outgoing longwave radiation (OLR) at top of atmosphere (TOA), T is surface temperature, and G is the “greenhouse” effect.

For newcomers, F averages around 240 W/m² (and higher in clear sky conditions).

The first term on the right, σT⁴, is the Stefan-Boltzmann equation which calculates radiation from a surface from its temperature, e.g., for a 288K surface (15°C) the surface radiation = 390 W/m².

If the atmosphere had no radiative absorbers (no “greenhouse” effect) then F = σT⁴, which means G = 0. See The Hoover Incident.
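Plugging in typical global values shows the size of G:

```python
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2.K^4

Ts = 288.0    # surface temperature, K
F = 240.0     # outgoing longwave radiation at TOA, W/m^2

G = sigma * Ts**4 - F
print(round(G))  # ~150 W/m^2 - the radiometric "greenhouse" effect
```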

The approach Ramanathan took was to find out the actual climate response over 1988-89 from ERBE scanner data. What happens to the parameters F and G when temperature increases?

Why is it important?

If increasing CO2 warms the planet, will there be positive, negative or no feedback from water vapor? Apparently, Ramanathan thought that analyzing the terms in the equation under changing conditions could shed some light on the subject.

However, the equation itself was brought into question, mainly by Colin Davidson, in a number of comments including:

..In the section “Greenhouse Effect and Water Vapour”, he introduces an equation:

F = σTs^4 – G

I didn’t understand what this equation was trying to say. How are the Surface Radiation and the Outgoing Long Range Radiation linked, noting that there are other fluxes from the Surface into the Atmosphere? And one of these (evaporation) is stronger than the NET Surface radiation, while direct Conduction is also a significant flux?
The sentence “So the radiation from the earth’s surface less the “greenhouse” effect is the amount of radiation that escapes to space.” is not accurate.

Missing from this sentence are the following:
Incoming Solar Radiation Absorbed by the Atmosphere(A);
Evaporated Water from the Surface(E);
Direct Conduction from Surface to Atmosphere(C)
Back-Radiation from Atmosphere to Surface (B)

Writing down the fluxes for the atmosphere as a black box:
F= A+(S-B)+E+C (where S=Stephan-Boltzmann Surface Radiation),
Making G = S-F = B-A-E-C

So G doesn’t appear to me to make much PHYSICAL sense, and is certainly NOT the “Greenhouse Effect”, as the evaporative and conductive species are not greenhouse animals, but B and A certainly belong in the zoo..

And:

..I have shown that both those claims are incorrect. G does not represent the “Greenhouse” effect of an IR active atmosphere, as it contains terms (Evaporation and Conduction) which are plainly IR insensitive, nor does it represent the upward surface flux less the amount of longwave radiation leaving the planet.

What G represents is anyone’s guess, but it is not an easily identifiable physical quantity.

Hence my problem with the equation F=S-G as a starting point for any analysis – it doesn’t seem to represent anything coherent. Why not start with the TOA balance, the Surface balance, or the Atmospheric balance?

I am concerned about this. Is the whole theorem of climate sensitivity based on the incorrect notion that the factor G represents the Greenhouse Effect?

As well as:

In this post I summarise some of my concerns.

1. F= Sunlight – Reflected sunlight. Unless the earth’s short-wave albedo changes, the Outgoing Long-Wave Radiation(F) is constant, whatever the state of the Greenhouse. So dF/dTs does not represent the Greenhouse Effect, but is a representation of the change of surface temperature with cloudiness.

2. F= S(urface Radiation) + G, but G= E(vaporation) +C(onduction) + A(bsorbed Solar Radiation) – B(ack Radiation). Of these terms, only A and B are Greenhouse dependent. C and E are Greenhouse independent. dG/dTs is therefore not a measure of the Greenhouse Effect.

3. It is unclear if the amount of radiation from the surface escaping “through the window” direct to space is constant. If CO2 concentration increases we expect some tightening of the window, but not much. On the other hand any increase in surface temperature will increase the amount of radiation, so the two processes may balance. Kiehl and Trenberth keep this constant at 40W/m^2 despite raising the surface temperature over time by 1DegC, suggesting that it may be close to constant.

Assuming that is so, the fluxes warming the atmosphere from the Surface are constant, the (B)ack radiation increasing by roughly the same as the sum of the increases in Radiation from the Surface(S) and (E)vaporation. Basically when the surface temperature increases, the increase in Evaporation is balanced by a decrease in Net Surface Radiation Absorbed by the Atmosphere.
As the heat entering the lower atmosphere is unchanged (though the amounts entering at each height will change), the overall Lapse Rate to the tropopause will be unchanged. So the temperature at the Tropopause will always be the Surface Temperature minus a Constant. The sensitivity of the Tropopause temperature is therefore the same as (and driven by) the sensitivity of the Surface temperature to changes in “forcing” (either solar or back-radiation).

This sensitivity is between 0.095 and 0.15 DegC/W/m^2.

And a search in that post will highlight all the other comments.

My attempts at explaining the concept did not appear successful. I don’t think I will have any more success this time, but clearly others think it is important.

I find Colin’s comments confused, but I’ll start with the main point of Ramanathan (paraphrased by me):

What happens if the climate warms from CO2 (or solar or any other cause) – will water vapor in the climate increase, causing a larger “greenhouse” effect?

That’s the question that many people have asked. These people include well-known figures like Richard Lindzen and Roy Spencer, who believe that negative feedbacks dominate.

Scenarios to Demonstrate the Usefulness of the Definition

If the surface temperature in one location goes from 288K (15°C) to 289K (16°C) the surface radiation will increase by 5.4 W/m². (The Stefan-Boltzmann law). How can we determine whether positive or negative feedbacks exist?

Condition 1. Suppose under clear skies when the temperature was 288K we measured OLR = 265 W/m² and when the temperature increased to 289K we measured OLR = 275 W/m². That means OLR has increased by 10 W/m² for a surface radiation increase of 5.4 W/m². Let’s call this condition Good.

Condition 2. Suppose instead that when the temperature increased to 289K we measured OLR = 265W/m². That means OLR has not changed when surface radiation increased by 5.4 W/m². Let’s call this condition Bad.

  • In condition Good we have negative feedback, where the atmospheric “greenhouse” response to higher temperatures is to reduce its absorption of longwave radiation
  • In condition Bad we have positive feedback – the situation where more heat has been trapped by the atmosphere – the atmosphere has increased its absorption of longwave radiation
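The classification of the two conditions above can be sketched numerically. `feedback_sign` is a hypothetical helper of mine, comparing the change in OLR with the change in surface emission (which rises by ~5.4 W/m² from 288K to 289K):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2.K^4

def surface_emission(t_k):
    """Stefan-Boltzmann surface radiation, W/m^2."""
    return SIGMA * t_k**4

def feedback_sign(t1, olr1, t2, olr2):
    """Negative feedback if OLR rises faster than surface emission, else positive."""
    d_olr = olr2 - olr1
    d_surface = surface_emission(t2) - surface_emission(t1)
    return "negative" if d_olr > d_surface else "positive"

print(feedback_sign(288, 265, 289, 275))  # condition "Good" -> negative
print(feedback_sign(288, 265, 289, 265))  # condition "Bad"  -> positive
```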

Whether or not more heat also leaves the surface by evaporation or conduction doesn't really matter for this analysis – those fluxes don't tell us what we need to know.

In fact, it’s quite likely that if evaporation increases we might find that positive feedback exists. However, that depends on exactly where the water vapor ends up in the atmosphere (as the absorption of longwave radiation by water vapor is non-linear with height) and how this also changes the lapse rate (as the moist lapse rate is less than the dry lapse rate).

It’s possible that if convective heat fluxes from the surface increase we might find that negative feedback exists – this is because heat moved from the surface to higher levels in the atmosphere increases the ability of the atmosphere to radiate out heat. This is also part of the lapse rate feedback.

But all of these different effects are wrapped up in the ultimate question of how much heat leaves the top of atmosphere as a function of changes in the surface temperature. This is what feedback is about.

So for feedback we really want to know – does the absorptance of the atmosphere increase as surface temperature increases? (see note 3).

That's as much as I can explain about why this measure is the useful one for understanding feedback. This is why everyone who deals with the subject reviews the same fundamental equation. This includes those who believe that negative feedbacks dominate.

See Note 2 and Note 3.

Colin Davidson’s points

Colin often makes very sensible points, but many of the statements and claims cited earlier suffer from irrelevance, inaccuracy or a lack of proof.

Missing the point – as I described above – was the main problem. In the interests of completeness we will consider some of his statements.

The third comment cited above indicates one of the main problems with his approach:

..Unless the earth’s short-wave albedo changes, the Outgoing Long-Wave Radiation(F) is constant, whatever the state of the Greenhouse. So dF/dTs does not represent the Greenhouse Effect..

This is not the case. Suppose that absorbed solar radiation is constant. This does not mean that OLR (=”F” in Colin’s description) will be constant. From the First Law of Thermodynamics:

Energy in = Energy out + energy added to the system

In long term equilibrium energy in = energy out. However, we want to know what happens if something disturbs the system. For example, if increased CO2 reduces OLR then heat will be added to the climate system until eventually OLR rises to match the old value – but with a higher temperature in the climate. The same is the case with any other forcing. (See The Earth’s Energy Budget – Part Two).

In fact we expect that for a particular location and time OLR won’t equal solar radiation absorbed. We also have the problem that any “out of equilibrium” signal we might try to measure at TOA is very small, and within the error bars of our measuring equipment.

I didn’t understand what this equation was trying to say. How are the Surface Radiation and the Outgoing Long Range Radiation linked, noting that there are other fluxes from the Surface into the Atmosphere? And one of these (evaporation) is stronger than the NET Surface radiation, while direct Conduction is also a significant flux?

This is a very basic point. The surface radiation and outgoing longwave radiation (OLR) are linked by the equations of atmospheric absorption and emission (see note 4). With no absorption, OLR = surface radiation. The more the concentration of absorbers in the atmosphere the greater the difference between surface radiation and OLR. If we want to find out the feedback effect of water vapor this is exactly the relationship we need to study. Surface radiation and OLR are linked by the very effect we want to study.
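To illustrate the link, here is a toy single-slab model of my own (not Ramanathan's radiative transfer formulation): the atmosphere is one slab with longwave absorptance a and temperature Ta, so OLR is the transmitted surface radiation plus the slab's own emission, and G grows as the absorptance grows:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2.K^4

def olr_single_slab(t_surface, t_atmos, absorptance):
    """Toy model: OLR = transmitted surface radiation + slab's own emission."""
    s = SIGMA * t_surface**4
    return (1 - absorptance) * s + absorptance * SIGMA * t_atmos**4

s = SIGMA * 288**4                        # surface radiation, ~390 W/m^2
olr_dry = olr_single_slab(288, 250, 0.7)  # illustrative absorptance values
olr_wet = olr_single_slab(288, 250, 0.8)  # more water vapor -> higher absorptance
print(round(s - olr_dry), round(s - olr_wet))  # G = S - OLR grows: 118 -> 135
```

In this sketch G = a·(σTs⁴ − σTa⁴): with no absorption G is zero, and G increases with the concentration of absorbers – which is exactly the relationship under study.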

A similar problem is suggested in the second comment cited:

..Hence my problem with the equation F=S-G as a starting point for any analysis – it doesn’t seem to represent anything coherent. Why not start with the TOA balance, the Surface balance, or the Atmospheric balance?

How is it possible to extract positive or negative feedback from these?

We expect that at TOA and at the surface the long term global annual average will balance to zero. But we can’t easily measure evaporation or sensible heat. Without carefully placed pyrgeometers we can’t measure DLR (downward longwave radiation) and without pyranometers we can’t measure the incident solar radiation at the surface. In any case even if we had all of these terms it doesn’t help us extract the sign or magnitude of the water vapor feedback.

If we had lots of measurement capability at a particular location it might help us to estimate the evaporation. But then we have the problem of where this water vapor ends up. This is a point that Richard Lindzen has frequently made – and is also made by Held & Soden in their review article (cited in Part Two). Approaching the problem from the surface energy balance, without knowing where the water vapor ends up, we can't calculate the sign of the water vapor feedback.

Colin also makes a number of other comments of dubious relevance in the last section of text I extracted.

He states that evaporation and conduction are “greenhouse independent” – but I question this. More “greenhouse” gases mean more surface irradiation from the atmosphere, and therefore more evaporation and conduction (and convection).

The amount of radiation escaping through the so-called “atmospheric window” is not constant (perhaps a subject for a later article). The rest of the statement covers the belief in some kind of simplified atmospheric model where everything is in balance – and therefore a positive feedback is defined out of existence:

Basically when the surface temperature increases, the increase in Evaporation is balanced by a decrease in Net Surface Radiation Absorbed by the Atmosphere.
As the heat entering the lower atmosphere is unchanged (though the amounts entering at each height will change), the overall Lapse Rate to the tropopause will be unchanged. So the temperature at the Tropopause will always be the Surface Temperature minus a Constant. The sensitivity of the Tropopause temperature is therefore the same as (and driven by) the sensitivity of the Surface temperature to changes in “forcing” (either solar or back-radiation).

When surface temperature increases, evaporation is not balanced by a decrease in net surface radiation absorbed by the atmosphere. In fact, when surface temperature increases, surface radiation increases and atmospheric absorption of this radiation may also increase (due to humidity increases from more evaporation). Exactly what change this brings in DLR (atmospheric radiation received by the surface) is a question to be answered. Asserting that everything is in balance means the answer about positive feedback is assumed in advance. If so, this needs to be demonstrated – not claimed.

The rest of the statement above suffers from the same problem. None of it has been demonstrated. If I understand it at all, it’s kind of a claim of climate equilibrium which therefore “proves” (?) that there isn’t water vapor feedback. However, I don’t really understand what it might demonstrate.

Other Comments Needing Response from the Original Article

From Leonard Weinstein:

Since the issue is not resolved that the temperature in the upper troposphere has increased, and the relative humidity has not stayed nearly constant (it has clearly decreased) over the period of greatest lower troposphere temperature increase, the argument seems less than resolved. The lack of increased water vapor in the stratosphere pushes that point even further.

The argument isn’t resolved by this piece of work. This is one attempt to measure the effect over a period of good quality data.

Finally, the data and analysis of Roy Spencer seems to lead to different conclusions even on the data interpretation. Can you point out his errors and respond to those issues?

Roy Spencer’s analysis doesn’t address this period of measurement. His paper is about the period from 2000-2008.

From NicL:

However, I take issue with your statement “It should be clear from these graphics that observed variations in the normalized “greenhouse” effect are largely due to changes in water vapor.” The spatial maps referred to merely indicate a correlation between these two things. It is unscientific to infer causation from correlation. Ramanathan himself goes no further than to say the graphics suggest that variations in water vapour rather than lapse rates contribute to regional variations in the greenhouse effect.

It’s unscientific to infer causation from correlation in the absence of a theory that links them together. It’s solidly established that water vapor absorbs longwave radiation from the surface, and it’s solidly established that CO2 and other “greenhouse” gases are well-mixed through the atmosphere, while water vapor is not. Therefore, there is a strong theoretical link.

I think, in common with various other respondents, that changes in lapse rates and in the height of the tropopause are key issues in modelling the greenhouse effect, yet they seem rarely discussed. Ramanathan’s chapter does not really cover them.

What makes you say they are rarely discussed? There are many papers discussing the different processes involved in modeling water vapor feedback. However, Ramanathan’s chapter is primarily about measurements. Of course he refers to the different aspects of feedback in the chapter.

Conclusion

One commenter in part two said:

I want to give him a chance to reflect on whether he wants to defend the Ramanathan analysis in Part 1 or separate himself with dignity, which he can still do..

The primary question seemed to be the approach, and not the results, of Ramanathan.

Ramanathan tested the changes in atmospheric absorptance of longwave radiation with temperature changes. To claim this is inherently wrong is a bold claim and one I can’t understand. Neither can Richard Lindzen or Roy Spencer, at least, not from anything I have read of their work.

There are other possible responses to Ramanathan’s results. Other researchers may have replicated his work and found different results. Other researchers may have analyzed different periods and found different changes.

There are also theoretical considerations – whether changes in the equilibrium temperature as a result of increased CO2 can be considered as the same conditions under which seasonal changes indicated positive water vapor feedback.

The question for readers to ask is: Did Ramanathan find something important that needs to be considered?

Ramanathan himself said:

However, our results do not necessarily confirm the positive feedback resulting from the fixed relative humidity models for global warming, for the present results are based on annual cycle.

If someone can point out the theoretical flaw in Ramanathan’s work then I might “separate myself with dignity”; otherwise I will be happy to stand by the idea that he has demonstrated something that needs to be considered.
 

Articles in this Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part Two – some introductory ideas about water vapor including measurements

Part Three – effects of water vapor at different heights (non-linearity issues), problems of the 3d motion of air in the water vapor problem and some calculations over a few decades

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

Note 1.

The actual change in emission of radiation for a 1°C rise in temperature depends on the temperature itself, one of the many non-linearities in science. The example in the article was for the specific temperature of 288K, along with the desire to avoid confusing readers with too many caveats.

Here is the graph of radiation change for a 1°C rise vs temperature:

For the mathematicians it is an easy exercise. For non-mathematicians, the change in radiation = 4σT³ W/m².K (obtained by differentiating the Stefan-Boltzmann equation with respect to T).
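The derivative can be checked numerically against a finite difference (a small sketch of my own):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2.K^4

def d_emission_dt(t_k):
    """d(sigma*T^4)/dT = 4*sigma*T^3, the change in radiation per 1 K."""
    return 4 * SIGMA * t_k**3

print(round(d_emission_dt(288), 1))  # -> 5.4 W/m^2 per K at 288 K

# cross-check the derivative against a centered finite difference
finite_diff = SIGMA * (288.5**4 - 287.5**4)
assert abs(d_emission_dt(288) - finite_diff) < 0.01
```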

Note 2.

This article is about a specific point in Ramanathan’s work queried by some of my readers. His explanation of how to determine feedbacks is much more lengthy and includes some important points, especially the demonstration of the relationships in time between the various changes. These are important for the determination of cause and effect. See the original article and especially the online chapter for a more detailed explanation.

Note 3.

The rate of change of surface radiation with temperature, d(σT⁴)/dT = 4σT³ W/m².K (see note 1), is 5.4 W/m² per K at 288K. However, the rate of change of OLR, dF/dT, for the no feedback condition is slightly more challenging to determine and not intuitively obvious.

Ramanathan, based on his earlier work from 1981, determined the “no feedback” condition (i.e., without lapse-rate feedback or water vapor feedback) was dF/dT=3.3 W/m².K. And for positive feedback this parameter, dF/dT would be less than 3.3.

Roy Spencer and William Braswell, in their just-published JGR paper On the diagnosis of radiative feedback in the presence of unknown radiative forcing, use exactly the same value for the no feedback condition.
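As a side calculation, dF/dT = 3.3 W/m².K implies a no-feedback temperature response of ΔT = ΔF/3.3. The 3.7 W/m² forcing for doubled CO2 used below is a standard value, not a figure from this article:

```python
DF_DT_NO_FEEDBACK = 3.3  # W/m^2 per K, Ramanathan's no-feedback value
FORCING_2XCO2 = 3.7      # W/m^2, standard doubled-CO2 forcing (assumed here)

delta_t_no_feedback = FORCING_2XCO2 / DF_DT_NO_FEEDBACK
print(round(delta_t_no_feedback, 1))  # -> 1.1 K
```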

Note 4.

There are many different formulations of the solutions to the radiative transfer equations. This version is from Ramanathan’s chapter in Frontiers of Climate Modeling:

This is just to demonstrate that there is a strong mathematical link between surface radiation and OLR, and one that is very relevant for determining whether positive or negative feedbacks exist.


In Part One we covered a lot of ground. In this next part we will take a look at some basics about water vapor.

The response of water vapor to a warmer climate is at the heart of concerns about the effect of increasing the inappropriately-named “greenhouse” gases like CO2 and methane. Water vapor is actually the major “greenhouse” gas in the atmosphere. But unlike CO2, methane and N2O, there’s a huge potential supply of water vapor readily available to move into the atmosphere. And all it takes is a little extra heat to convert more of the oceans and waterways into water vapor.

Of course, it’s not so simple.

Before we dive into the subject, it’s worth touching on the subject of non-linearity – something that doesn’t just apply to the study of water vapor. Some people are readily able to appreciate the problem of non-linearity. For others it’s something quite vague. So before we’ve even started we’ll digress into slightly more familiar territory, just to give a little flavor to non-linearity.

A Digression on Non-Linearity

People who know all about this can just skip to the next section. For most people who haven’t studied a science or maths subject, it’s natural to assume that the world is quite a linear place. What am I talking about?

Here’s an example, familiar to regular readers of this blog and anyone who has tried to understand the basic concept of the “greenhouse” effect.

If the atmosphere did not absorb or emit radiation the surface of the earth would radiate at an average of around 240 W/m² (see The Hoover Incident, CO2 – An Insignificant Trace Gas? and many other articles on this blog).

This would mean a surface temperature of a chilly 255K (-18°C).

With the “greenhouse” effect of a radiating atmosphere, the surface is around 288K (+15°C) and radiates 390 W/m².

As one commenter put it (paraphrasing to save finding the quote):

Clearly you haven’t done your sums right. If 240 W/m² means a temperature of 255K, then 390 W/m² means a temperature of (390/240)x255 which is way more than the actual temperature of 288K (15°C).

That commenter spelt out the maths, but many more people don’t even go that far and yet feel instinctively that something is wrong when results can’t be simply added up, or fitted on a straight line.

In the case of that approach, the actual temperature – assuming a linear relationship between radiation and temperature – would be 414K or 141°C. That approach is wrong. The world is not linear.

How much radiation does it take to raise the equilibrium surface temperature by 10°C (or 10K)? This assumes a simple energy balance where more radiation received heats up the surface until it radiates out the same amount.

The answer might surprise you. It depends. It depends a lot. Here’s a graph:

So if the surface is at 100K ( -173°C), it takes only 2.6 W/m² to lift the temperature by 10K (10°C).

  • At 200K (-73°C), it takes 20 W/m²
  • At 300K (27°C), it takes 65 W/m²
  • At 400K (127°C), it takes 151 W/m²

The equation that links radiation to temperature is the Stefan-Boltzmann equation, j = εσT⁴, where T is temperature and ε is emissivity.

If the equation was something like j=kT, then it wouldn’t matter what the current temperature was – the same amount of energy would lift the temperature another 10K. For example, if it took 10 W/m² to lift the temperature from 100K to 110K, then it would take 10W/m² to lift the temperature from 300K to 310K. That would be a linear relationship.
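The figures quoted above can be reproduced directly from the Stefan-Boltzmann equation – a simple energy-balance sketch of mine, taking emissivity as 1:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2.K^4

def extra_flux_for_10k(t_k):
    """Extra radiation needed to move the equilibrium temperature from T to T+10 K."""
    return SIGMA * ((t_k + 10)**4 - t_k**4)

for t in (100, 200, 300, 400):
    print(t, round(extra_flux_for_10k(t), 1))
# approximately 2.6, 19.6, 64.4 and 150.7 W/m^2 - far from a straight line
```

These match the rounded values quoted above: the same 10K step costs roughly sixty times more radiation at 400K than at 100K.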

But the world isn’t linear most of the time. Here are some non-linear examples:

  • radiation from surfaces (and gases) vs temperature
  • absorption of radiation by gases vs pressure
  • absorption of radiation by gases vs wavelength
  • pressure vs height (in the atmosphere)
  • water vapor concentration in the atmosphere vs temperature
  • convective heat flow

It’s important to try and unlearn the idea of linearity. Intuition isn’t a good guide for physics. At best you need a calculator or a graph.

Digression over.

Water Vapor Distribution

Let’s take a look at water vapor distribution in the real world (below).

Both graphs below have latitude along the horizontal axis (x-axis) and pressure along the vertical axis (y-axis). Pressure = 1000 (mbar) is sea level, and pressure = 200 is the top of the troposphere (lower atmosphere).

The left side graph is specific humidity, or how much mass of water vapor exists in grams per kg of dry air.

The right side graph is relative humidity, which will be explained. Both are annual averages.

Water Vapor Observations, Soden – “Frontiers of Climate Modeling”, chapter 10

As a comparison the two graphs below show the change in specific humidity and relative humidity from June/Jul/August to Dec/Jan/Feb:

Water Vapor Observations, Soden – “Frontiers of Climate Modeling”, chapter 10

The most important parameter for water vapor is the maximum amount of water vapor that can exist – the saturation amount. Here is the graph for saturation mixing ratio at sea level:

You can see that at 0°C the maximum mixing ratio of water vapor is 4 g/kg, while at 30°C it is 27 g/kg. Warmer air, as most people know, can carry much more water vapor than colder air.

(Note that strictly speaking air can become supersaturated, with relative humidities above 100%. But in practice it’s a reasonable guide to assume the maximum at 100%).
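As a sketch of that curve, the saturation values can be approximated with the Magnus formula for saturation vapor pressure over liquid water – an approximation I'm assuming here, evaluated at standard sea-level pressure:

```python
import math

def saturation_mixing_ratio(t_celsius, pressure_hpa=1013.25):
    """Approximate saturation mixing ratio in g/kg over liquid water,
    using the Magnus formula for saturation vapor pressure (es, in hPa)."""
    es = 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))
    return 1000 * 0.622 * es / (pressure_hpa - es)

print(round(saturation_mixing_ratio(0)))   # -> 4 g/kg at 0 C
print(round(saturation_mixing_ratio(30)))  # -> 27 g/kg at 30 C
```

This reproduces the two points read off the graph: about 4 g/kg at 0°C and 27 g/kg at 30°C.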

Here’s the graph for temperatures below zero, for water and for ice – they are quite similar:

Relative humidity is the ratio of actual humidity to the saturation value.

Saturation occurs when air is in equilibrium over a surface of water or ice. So air very close to water is usually close to saturation – unless it has just been blown in from colder temperatures.

The Simplified Journey of a Parcel of Moist Air

Let’s consider a parcel of air just over the surface of a tropical ocean where the sea surface temperature is 25°C. The relative humidity will be near to 100% and specific humidity will be close to 20 g/kg. The heating effect of the ocean causes convection and the parcel of air rises.

As air rises it cools via adiabatic expansion (see the lengthy Convection, Venus, Thought Experiments and Tall Rooms Full of Gas – A Discussion).

The cooler air can no longer hold so much water and it condenses out into clouds and precipitation. Eventually this parcel of air subsides back to ground. If the maximum height reached on the journey was more than a few km then the mixing ratio of the air will be a small fraction of its original value.

When the subsiding air reaches the ground – much warmer once again due to adiabatic compression – its relative humidity will now be very low – as the holding capacity of this air is once again very high.
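A rough numerical sketch of this journey, using the Magnus approximation for saturation vapor pressure and an assumed residual of 5 g/kg after condensation aloft (an illustrative value, not from the article):

```python
import math

def saturation_mixing_ratio(t_celsius, pressure_hpa=1013.25):
    """Approximate saturation mixing ratio in g/kg, Magnus formula over water."""
    es = 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))
    return 1000 * 0.622 * es / (pressure_hpa - es)

# Near the 25 C ocean surface the parcel holds ~20 g/kg at ~100% RH.
# Assume condensation aloft strips it down to 5 g/kg (illustrative value)
# before it subsides and warms back to 25 C by adiabatic compression.
remaining = 5.0  # g/kg after the ascent, assumed
rh_after_subsidence = 100 * remaining / saturation_mixing_ratio(25)
print(round(rh_after_subsidence))  # -> 25 (% relative humidity)
```

The same water vapor content that saturated the cold air aloft gives only ~25% relative humidity once the parcel is back at surface temperature.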

Take a look at the graph shown earlier of relative humidity:

Annual averages don’t quite portray the journey of one little parcel of air, but the main features of the graph might make more sense. In a very broad sense air rises in the tropics and descends into the extra-tropics, which is why the air around 30°N and 30°S has a lower relative humidity than the air at the tropics or the higher latitudes.

Why isn’t the air higher up in the tropics at 100% relative humidity?

Because the circulation is not just rising air – there are smaller regions of faster-moving rising air, and a much larger area of slowly subsiding air.

Held & Soden in an excellent review article (reference below), said this:

To model the relative humidity distribution and its response to global warming one requires a model of the atmospheric circulation. The complexity of the circulation makes it difficult to provide compelling intuitive arguments for how the relative humidity will change. As discussed below, computer models that attempt to capture some of this complexity predict that the relative humidity distribution is largely insensitive to changes in climate.
(Emphasis added).

The Complexity

The ability of air to hold water vapor is a very non-linear function of temperature. Water vapor itself has very non-linear effects in the radiative balance in the atmosphere depending on its height and concentration. Upper tropospheric water vapor is especially important, despite the low absolute amount of water vapor in this region.

Many researchers have proposed different models for water vapor distribution and how it will change in a warmer world – we will have a look at some of them in subsequent articles.

Measurement of water vapor distribution has mostly not been accurate enough to paint a full picture.

Measurements

There are two ways that water vapor is measured:

Radiosondes (instruments in weather balloons) provide a twice-daily high resolution vertical profile (resolution of 100m) of temperature, pressure and water vapor. However, in many areas the coverage is low, e.g. over the oceans.

Radiosondes provide the longest unbroken series of data – going back to the 1940s.

Measurements of humidity from radiosondes are problematic – often over-stating water vapor higher up in the troposphere. Many older sensors were not designed to measure the low levels of water vapor above 500hPa. And as countries have upgraded their sensors, the changes appear to have introduced a spurious drying trend.

Comparison of measurements of water vapor between adjacent countries using different manufacturers of radiosonde sensors demonstrates that there are many measurement problems.

Here’s a map of radiosonde distribution:

From “Frontiers of Climate Modeling” (2006)

Satellites provide excellent coverage but mostly lack the vertical resolution of water vapor. One method of measurement which gives the best vertical resolution (around 1km) is solar occultation or limb sounding. The satellite views the sun “sideways” through the atmosphere at a water vapor absorption wavelength like 0.94μm, and as the effective height changes the amount of water vapor can be calculated against height.

This method also allows us to measure water vapor in the stratosphere (and in fact it’s best suited for measuring the stratosphere and the highest levels of the troposphere).

Here are the established satellite systems for measuring water vapor:

From “Frontiers of Climate Modelling” (2006)

Here is a water vapor measurement from Sage II:

From “Frontiers in Climate Modeling” (2006)

There are many disadvantages of solar occultation measurement – large geographic footprint of measurement, knowledge of ozone distribution is required and measurements are limited to sunrise and sunset.

The other methods involve looking down through the atmosphere – so they provide better horizontal resolution but worse vertical resolution. Water vapor absorbs and emits thermal radiation at wavelengths through the infrared spectrum. Different wavelengths with stronger or weaker absorption provide different “weighting” to the water vapor vertical distribution.

The new Earth Observing System (EOS), which began in 1999, has many instruments for improved measurement:

Mostly these provide improvements, rather than revolutions, in accuracy and resolution.

Finally, an interesting picture of upper tropospheric relative humidity from Held & Soden (2000):

Upper tropospheric humidity, Held & Soden (2000)

You can see – no surprise – that the relative humidity is highest around the clouds and reduces the further away you move from the clouds.

Conclusion

Understanding water vapor is essential to understanding the climate system and what kind of feedback effect it might have.

However, the subject is not simple, because unlike CO2, water vapor is “heterogeneous” – meaning that its concentration varies across the globe and vertically through the atmosphere. And the response of the climate system to water vapor is non-linear.

Measurements of water vapor are not yet at the level of accuracy and resolution needed to confirm the models, but there have been many recent advances in measurement.

Articles in this Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses – answering some questions about Part One

Part Three – effects of water vapor at different heights (non-linearity issues), problems of the 3d motion of air in the water vapor problem and some calculations over a few decades

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

References

Frontiers of Climate Modeling, ed. J.T. Kiehl & V. Ramanathan, Cambridge University Press (2006)

Water Vapor Feedback and Global Warming, I.M. Held & B.J. Soden, Annual Review of Energy and the Environment (2000)


During a discussion about Venus (Venusian Mysteries), Leonard Weinstein suggested a thought experiment that prompted a 2nd article of mine. Unfortunately, it really failed to address his point.

In fact, it took me a long time to get to grips with Leonard’s point and 500 comments in (!) I suggested that we write a joint article.

I also invited Arthur Smith who probably agrees mostly with me, but at times he was much clearer than I was. And I’m not sure we are totally in agreement either. I did offer Leonard the opportunity to have another contributor on his side, but he is happy to write alone – or draw on one of the other contributors in forming his article. The format here is quite open.

The plan is for me to write the first section, and then Arthur to write his, followed by Leonard. The idea behind it is to crystallize our respective thoughts so that others can review them, rather than wading through 500+ comments. What was the original discussion about?

It’s worth repeating Leonard Weinstein’s original thought experiment:

Consider Venus with its existing atmosphere, and put a totally opaque enclosure (to incoming Solar radiation) around the entire planet at the average location of present outgoing long wave radiation. Use a surface with the same albedo as present Venus for the enclosure. What would happen to the planetary surface temperature over a reasonably long time? For this case, NO Solar incoming radiation reaches the surface. I contend that the surface temperature will be about the same as present.

Those who are interested in that debate can read the complete idea and the many comments that followed. During the course of our debate we each posed different thought experiments as a means to finding the flaws in the various ideas.

At times we lost track of which experiment was being considered. Many times we didn’t quite understand the ideas that were posed by others.

And therefore, before I start, it’s worth saying that I might still misrepresent one of the other points of view. But not intentionally.

Introductory Ideas – Pumping up a Tyre

This first section is uncontroversial. It is simply aimed at helping those unfamiliar with terms like adiabatic expansion. Unfortunately, it will be brief, with more Wikipedia links than usual (if there are many questions on these basics then I might write another article).

So let’s consider pumping up a bicycle tire. When you do, you find that everything around the valve can get pretty hot. Why is that? Does high pressure cause high temperature? Let’s review two idealized ways of compressing an ideal gas:

  • isothermal compression – which is so slow that the temperature of the gas doesn’t rise
  • adiabatic compression – which is so fast that no heat escapes from the gas during the process

Pressure and volume of a gas are inversely related if temperature is kept constant – this is Boyle’s law.

Isothermal compression, Thermal Physics – Schroeder

 

Imagine pumping up a tire very very slowly. Usually this isn’t possible because the valve leaks.

If it were possible, you would find that the work done in compressing the gas didn’t increase the gas temperature, because the heat generated would continually flow out to the wheel rim and the surrounding atmosphere.

Now imagine pumping up a tire very quickly. The usual way. In this case, you are adding energy to the system and there is no time for the temperature to equalize with the surroundings, so the temperature increases (because work is done on the gas):

 

Adiabatic Compression, Thermal Physics – Schroeder

 

The ideal gas law can be confusing because three important terms exist in the one equation – pressure, volume and temperature:

PV = nRT or PV = NkT

where P = pressure, V = volume, T = absolute temperature (in K), n = number of moles, R = the gas constant, N = number of molecules and k = Boltzmann’s constant

So the two examples above give the two extremes of compression. One, the isothermal case, has the temperature held constant because the process is very slow, and one, the adiabatic case, has the energy leaving the system being zero because the process is so fast.

In a nutshell, high pressures do not, of themselves, cause high temperatures. But changing the pressure – i.e., compressing a gas – does increase the temperature if it is done quickly.
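To put numbers on the adiabatic case: for a reversible adiabatic compression of an ideal gas, T₂ = T₁(p₂/p₁)^((γ−1)/γ). A sketch, with γ = 1.4 for air and an illustrative pressure ratio:

```python
def adiabatic_temperature(T1, p1, p2, gamma=1.4):
    """Final temperature after reversible adiabatic compression of an
    ideal gas: T2 = T1 * (p2/p1)**((gamma - 1)/gamma)."""
    return T1 * (p2 / p1) ** ((gamma - 1.0) / gamma)

T1 = 288.0                                       # K, air at ~15 C
T2 = adiabatic_temperature(T1, p1=1.0, p2=4.0)   # pump quickly to ~4 atm
# T2 comes out around 430 K - which is why the pump and valve get hot.
# Isothermal (infinitely slow) compression would leave the gas at 288 K.
```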

Introductory Ideas – The “Environmental Lapse Rate” and Convection

Equally importantly, adiabatic expansion reduces the temperature in a gas.

If you lift air up in the atmosphere quickly then it will expand and cool. In dry air, some simple maths gives this cooling as a temperature drop of just under 10 K per km – the dry adiabatic lapse rate. In very moist air, this temperature drop can be as low as 4 K per km. (The actual value depends on the amount of moisture).

Imagine the (common) situation where due to pressure effects a “parcel of air” is pushed upwards a small way, say 100m. Under adiabatic expansion, the temperature will drop somewhere between 1K (1°C) for dry air and 0.4K for very moist air.

Suppose that the actual atmospheric temperature profile is such that the temperature 100m higher up is 1.5K cooler. (We would say that the environmental lapse rate was 15K/km).

In this case, the parcel of air pushed up is now warmer than the surrounding air and, therefore, less dense – so it keeps rising. This is the major idea behind convection – if the environmental lapse rate is “more than” the adiabatic lapse rate then convection will redistribute heat. And if the environmental lapse rate is “less than” the adiabatic lapse rate then the atmosphere tends to be stable against convection.

Note – the terminology can be confusing for newcomers. Even though temperature decreases as you go up in the atmosphere the adiabatic lapse rate is written as a positive number. Just imagine that the temperature in the atmosphere actually decreases by 1K per km and think what happens if the adiabatic lapse rate is 10K per km – air that is lifted up will be much colder than the surrounding atmosphere and sink back down.

Now imagine that the temperature decreases by 15K per km and think what happens if the adiabatic lapse rate is 10K per km – air that is lifted up will be much warmer than the surrounding atmosphere (so will expand and be less dense) and will keep rising.
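The two numerical examples above condense into a couple of lines – the dry adiabatic lapse rate itself is just g/cp, and the stability test is a simple comparison (standard values for dry Earth air assumed):

```python
g = 9.81        # m/s^2, gravitational acceleration
cp = 1005.0     # J/(kg K), specific heat of dry air at constant pressure

# Dry adiabatic lapse rate: just under 10 K per km, as stated in the text
dry_adiabatic_lapse = g / cp * 1000.0   # K/km, ~9.8

def parcel_keeps_rising(env_lapse_K_per_km, adiabatic_lapse_K_per_km):
    """A lifted parcel cools at the adiabatic rate; if the environment cools
    faster with height, the parcel stays warmer and less dense - convection."""
    return env_lapse_K_per_km > adiabatic_lapse_K_per_km

# The two cases from the text:
unstable = parcel_keeps_rising(15.0, dry_adiabatic_lapse)  # convection
stable = parcel_keeps_rising(1.0, dry_adiabatic_lapse)     # parcel sinks back
```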

All of this so far described is uncontentious.

The Main Contention

Armed with these ideas, the main contentious point from my side was this:

If you heat a gas sufficiently from the bottom, convection will naturally take place to redistribute heat. The environmental “lapse rate” can’t be sustained at more than the adiabatic lapse rate because convection will take over. This is the case with the earth, where most of the solar radiation is absorbed by the earth’s surface.

But if you heat a gas from the top (as in the original proposed thought experiment) then there is no mechanism to create the adiabatic lapse rate. This is because there is no mechanism to create convection. So we can’t have an atmosphere where the environmental lapse rate is greater than the adiabatic lapse rate – but we can have one where it is less.

Convection redistributes heat because of natural buoyancy, but convection can’t be induced to work the other way.

Well, maybe it’s not quite that simple…

The Very Tall Room Full of Gas

Leonard suggested – Take an empty room 1 km square and 100 km high and pour in gas at 250 K from the top. The gas doesn’t absorb or emit any radiation. What happens?

The gas is adiabatically compressed (due to higher pressure below) and the gas at the bottom ends up at a much higher temperature.

Another way to think about adiabatic compression is that height (potential energy) is converted to speed (kinetic energy) because of gravity – like dropping a cannon ball.

We all agree on that – but what happens afterwards? (And I think we were all assuming that a lid is placed over the top of the tall room and the lid effectively stays at a temperature of 250K due to external radiation – however, no precise definition of the temperature of the room’s walls and lid was made).

My view – over a massively long time the temperature at the top and bottom will eventually reach the same value. This seemed to be the most contentious point.

However, in saying that, there was a lot of discussion about exactly the state of the gas so at times I wondered whether it was fundamental thermodynamics up for discussion or not understanding each other’s thought experiments.

In making this claim that the gas will become isothermal (all at the same temperature), I am assuming that the gas will eventually be stationary on a large scale (obviously the individual gas molecules keep moving – their velocities are what define the temperature). So all of the bulk movements of air have stopped.

Conduction of heat is left as the only mechanism for movement of heat and as gas molecules collide with each other they will all eventually reach the same temperature – the average temperature of the gas. (Because external radiation to and from the lid and walls wasn’t defined this will affect what final average value is reached). Note that temperature of a gas is a bulk property, so a gas at one temperature has a distribution of velocities (the Maxwell-Boltzmann distribution).

The Tall Room when Gases Absorb and Emit Radiation

We all appeared to agree that in this case (radiatively-absorbing gases), as the atmosphere becomes optically thin, radiation will move heat very effectively and the top part of the atmosphere in this very tall room will become isothermal.

Heating from the Top

The viewpoint expressed by Leonard is that differential heating (night vs day, equatorial vs polar latitudes) will eventually cause large scale circulation, thus causing bulk movement of air down to the surface with the consequent adiabatic heating. This by itself will cause the environmental lapse rate to become very close to the adiabatic lapse rate.

I see it as a possibility that I can’t (today) disprove, but Leonard’s hypothesis itself seems unproven. Is there enough energy to drive this circulation when an atmosphere is heated from the top?

I found two considerations of this idea.

One was the Sandstrom theorem which considered heating a fluid from the bottom vs heating it from the top. More comment in the earlier article. I guess you could say Sandstrom said no, although others have picked some holes in it.

The other was in Atmospheres (1972) by the great Richard M. Goody and James C. Walker. In a time when only a little was known about the Venusian atmosphere, Goody & Walker suggested first that probably enough solar radiation made it to the surface to initiate heating from below (to cut a long story short). And later made this comment:

Descending air is compressed as it moves to lower levels in the atmosphere. The compression causes the temperature to increase. If the circulation is sufficiently rapid, and if the air does not cool too fast by emission of radiation, the temperature will increase at the adiabatic rate. This is precisely what is observed on Venus.

Venera and Mariner Venus spacecraft have all found that the temperature increases adiabatically as altitude decreases in the lower atmosphere. As we explained this observation could also be the result of thermal convection driven by solar radiation deposited at the ground, but we cannot be sure that the radiation actually reaches the ground.

What we are now suggesting as an alternative explanation is that the adiabatic temperature gradient is related to a planetary circulation driven by heat supplied unevenly to the upper levels of the atmosphere. According to this theory, the high ground temperature is caused, at least in part, by compressional heating of the descending air.

In the specific case of the real Venus (rather than our thought experiments), much more has been uncovered since Goody and Walker wrote. Perhaps the question of what happens in the real Venus is clearer – one way or the other.

What do I conclude?

I’m glad I’ve taken the time to think about the subject because I feel like I understand it much better as a result of this discussion. I appreciate Leonard especially for taking the time, but also Arthur Smith and others.

Before we started discussing I knew the answers for certain. Now I’m not so sure.

_____________________________________________________________________

By Arthur Smith

First on the question of convective heat flow from heating above, which scienceofdoom just ended with: I agree some such heat flow is possible, but it is difficult. Goody and Walker were wrong if they felt this could explain high Venusian surface temperatures.

The foundation for my certainty on this lies in the fundamental laws of thermodynamics, which I’ll start by reviewing in the context of the general problem of heat flow in planetary atmospheres (and the “Very Tall Room Full of Gas”). Note that these laws are very general and based in the properties of energy and the statistics of large numbers of particles, and have been found applicable in systems ranging from the interior of stars to chemical solutions and semiconductor devices and the like. External forces like gravitational fields are a routine factor in thermodynamic problems, as are complex intermolecular forces that pose a much thornier challenge. The laws of thermodynamics are among the most fundamental laws in physics – perhaps even more fundamental than gravitation itself.

I’m going to discuss the laws out of order, since they are of various degrees of relevance to the discussion we’ve had. The third law (defining behavior at zero temperature) is not relevant at all and won’t be discussed further.

The First Law

The first law of thermodynamics demands conservation of energy:

Energy can be neither created nor destroyed.

This means that in any isolated system the total energy embodied in the particles, their motion, their interactions, etc. must remain constant. Over time such an isolated system approaches a state of thermodynamic equilibrium where the measurable, statistically averaged properties cease changing.

In our previous discussion I interpreted Leonard’s “Very Tall Room Full of Gas” example as such a completely isolated system, with no energy entering or leaving. Therefore it should, eventually at least, approach such a state of thermodynamic equilibrium. Scienceofdoom above interpreted it as being in a condition where the top of the room was held at a given specific temperature. That condition would allow energy to enter and leave over time, but eventually the statistical properties would also stop changing, and then energy flow through that top surface would also cease, total energy would be constant, and you would again arrive at an equilibrium system (but with a different total energy from the starting point).

That would also be the case in Leonard’s original thought experiment concerning Venus if the temperature of the “totally opaque enclosure” was a uniform constant value. The underlying system would reach some point where its properties ceased changing, and then with no energy flow in or out, it would be effectively isolated from the rest of the universe, and in its own thermodynamic equilibrium. However, Leonard allows the temperature of his opaque enclosure to vary with latitude and time of day which means that strictly such a statistical constancy would not apply and the underlying atmosphere would not be completely in thermodynamic equilibrium. I’ll look at that later in discussing the restrictions imposed by the second law.

In a system like a planetary atmosphere with energy flowing through it from a nearby star (or from internal heat) and escaping into the rest of the universe, you are obviously not isolated and would not reach thermodynamic equilibrium. Rather, if a condition where averaged properties cease changing is reached, this is referred to as a steady state. Under steady state conditions the first law must still be obeyed. Since internal statistical properties are unchanging, that means the system must not be gaining or losing any internal energy. So in steady state you have a balance between incoming and outgoing energy from the system, enforced by the first law of thermodynamics.

If such an atmospheric system is NOT in steady state, if there is, say, more energy coming in than leaving, then the total energy embodied in the particles of the system will increase. That higher average energy per particle can be measured as an increase in temperature – but that gets us to the definition of temperature.

The Zeroth Law

The zeroth law essentially defines temperature:

If two thermodynamic systems are each in thermal equilibrium with a third, then they are in thermal equilibrium with each other.

Here thermal equilibrium means that when the systems are brought into physical proximity so that they may exchange heat, no heat is actually exchanged. A typical example of the zeroth law is to make the “third system” a thermometer, something that you can use to read out a measurement of its internal energy level. Any number of systems can act as a thermometer: the volume of mercury liquid in an evacuated bulb, the resistance of a strip of platinum, or the pressure of a fixed volume of helium gas, for example.

If you divide a system “A” in thermodynamic equilibrium into two pieces, “A1” and “A2”, and then bring those two into physical proximity again, no heat should flow between them – none was flowing before they were separated, since neither one’s statistical properties were changing. That is, any two subsystems of a system in thermodynamic equilibrium must be in thermal equilibrium with each other. So if you place a thermometer to measure the temperature of subsystem “A1”, and find a temperature “T” for thermal equilibrium of the thermometer with “A1”, then subsystem “A2” will also be in thermal equilibrium with the thermometer at “T”, i.e. its temperature will also read out as the same value.

That is, the temperature of a system in thermodynamic equilibrium is the same as the temperature of every (macroscopic) subsystem – temperature is constant throughout. The zeroth law implies temperature must be a uniform property of such equilibrium systems.

This means that in both “Very Large Room” examples and for my version of Leonard’s original thought experiment for Venus (with a uniform enclosing temperature), the thermodynamic equilibrium that the atmosphere must eventually reach must have a constant and uniform temperature throughout the system. Temperature in the room or in the pseudo-Venus’ atmosphere would be independent of altitude – an isothermal, not adiabatic, temperature profile.

The zeroth law can actually be derived from the first and second laws – this is done, for example, in Statistical Physics, 3rd Edition, Part 1, by Landau and Lifshitz (Vol. 5) (Pergamon Press, 1980) – Chapter II, Thermodynamic Quantities, Section 9, “Temperature” – and again the conclusion is the same:

Thus, if a system is in a state of thermodynamic equilibrium, the [absolute temperature] is the same for every part of it, i.e. is constant throughout the system.

The Second Law

The first and zeroth laws tell us what happens in the cases where the atmosphere can be characterized as in thermodynamic equilibrium, i.e. actually or effectively isolated from the rest of the universe after sufficient period of time that quantities cease changing. Under those conditions it must have a uniform temperature. But what about Leonard’s actual Venus thought experiment, where there are constant fluxes of energy in and out due to latitudinal and time-of-day variations in the temperature of the opaque enclosure? What can we say about the temperatures in the atmosphere below given heating from above under those conditions? Here the second law provides the primary constraint, and in particular the Clausius formulation:

Heat cannot of itself pass from a colder to a hotter body.

A planetary atmosphere is not driven by machines that move the air around, there are no giant fans pushing the air from one place to another. There is no incoming chemical or electrical form of non-thermal energy that can force things to happen. The driving force is the flux of incoming energy from the local star that brings heat when it is absorbed. All atmospheric processes are driven by the resulting temperature differences. Thanks to the first law of thermodynamics each incoming chunk of energy can be accounted for as it is successively absorbed, reflected, re-emitted and so forth until it finally leaves again as thermal radiation to the rest of the universe. In each of these steps the energy is spontaneously exchanged from a portion of the atmosphere at one temperature to another portion at another temperature.

What the second law tells us, particularly in the above Clausius form, is that the net spontaneous energy exchange describing the flow of each chunk of incoming energy to the atmosphere MUST ALWAYS BE IN THE DIRECTION OF DECREASING TEMPERATURE. Heat flows “downhill” from high to low temperature regions. The incoming energy starts from the star – very high temperature. If it’s absorbed it’s somewhere in the atmosphere or the planetary surface, and from that point it must go through successively colder and colder portions of the system before it can escape to space (where the temperature is 2.7 K).

There can be no net flow of energy from colder to hotter regions. And that means, if the atmosphere below Leonard’s “opaque enclosure” is at a higher temperature than any point on the enclosure, heat must be flowing out of the atmosphere, not inward. The enclosure, no matter the distribution of temperatures on its surface, cannot drive a temperature below it that is any higher than the highest temperature on the enclosure itself.

So even in the non-equilibrium case represented by Leonard’s original thought experiment, while the atmosphere’s temperature will not be everywhere the same, it will nowhere be any hotter than the highest temperature of the enclosure, after sufficient time has passed for such statistical properties to stop changing.

The thermodynamic laws are the fundamental governing laws regarding temperature, heat, and energy in the universe. It would be extraordinary if they were violated in such simple systems as these gases under gravitation that we have been discussing. Note in particular that any violation of the second law of thermodynamics allows for the creation of a “perpetual motion machine”, a device legions of amateurs have attempted to create with nothing but failure to show for it. Both the first and second laws seem to be very strictly enforced in our universe.

Approach to Equilibrium

The above results on temperatures apply under equilibrium or steady state conditions, i.e. after the “measurable, statistically averaged properties cease changing.” That may perhaps take a long time – how long should we expect?

The heat content of a gas is given by the product of the heat capacity and temperature. For the Venus case we’re starting at 740 K near the surface and, under either of the “thought experiment” cases, dropping to something like 240 K in the end, about 500 degrees. Surface pressure on Venus is 93 Earth atmospheres, so in every square meter we have a mass of close to 1 million kg of atmosphere above it. [Quick calculation: 1 Earth atmosphere = 101 kPa, or 10,300 kg of atmosphere per square meter, or 15 pounds per square inch. On Venus it’s 1400 pounds/sq inch.] The atmosphere of Venus is almost entirely carbon dioxide, which has a heat capacity of close to 1 kJ/kgK (see this reference). That means the heat capacity of the column of Venus’ atmosphere over 1 square meter is about one billion (10⁹) J/K.

So a temperature change of 500 K amounts to 500 billion joules = 500 GJ for each square meter of the planetary surface. This is the energy we need to flow out of the system in order for it to move from present conditions to the isothermal conditions that would eventually apply under Leonard’s thought experiment.

Now from scienceofdoom’s previous post we expect at least an initial heat flow rate out of the atmosphere of 158 W/m² (that’s the outgoing flow that balances incoming absorption on Venus – since we’ve lost incoming absorption to the opaque shell, this ought to be roughly the initial net flow rate). Dividing this into 500 GJ/m² gives a first-cut time estimate for the cooling: 3.2 billion seconds, or about 100 years. So the cool-down to isothermal would be hardly immediate, but still pretty short on the scale of planetary change.
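The arithmetic is easy to check (taking Venus surface gravity as roughly 8.9 m/s² to convert surface pressure into column mass):

```python
p_surface = 93 * 101e3      # Pa, 93 Earth atmospheres
g_venus = 8.87              # m/s^2, Venus surface gravity (approximate)
cp_co2 = 1000.0             # J/(kg K), heat capacity of CO2 (approximate)

column_mass = p_surface / g_venus              # ~1e6 kg per square meter
energy_per_m2 = column_mass * cp_co2 * 500.0   # ~5e11 J/m^2 = 500 GJ/m^2

flux = 158.0                                   # W/m^2, initial net outflow
cooling_years = energy_per_m2 / flux / 3.15e7  # ~100 years
```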

Now we shouldn’t expect that 158 W/m² figure to hold forever. There are four primary mechanisms for heat flow in a planetary atmosphere: conduction (the diffusion of heat through molecular movements), convection (movement of larger parcels of air), latent heat flow (movement of materials within air parcels that change phases – from liquid to gas and back, for example, for water) and thermal radiation. The heat flow rate for conduction is simply proportional to the gradient in temperature. The heat flow rate for radiation is similar except for the region of the atmospheric “window” (some heat leaves directly to space according to the Planck function for that spectral region at that temperature). Latent heat flow is not a factor in Venus’ present atmosphere, though it would come into play if the lower atmosphere cooled below the point where CO2 liquefies at those pressures.

For convection, however, average heat flow rates are a much more complex function of the temperature gradient. Getting parcels of gas to displace one another requires some sort of cycle where some areas go up and some down, a breaking of the planet’s symmetry. On Earth the large-scale convective flows are described by the Hadley cells in the tropics and other large-scale cells at higher latitudes, which circulate air from sea level to many kilometers in altitude. On a smaller scale, where the ground becomes particularly warm, temperature gradients exceeding the adiabatic lapse rate may occur, resulting in “thermals” – local convective cells extending up to possibly several hundred meters. If the temperature difference between high and low altitudes is too low, the convective instability vanishes and heat flow through convection becomes much weaker.

So as temperatures come closer to isothermal in an atmosphere like Venus’, except for the atmospheric “window” for radiative heat flow, we would expect all the heat flow mechanisms to decrease, and convection in particular to almost cease after the temperature difference gets too small. So we might expect the cool-down to isothermal conditions to slow down and end up much longer than this 100-year estimate. How long?

Another of the thought experiment versions discussed in the previous thread involved removing radiation altogether; with both radiation and convection gone, that leaves only conduction as a mechanism for heat flow through the atmosphere. For an ideal gas the thermal conductivity increases as the 2/3 power of the density (it’s proportional to density times mean free path) and the square root of temperature (mean particle speed). While CO2 is not really ideal at 93 atmospheres and 740 K, using this rule gives us a rough idea of what to expect – at 1 atmosphere and 273 K we have a value of 14.65 mW/(m·K), so at 93 atmospheres and 740 K it should be about 500 mW/(m·K). For a temperature gradient of 10 K/km that gives a heat flux of 0.005 W/m². 500 GJ would then take about 10¹⁴ seconds, or 3 million years.
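This conduction-only estimate is also easy to reproduce, using the conductivity and gradient quoted in the text:

```python
k_conduction = 0.5          # W/(m K), rough CO2 conductivity at Venus surface conditions
gradient = 10.0 / 1000.0    # K/m, a 10 K/km temperature gradient

conductive_flux = k_conduction * gradient    # 0.005 W/m^2 - feeble
energy_per_m2 = 5.0e11                       # J/m^2, the 500 GJ from the text
cooling_seconds = energy_per_m2 / conductive_flux   # 1e14 s
cooling_years = cooling_seconds / 3.15e7            # ~3 million years
```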

So the approach to an isothermal equilibrium state for these atmospheres would take between a few hundred and a few million years, depending on the conditions you impose on the system. Still, the planets are billions of years old, so if heating from above was really the mechanism at work on Venus we should see the evidence of it in the form of cooler surface temperatures there by now, even if radiative heat flow were not a factor at all.

The View From a Molecule

Leonard in our previous discussion raised the point that an individual molecule sees the gravitational field, causing it to accelerate downwards. So molecular velocities lower down should be higher than velocities higher up, and that means higher temperatures.

Leonard’s picture is true of the behavior of a molecule in between collisions with the other molecules. But if the gas is reasonably dense, the “mean free path” (the average distance between collisions) becomes quite short. At 1 atmosphere and room temperature the mean free path of a typical gas is about 100 nanometers. So there’s very little distance to accelerate before a molecule would collide with another; to consider the full effect you need to look at the effect of collisions due to gas pressure along with the acceleration by gravity.
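The “about 100 nanometers” figure follows from the kinetic-theory estimate λ = kT/(√2·π·d²·p); a quick sketch, where the effective molecular diameter is an approximate value:

```python
import math

k_B = 1.381e-23        # J/K, Boltzmann constant
T = 293.0              # K, room temperature
p = 101325.0           # Pa, 1 atmosphere
d = 3.7e-10            # m, approximate effective diameter of an air molecule

# Kinetic theory mean free path: lambda = kT / (sqrt(2) * pi * d^2 * p)
mean_free_path = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)
# Tens of nanometers - the same order as the ~100 nm quoted in the text.
```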

An individual molecule in a system in thermodynamic equilibrium at temperature T has available a collection of states in phase space (position, momentum and any internal variables) each with some energy E. In the case of our molecule in a gravitational field, that energy consists of the kinetic energy ½mv² (m = mass, v = velocity) plus the gravitational potential energy = gmz (where z = height above ground). The Boltzmann distribution applies in equilibrium, so that the probability of the molecule being in a state with energy E is proportional to:

e^(−E/kT) = e^(−(½mv² + mgz)/kT).

So the Boltzmann distribution in this case specifies both the distribution of velocities (the standard Maxwell-Boltzmann distribution) and also an exponential decrease in gas density (and pressure) with height. It is very unlikely for a molecule to be at a high altitude, just as it is very unlikely for a molecule to have a high velocity. The high energy associated with rare high velocities comes from occasional random collisions building up that high speed. Similarly the high energy associated with high altitude comes from random collisions occasionally pushing a molecule to great heights. These statistically rare occurrences are both equally captured by the Boltzmann distribution. Note also that since the temperature is uniform in equilibrium, the distribution of velocities at any given altitude is that same Maxwell-Boltzmann distribution at that temperature.
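A quick sketch of what the mgz term implies: in an isothermal equilibrium the density falls off with an e-folding ("scale") height H = kT/(mg). The Venus-like numbers below (CO2 molecular mass, 8.87 m/s² gravity, 740 K) are illustrative assumptions, not values taken from a measurement:

```python
import math

# Isothermal scale height implied by the Boltzmann factor e^(-mgz/kT).
k = 1.380649e-23       # Boltzmann constant, J/K
m = 44.0 * 1.6605e-27  # mass of a CO2 molecule, kg
g = 8.87               # Venus surface gravity, m/s²
T = 740.0              # assumed uniform temperature, K

H = k * T / (m * g)    # e-folding height of density and pressure, m
print(H / 1000)        # ≈ 16 km

# Equilibrium density ratio between the surface and 50 km altitude:
ratio = math.exp(-50_000 / H)
print(ratio)           # ≈ 0.04
```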

Force Balance

The decrease in pressure with height produces a pressure-gradient force that acts on “parcels of gas” in the same way that the gravitational force does, but in the opposite direction. At equilibrium or steady-state, when statistical properties of the gas cease changing, the two forces must balance.

That leads to the equation of hydrostatic balance equating the pressure gradient force to the gravitational force:

dp/dz = – mng

(here p is pressure and n is the number density of molecules – N/V for N molecules in volume V). In equilibrium n(z) is given by the Boltzmann distribution:

n(z) = c·e^(−mgz/kT);

for the ideal gas pressure is given by p = nkT, so the hydrostatic balance equation becomes:

dp/dz = kT·dn/dz = kT·c·(−mg/kT)·e^(−mgz/kT) = −mg·c·e^(−mgz/kT) = −m·n(z)·g

I.e. the Boltzmann distribution for this ideal gas system automatically ensures the system is in hydrostatic equilibrium.
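The same balance can be checked numerically with a finite difference. The constants below (Venus-like g and T, an arbitrary surface number density c) are illustrative assumptions:

```python
import math

# Numerical check that the Boltzmann profile n(z) = c·e^(-mgz/kT) satisfies
# hydrostatic balance dp/dz = -m·n·g, given the ideal gas law p = n·k·T.
k, T = 1.380649e-23, 740.0   # Boltzmann constant (J/K), temperature (K)
m, g = 7.3e-26, 8.87         # CO2 molecule mass (kg), Venus gravity (m/s²)
c = 1.0e27                   # arbitrary surface number density, 1/m³

def n(z):
    return c * math.exp(-m * g * z / (k * T))

def p(z):
    return n(z) * k * T      # ideal gas law

z, dz = 10_000.0, 1.0
dpdz = (p(z + dz) - p(z - dz)) / (2 * dz)   # centred finite difference
print(dpdz, -m * n(z) * g)                  # the two sides agree closely
```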

Another approach to this sort of analysis is to look at the detailed flow of molecules near an imaginary boundary. This is done in textbook calculations of the thermal conductivity of an ideal gas, for example, where a gradient in temperature results in net flow of energy (necessarily from hotter to colder). In our system with gravitational force and pressure gradients both must be taken into account in such a calculation. Such calculations are somewhat complex and depend on assumptions about molecular size and neglecting other interactions that would make the gas non-ideal, but the net effect must always satisfy the same thermodynamic laws as every other such system: in thermodynamic equilibrium temperature is uniform and there is no net energy flow through any imagined boundary.

In conclusion, after sufficient time that statistical properties cease changing, all these examples of a system with a Venus-like atmosphere must reach essentially the same isothermal or near-isothermal state. The gravitational field and adiabatic lapse rate cannot explain the high surface temperature on Venus if incoming solar radiation does not reach (at least very close to) the surface.

_____________________________________________________________________

By Leonard Weinstein

Solar Heating and Outgoing Radiation Balances for Earth and Venus

The basic heating mechanism for any planetary atmosphere depends on the balance and distribution of absorbed solar energy and outgoing radiated thermal energy. For a planet like Earth, the presence of a large amount of surface water and an atmosphere relatively transparent to sunlight dominates where and how the input solar energy and outgoing thermal energy create the surface and atmospheric temperatures. The unequal energy flux for day and night, and for different latitudes, combined with the planet’s rotation, results in the winds and ocean currents that move the energy around.

Earth is a much more complex system than Venus for these reasons, and also because of biological processes and the changing albedo from variations in clouds and surface ice. The average energy balance for the Earth was previously shown by Science of Doom, but is shown in Figure 1 below for quick reference:

 

From Kiehl & Trenberth (1997)


 

The majority of solar energy absorbed by the Earth directly heats the land and water. Some of this energy goes into evaporating water; atmospheric convection carries the vapor to higher altitudes, where the energy is released by phase change (condensation).

In addition, convective heat transfer from the ground and oceans transfers energy to the atmosphere. It is the basic atmospheric temperature differences between day and night and between latitudes that create the pressure differences driving the wind patterns that eventually mix and transport the atmosphere, but the buoyancy of heated air from higher-temperature surface areas also aids in the vertical mixing.

This energy is carried by convection up into the higher levels of the atmosphere and eventually radiates to space. The combination of convected water vapor and surface-heated air dominates the total energy transported from ground level. In addition, some of the ground-level thermal energy is radiated upward, with a portion of the thermal radiation passing directly from the ground to space. Water vapor, CO2, clouds, aerosols, and other greenhouse gases absorb some of this radiated energy; their back radiation partially offsets the upward radiation, reducing the net radiation flux. The result is a warmer atmosphere and ground than there would be without these absorbing materials.

Venus, however, is dominated by direct absorption of solar energy into the atmosphere (including clouds) rather than by the surface, so it has a significantly different path to heating the atmosphere and ground. Venus has a very dense atmosphere (about 93 times the mass of Earth’s atmosphere), which extends to about 90 km altitude before the tropopause is reached. This is much higher than the Earth’s atmosphere.

Very dense clouds, composed mostly of sulfuric acid, reach to about 75 km, and cover the planet. The clouds have virga beneath them due to the very high temperatures at lower elevations. The clouds and thick haze occupy over half of the main troposphere height, with a fairly clear layer starting below about 30 km altitude. Due to the very high density of the atmosphere, dust and other aerosol particles (from the surface winds and possibly from volcanoes) also persist in significant quantity.

The atmosphere is 96.5% CO2, and contains significant quantities of SO2 (150 ppm), and even some H2O (20 ppm), and CO (17 ppm). These, along with the sulfuric acid clouds, dust, and other aerosols, absorb most of the incoming sunlight that is not reflected away, and also absorb essentially all of the outgoing long wave radiation and relay it to the upper atmosphere and clouds to eventually dump it into space.

A sketch is shown in Figure 2, similar to the one used for Earth, which shows the approximate energy transfer in and out of the Venus atmosphere system. It is likely that almost all of the radiation out leaves from the top of the clouds and a short distance above, which therefore locks in the level of the atmospheric temperature at that location.

The surface radiation balance shown is my guess of a reasonable level. The top portion of the clouds reflects about 75% of incident sunlight, so that Venus absorbs an average of about 163 W/m², which is significantly less than the amount absorbed by Earth. About 50% of the available sunlight is absorbed in the upper cloud layer, and about 40% of the available sunlight is captured on the way down in the lower clouds, gases, and aerosols.

Thus the average solar flux that reaches the surface is only about 17 W/m², and the amount absorbed is somewhat less, since some is reflected. The question naturally arises as to what is the source of wind, weather, and temperature distribution on Venus, and why Venus is so hot at lower altitudes.

Venus takes 243 days to rotate. However, the continual high winds in the upper atmosphere take only about 4 days to circle the planet at the equator, so the day/night temperature variation is even smaller than it would otherwise be. Other circulation cells at different latitudes (Hadley cells) and some unique polar-collar circulation patterns complete the main convective wind patterns.

The solar energy absorbed by the surface is a far smaller factor than for Earth, and I am convinced it is not necessary for the basic atmospheric and ground temperature conditions on Venus. The effect of absorbed solar radiation on a planetary atmosphere is to locally change the atmospheric pressure; the resulting pressure differences drive the winds (and ocean currents if applicable), and these flows transport energy from one location and altitude to another. There is no specific reason the absorption and release of energy has to be from the ground to the atmosphere unless the vertical mixing from buoyancy is critical. I contend that direct absorption of solar energy into the atmosphere can accomplish the mixing, and that this, along with the fact that the radiation leaves from the top of the clouds and a short distance above, is in fact the cause of heating for Venus.

We observed that, unlike Earth, where about 72% of the absorbed solar energy heats the surface, Venus has 10% or less absorbed by the ground. Also, the surface temperature of Venus (about 735 K) corresponds to a radiation level from the ground of about 16,600 W/m².

Since back radiation can’t exceed radiation up if the ground is as warm or warmer than the atmosphere above it, the only thing that can make the ground any warmer than the atmosphere above it is the ~17 W/m² (average) from solar radiation. The ground absorbed solar radiation plus absorbed back radiation has to equal the radiation out for constant temperature. If the absorbed solar radiation were all used to heat the ground, and the net radiation heat transfer was near zero (the most extreme case of greenhouse gas blocking possible), the average temperature of the ground would only be about 0.19 K warmer than the atmosphere above it, and the excess heat would need to be removed by surface convective driven heat transfer. The buoyancy would be extremely small, and contribute little to atmospheric mixing.
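Both numbers follow from the Stefan–Boltzmann law; the linearized excess dT ≈ F/(4σT³) is a standard approximation I am using here to reproduce the 0.19 K figure:

```python
# Stefan–Boltzmann arithmetic for the Venus surface numbers quoted above.
sigma = 5.670e-8   # Stefan–Boltzmann constant, W/(m²·K⁴)
T = 735.0          # Venus surface temperature, K

E = sigma * T**4   # blackbody emission from the ground
print(E)           # ≈ 16,500 W/m² (the "about 16,600" above)

# Linearizing: absorbing an extra flux F raises the temperature needed
# for balance by roughly dT = F / (4·σ·T³).
F = 17.0           # average absorbed solar flux at the surface, W/m²
dT = F / (4 * sigma * T**3)
print(dT)          # ≈ 0.19 K
```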

However, the net radiation heat transfer out of the ground is almost surely equal to or larger than the average solar heating at the ground, from some limited transmission windows in the gas, and through a small net radiation flux. The most likely effect is that the ground is equal to or a small amount cooler than the lower atmosphere, and there is probably no buoyancy driven mixing. This condition would actually require some convective driven heat transfer from the atmosphere to the ground to maintain the ground temperature. Since the measured lower atmosphere and ground temperature are on the dry adiabatic lapse rate curve projected down from the temperature at the top of the cloud layer, the net ground radiation flux is probably close to the value of 17 W/m². This indicates that direct solar heating of the ground is almost certainly not a source for producing the winds and temperature found on Venus. The question still remains: what does cause the winds and high temperatures found on Venus?

The main point I am trying to make in this discussion is that the introduction of solar energy into the relatively cool upper atmosphere of Venus, along with the high altitude location of outgoing radiation, are sufficient to heat the lower atmosphere and surface to a much higher temperature even if no solar energy directly reaches the surface. Two simplified models are discussed in the following sections to support the plausibility of that claim. This issue is important because it relates to the mechanism causing greenhouse gas atmospheric warming, and the effect of changing amounts of the greenhouse gases.

The Tall Room

The first model is an enclosed room on Venus that is 1 km × 1 km × 100 km tall. This was selected to show how adiabatic compression can cause a high temperature at the bottom of the room with a far lower input temperature at the top. This is the type of effect that dominates the heating on Venus. While the first part of the discussion is centered on the room model, the analysis is also applicable to parts of the second model, which examines a special simplified approximation of the full dynamics on Venus.

The conditions for the tall room model are:
1) A gas is introduced at the top of a perfectly thermally insulated fully enclosed room 1 km x 1 km x 100 km tall, located on the surface of Venus. The walls (and bottom and top) are assumed to have negligible heat capacity. The walls and bottom and top are also assumed to be perfect mirrors, so they do not absorb or emit radiation.

2) The supply gas temperature is selected to be 250 K. The gas pours in to fill all of the volume of the room. Sufficient quantity of gas is introduced so that the final pressure at the top of the room is at 0.1 bar at the end of inflow. The entry hole is sealed immediately after introduction of the gas is complete.

3) The gas is a monatomic gas such as argon, so that it does not absorb or emit thermal radiation. This makes the problem radiation-independent. I also put in a qualifier, to more nearly approximate the actual atmosphere: the gas has a Cp like that of CO2 at the surface temperature of Venus [i.e., Cp = 1.14 kJ/(kg·K) for CO2 at 735 K]. Cp is also taken to be temperature-independent.

The room height selected would actually result in a hotter ground level than the actual case of Venus. This was due to the choice of a room 100 km tall. The height to 0.1 bar for Venus is only about 65 km, which would give a better temperature match, but the difference is not important to the discussion. A dry adiabatic lapse rate forms as the gas is introduced due to the adiabatic compression of the gas at the lower level. The value of the lapse rate for the present example comes from a NASA derivation at the Planetary Atmospheres Data Node:
http://atmos.nmsu.edu/education_and_outreach/encyclopedia/adiabatic_lapse_rate.htm
The final result for the dry adiabatic lapse rate is:

Γp = -dT/dz|a = g/Cp (1)

In the room at Venus, this results in:

TH = Ttop + H · 8.9/1.14 (2)

where H is the distance down from the top in km (the lapse rate g/Cp works out to about 7.8 K/km).

Ttop remains at 250 K, since the gas at the top is not compressed (not because it is forced to that temperature by heat transfer), and Tbottom = 1,031 K due to the adiabatic compression.
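A quick check of the tall-room numbers, applying Γ = g/Cp from equation (1) over the full 100 km:

```python
# Dry adiabatic lapse rate for the tall-room model, with the values
# given in the text (Venus gravity, Cp of CO2 at 735 K).
g = 8.9         # Venus gravity, m/s²
Cp = 1140.0     # specific heat, J/(kg·K)
H = 100_000.0   # room height, m

lapse = g / Cp  # K/m
T_top = 250.0
T_bottom = T_top + lapse * H

print(lapse * 1000)  # ≈ 7.8 K/km
print(T_bottom)      # ≈ 1031 K, as stated
```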

Two questions arise:
1) Is this dry adiabatic lapse rate what would actually develop initially after all the gas is introduced?
2) What would happen when the system comes to final equilibrium (however long it takes)?

The gas coming in would initially spread down to the bottom due to a combination of thermal motion and gravity, but the potential energy converted by falling through the room height would add considerable downward velocity, which collisions would then convert to thermal velocity. Once enough gas filled the room to limit the mean free path, added gas would tend to stay near the top until additional gas piled on top pushed it downwards, increasingly compressing the gas below with its added mass. The adiabatic compression of the gas below the incoming gas at the top would heat the gas at the bottom to 1,031 K for the selected model. The top temperature would remain at 250 K, and the temperature profile would vary as shown by equation (2). Thus the answer to 1) is yes.

Strong convection currents may or may not be set up in the room, depending on how fast the gas is introduced. To simplify the model, I assume the gas flows in slowly enough that such currents are not important. It is quite clear that the temperature profile at the end of inflow would be the adiabatic lapse rate, with the top at 250 K and the bottom at 1,031 K. If the final lapse rate went toward zero from thermal conduction, as Arthur postulated, even he admits it would take a very long time. The question now arises: what would cause heat conduction to occur in the presence of an adiabatic lapse rate? I.e., why would an initial adiabatic lapse rate tend toward an isothermal profile if there is no radiation forcing (note: this lack of radiation forcing is assumed for the room model only)? The cause proposed by Arthur and Science of Doom is based on their understanding of the Zeroth Law of Thermodynamics. They say that if there is a finite lapse rate (i.e., a temperature gradient), there has to be conductive heat transfer. This arises from not considering the difference between temperature and heat. This is discussed in:
http://zonalandeducation.com/mstm/physics/mechanics/energy/heatAndTemperature/heatAndTemperature.html (difference between temperature and heat)

When we consider if there will be heat conduction in the atmosphere, we need to look at potential temperature rather than temperature. This is discussed at: http://en.wikipedia.org/wiki/Potential_temperature
The potential temperature is shown to be:

θ = T (p₀/p)^(R/Cp) (3)

where p₀ is a reference pressure and R is the specific gas constant.

The expression in (3) is general, and thus valid for Venus, if the appropriate pressures are used.
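A sketch of the potential temperature θ = T·(p₀/p)^(R/Cp), with a check that θ is unchanged by adiabatic compression (which is exactly why it is the right variable for judging conduction). The gas constants below are CO2-like values assumed for illustration, and p₀ is taken as a 92-bar surface reference pressure:

```python
# Potential temperature, and invariance of θ along a dry adiabat.
R = 189.0    # specific gas constant for CO2, J/(kg·K) (assumed)
Cp = 1140.0  # specific heat, J/(kg·K) (assumed)
p0 = 92e5    # reference (surface) pressure, Pa (assumed)

def theta(T, p):
    return T * (p0 / p) ** (R / Cp)

# A parcel adiabatically compressed from 0.1 bar down to the 92-bar surface:
p1, T1 = 0.1e5, 250.0
p2 = 92e5
T2 = T1 * (p2 / p1) ** (R / Cp)     # Poisson's relation for an adiabat

print(T2)                           # the parcel warms substantially
print(theta(T1, p1), theta(T2, p2)) # θ is identical before and after
```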

A good discussion why the potential temperature is appropriate to use rather than the local temperature can be found at:
https://courseware.e-education.psu.edu/simsphere/workbook/ch05.html

This includes the following statements:

  • “if we return to the classic conduction laws and our discussion of resistances, we note that heat should be conducted down a temperature gradient. Since we are talking about sensible heat, the appropriate gradient for the conduction of sensible heat is not the temperature but the potential temperature. The potential temperature is simply a temperature normalized for adiabatic compression or expansion”
  • “When the environmental lapse rate is exactly dry adiabatic, there is zero variation of potential temperature with height and we say that the atmosphere is in neutral equilibrium. Under these conditions, a parcel of air forced upwards (downwards) will stay where it is once moved, and not tend to sink or rise after released because it will have cooled (warmed) at exactly the same rate as the environment”.

The above material supports the claim that there would be no movement from a dry adiabatic lapse rate toward an isothermal gas in the room model.

If an initial condition were imposed in which the lapse rate was below the dry adiabatic lapse rate, it is true that the gas would be very stable against convective mixing due to buoyancy, and the very slow thermal conduction that would drive the temperature back toward the dry adiabatic lapse rate could take a very long time (in the actual case it would be much faster, due to the small natural convection currents generally present). However, there is no reason for any lapse rate other than the dry adiabatic lapse rate to form initially, as the problem was posed, so that issue is not relevant here.

The final result of the room model is the fact that a very high ground temperature was produced from a relative cool supply gas due to adiabatic compression of the supply that was introduced at a high altitude. This is actually a consequence of gravitational potential energy being converted to kinetic energy. Once the dry adiabatic lapse rate formed, any small flow up or down stays in temperature equilibrium at all heights, so this is a totally stable situation, and would not tend toward an isothermal situation.

If there were present a sufficient quantity of gas in the present defined room that radiated and absorbed in the temperature range of the model, the temperature would tend toward isothermal, but that was not how the tall room example was defined.

Effect of an Optical Barrier to Sunlight reaching the Ground

The second model I discussed with Science of Doom, Arthur, and others relates to my suggestion that if an optical barrier prevented any solar energy from reaching the ground, but the energy was instead absorbed in (and heated) a thin layer just below the average location of effective outgoing radiation from Venus, in such a way that the heat was transmitted downward through the atmosphere, this could also produce the hot lower atmosphere and surface that is actually observed. The albedo and the solar heating input (varying day/night and with latitude) were selected to match the actual values for Venus, and the radiation out was also selected to match the actual planet.

This problem is much more complicated than the enclosed room case for two reasons.

The first is that it is a dynamic and non-uniform case. The second is due to the fact that the actual atmosphere is used, with radiation absorption and emission, and the presence of clouds. The lower atmosphere and surface of the planet Venus are much hotter than for the Earth, even though the planet does not absorb as much solar energy as the Earth. It is closer to the Sun than Earth, but has a much higher albedo due to a high dense cloud layer composed mostly of sulfuric acid drops. The discussion will not attempt to examine the historical circumstances that led up to the present conditions on Venus, but only look at the effect of the actual present conditions.

In order to examine this model, I had postulated that if all of the solar heating was absorbed near the top of Venus’s atmosphere, but with the day and night and latitude variation, the heat transfer to the atmosphere downward would eventually be mixed through the atmosphere and maintain the adiabatic lapse rate, with the upper atmosphere held to the same temperature as at the present. Since the atmosphere has gases, clouds, and aerosols that absorb and radiate most if not all of the thermal radiation, this is a different problem from the tall room.

However, it appears that the radiation flux levels are relatively small, especially at lower levels, so the issue hinges on the relative amount of forcing by convective flow compared to net radiation up (which does tend to reduce the lapse rate). I use the actual temperature profile as a starting point to see if the model is able to maintain the values. Different starting profiles would complicate the discussion, but if the selected initial profile can be maintained, it is likely all reasonable initial profiles would tend to the same long-term final levels.

The assumption that the solar heating and radiation out all occur in a layer at the top of the atmosphere eliminates positive buoyancy as a mechanism to initially couple the solar energy to the atmosphere. However, the direct thermal heat-transfer with different amounts of heating and cooling at different locations causes some local expansion and contraction of different locations of the top of the atmosphere, and this causes some pressure driven convection to form.

The pressure differences from expansion and contraction set a flow in motion that greatly increases surface heat transfer, and this flow would become turbulent at reasonable induced flow speeds. This increases heat transfer and mixing to larger depths. The portions cooler than the local average atmosphere will be denser than local adiabatic lapse rate values from the average atmosphere, and thus negative buoyancy would cause downward flow at these locations.

As the flow moves downward, it compresses but initially remains slightly cooler than its surroundings, with some mixing and diffusion spreading the cooling to ever-larger volumes. At some level the downward flow actually passes into a surrounding volume that is slightly cooler, rather than warmer, than the flow itself, due to the small but finite radiation flux removing energy from the surroundings. At this point the downward flow is warming the surroundings. The question arises: how much energy is carried by the convection, and could it easily replace radiated energy, so as to maintain a profile near the dry adiabatic lapse rate?

A few numbers need to be shown here to best get an order of magnitude of what is going on. Arthur has already made some of these calculations. The average input energy of 158 W/m² applied to Venus’s atmosphere would take about 100 years to change the average atmospheric temperature by 500 K. This means that to change it even 0.1 K (on average) would take about 7 days. Since the upper atmosphere at low latitudes only takes about 4 days to completely circulate around Venus, the temperature variations from average values would be fairly small if the entire atmosphere were mixed.
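The heat-capacity arithmetic behind these timescales can be sketched as follows. The 92-bar surface pressure, 8.87 m/s² gravity, and Cp are round numbers I am assuming, so the results are order-of-magnitude only (they come out near 120 years and 9 days, consistent with the "about 100 years" and "about 7 days" quoted above):

```python
# Order-of-magnitude timescales for heating the whole Venus atmosphere.
p_surface = 92e5   # surface pressure, Pa (assumed)
g = 8.87           # gravity, m/s²
Cp = 1140.0        # specific heat, J/(kg·K)
F = 158.0          # average absorbed solar flux, W/m²

mass_column = p_surface / g   # kg of atmosphere above each m² (hydrostatic)
heat_cap = mass_column * Cp   # column heat capacity, J/(m²·K)

years_for_500K = heat_cap * 500.0 / F / (365.25 * 24 * 3600)
days_for_0p1K = heat_cap * 0.1 / F / (24 * 3600)

print(years_for_500K)  # ~120 years for a 500 K change
print(days_for_0p1K)   # ~9 days for a 0.1 K change
```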

However, for the model proposed, only a very thin layer would be heated and cooled under the absorbing layer. Differences in net radiation flux would also help transfer energy up and down some. This relatively thin layer would thus have much higher temperature variation than the average atmosphere mass would (but still only a few degrees). The pressure variations due to the upper level temperature variations would cause some flow circulation and vertical mixing to occur throughout the atmosphere. The circulating flows may or may not carry enough energy to overcome radiation flux levels to maintain the dry adiabatic lapse rate. Let us look at the near surface flow, and a level of radiation flux of 17 W/m².

What excess temperature and vertical convection speed are needed to carry enough energy to balance that flux level? Assume a temperature excess of only 0.026 K is carried and mixed by convection due to atmospheric circulation, and that the local horizontal flow rate near the ground is 1 m/s. If a vertical mixing speed of only 0.01 m/s were available, the heat added would be about 17 W/m², which would balance the energy lost by radiation and thus allow the dry adiabatic lapse rate to be maintained.
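A sketch of that estimate, using F ≈ ρ·Cp·w·ΔT with a near-surface density from the ideal gas law. All inputs are either from the text or round numbers I am assuming; the result lands slightly above the 17 W/m² target, which is all the argument needs:

```python
# Convective heat flux carried by a small temperature excess and a
# small vertical mixing speed near the Venus surface.
p, T = 92e5, 735.0     # near-surface pressure (Pa) and temperature (K), assumed
R, Cp = 189.0, 1140.0  # CO2-like gas constants, J/(kg·K), assumed

rho = p / (R * T)      # ideal-gas density, ≈ 66 kg/m³
w = 0.01               # vertical mixing speed, m/s (from the text)
dT = 0.026             # temperature excess carried, K (from the text)

F = rho * Cp * w * dT
print(F)               # ≈ 20 W/m², enough to balance a 17 W/m² radiative loss
```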

This shows how little convective circulation and mixing are needed to carry solar heated atmosphere from high altitudes to lower levels to replace energy lost by radiation flux levels in the atmosphere, and maintain a near dry adiabatic lapse rate. It is the solar radiation that is supplying the input energy, and adiabatic compression that is heating the atmosphere. As long as the lapse rate is held close to the adiabatic level with sufficient convective mixing, it is the temperature at the location in the atmosphere where the effective outgoing radiation is located that sets a temperature on the adiabatic lapse rate curve and adiabatic compression determines the lower atmosphere temperature.

Since the exact details of the heat exchanges are critical to the process, this optically sealed region near the top of the atmosphere is a poorly defined model as it stands, and the question raised by Arthur and Science of Doom of whether it would actually work is not resolved, although an argument can be made that there are processes to do the mixing. However, the real atmosphere of Venus absorbs almost all of the solar energy in its upper half, and this much larger initial absorption volume clearly does the job. I have shown that the ground likely has little if any effect on the actual temperature on Venus.

Some Concluding Remarks

The initial cause for this discussion was the question of why the surface of Venus is as hot as it is. A write-up by Steve Goddard implied that the high pressure on Venus was a major factor, and even though some greenhouse gas was needed to trap thermal energy, it was the pressure that was the major contributor. The point was that an adiabatic lapse rate would be present with or without a greenhouse gas, and the major effect of the greenhouse gas was to move the location of outgoing radiation to a high altitude. The outgoing level set a temperature at that altitude, and the ground temperature was just the temperature at the outgoing radiation effective level plus the increase due to adiabatic compression to the ground. The altitude where the effective outgoing radiation occurs is a function of amount and type of greenhouse gases. Steve’s statement is almost valid. If he qualified it to state that enough greenhouse gas is still needed to limit the radiation flux, and keep the outgoing radiation altitude near the top of the atmosphere, he would have been correct. Thus both the amount of atmosphere (and thus pressure and thickness), and amount of greenhouse gases are factors. Any statement that greenhouse gases are not needed if the pressure is high enough is wrong (but this was not what Steve Goddard said).

Another issue that came up was the need for the solar energy to heat the ground in order for the hot surface of Venus to occur. I think I made reasonable arguments that this is not at all true. While there is a small amount of solar heating of the ground, the ground is probably actually slightly cooler than the atmosphere directly above it due to radiation, and so there is no buoyant mixing and no heating of the atmosphere from the ground other than the small radiated contribution. The main part of the solar energy is absorbed directly into the atmosphere and clouds, and is almost certainly the driver for the winds, the mixing, and the high ground temperature.

The final issue is what would happen if most (say 90%) of the CO2 were replaced by say Argon in Venus’s atmosphere. Three things would happen:

1) The adiabatic lapse rate would greatly increase due to the much lower Cp of Argon.

2) The height of the outgoing radiation would probably decrease, but likely not by much, due both to the presence of the clouds and to the fact that density is not linear with altitude; the altitude matching the remaining (still high) CO2 level would drop only 10 to 15 km out of the 75 to 80 km that the outgoing radiation presently comes from. If in fact it is the clouds that cause most of the outgoing radiation, there may be no drop at all in the outgoing level.

3) The radiation flux through the atmosphere would increase, but probably not nearly enough to prevent the atmospheric mixing from maintaining an adiabatic lapse rate. Keep in mind that Venus has 230,000 times as much CO2 as Earth. Even 10% of this is 23,000 times the Earth value.

The combination of these factors, especially 1), would probably result in an increase in the ground temperature on Venus.

Read Full Post »

A lot of people have asked about this. I haven’t been so interested in the subject, but because I came across it in a paper I thought I would post it.

What is the effective height of outgoing longwave radiation? I.e., the radiation from the climate system that leaves the planet.

What is the effective height of downward longwave radiation as measured at the surface?

By effective height we mean that there isn’t just one level that radiation comes from, so it is the average height.

 

From "Tropospheric Water Vapor and Climate Sensitivity" by Schneider et al (1999)


 

For reference here is a typical pressure vs height comparison (this one is calculated, using the standard hydrostatic equations and ideal gas laws):

So in the tropics the typical emission height of DLR we receive at the surface (called DLB in the paper) is just under 2km and in the mid-latitudes and poles it is around 5km.

Likewise for the OLR, the typical height is around 5km in low latitudes and 4km near the poles.

How is this calculated?

Here the effective emission level is defined as the level at which the climatological annual mean tropospheric temperature is equal to the emission temperature: (OLR/σ)^(1/4), where σ is the Stefan–Boltzmann constant.

The effective emission level for the downward longwave radiation at the ground is analogously defined as the level at which the climatological annual mean tropospheric temperature is equal to (DLB/σ)^(1/4), where DLB is the clear-sky downward longwave radiation at the bottom.
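The definition is easy to illustrate with Earth-like round numbers. The OLR value, surface temperature, and mean lapse rate below are my assumptions for illustration, not values taken from the paper:

```python
# Effective emission temperature (OLR/σ)^(1/4), and the altitude at which
# a mean lapse-rate profile reaches that temperature.
sigma = 5.670e-8   # Stefan–Boltzmann constant, W/(m²·K⁴)

OLR = 240.0        # typical outgoing longwave radiation, W/m² (assumed)
T_emit = (OLR / sigma) ** 0.25
print(T_emit)      # ≈ 255 K

T_surface = 288.0  # typical surface temperature, K (assumed)
lapse = 6.5        # mean environmental lapse rate, K/km (assumed)
z_km = (T_surface - T_emit) / lapse
print(z_km)        # ≈ 5 km, consistent with the heights quoted above
```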

Reference

Tropospheric Water Vapor and Climate Sensitivity, by Schneider, Kirtman & Lindzen, Journal of the Atmospheric Sciences, 1999

Read Full Post »
